
Frequently Asked Questions

About FluxAI's security, reliability, and model management

🔒 Privacy & Data Security

How does FluxAI guarantee that chats and documents remain private?

Legal Framework: Our Privacy Policy explicitly states that your data is never sold, copied, shared, or transferred to anyone, except when required by law enforcement.

Data Usage: We collect platform activity solely for the purpose of service operation and to improve our services.

Processing Location: Data is processed in the United States only.

⚡ Infrastructure Reliability & Stability

How can FluxAI offer competitive pricing while maintaining performance?

FluxAI uses US-based decentralized infrastructure that distributes AI workloads across high-grade GPU machines on the FluxEdge platform. This approach dramatically reduces costs compared to centralized cloud providers while maintaining high performance.

What is decentralized infrastructure and how does it work?

FluxAI's decentralized infrastructure consists of:

  • FluxEdge Network: Hundreds of distributed GPU rigs
  • Secure Environment: AI workloads are distributed across a Private Secure Network
  • Load Balancing: Intelligent routing to servers with lowest current load
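As a minimal sketch of the load-balancing idea above, "intelligent routing to servers with lowest current load" can be as simple as a least-load selection. The server names and load values here are illustrative assumptions, not FluxAI's actual topology:

```python
# Hypothetical sketch of least-load routing across a GPU pool.
# Loads are fractions of capacity in use; names are made up for illustration.
servers = {"gpu-node-a": 0.72, "gpu-node-b": 0.31, "gpu-node-c": 0.55}

def route_request(pool: dict[str, float]) -> str:
    """Pick the server with the lowest current load."""
    return min(pool, key=pool.get)

target = route_request(servers)
print(target)  # gpu-node-b
```

A real router would also weight by hardware capability and model placement; this only shows the core selection step.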

How do you ensure infrastructure stability when servers join and leave the network?

Reliability Mechanisms:

  • Health Monitoring: Continuous server status checks every minute
  • Automatic Failover: Failed servers automatically removed from load balancer groups
  • Redundancy: Multiple backup servers in each load balancer group
  • Blue-Green Deployment: Zero-downtime updates using dual systems
  • Geographic Distribution: Servers spread across multiple US locations
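The blue-green deployment mentioned above can be sketched as two identical environments behind a router, where traffic is switched to the idle environment once its new version passes checks. Environment names and versions here are illustrative assumptions:

```python
# Minimal blue-green switchover sketch: two environments, one live at a time.
environments = {"blue": "v1.4", "green": "v1.5"}  # versions are made up
live = "blue"  # currently serving traffic

def switch_over(current: str) -> str:
    """Route traffic to the idle environment after it passes health checks."""
    return "green" if current == "blue" else "blue"

live = switch_over(live)
print(environments[live])  # v1.5
```

Because the old environment stays running until the switch, the cutover itself involves no downtime, and rolling back is just switching again.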

Stability Features:

  • Servers marked "dead" after multiple health check failures
  • Real-time notifications for infrastructure issues
  • Automated recovery processes
  • Load distribution prevents single points of failure
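The health-check and failover behavior described above can be sketched as follows. The failure threshold of 3 is an assumption for illustration; the FAQ only says servers are marked "dead" after multiple failures:

```python
from dataclasses import dataclass

MAX_FAILURES = 3  # assumed threshold; the actual value isn't documented

@dataclass
class Server:
    name: str
    failures: int = 0
    alive: bool = True

def record_health_check(server: Server, healthy: bool) -> None:
    """Update a server's state after one per-minute health check."""
    if healthy:
        server.failures = 0  # any success resets the counter
        return
    server.failures += 1
    if server.failures >= MAX_FAILURES:
        server.alive = False  # marked "dead"

def active_pool(servers: list[Server]) -> list[Server]:
    """Load balancer group: only live servers receive traffic."""
    return [s for s in servers if s.alive]

node = Server("gpu-node-a")
for _ in range(3):
    record_health_check(node, healthy=False)
print(node.alive)  # False: removed from the load balancer group
```

With multiple backup servers in each group, `active_pool` always has members to route to, which is what prevents a single failed machine from causing an outage.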

🤖 LLM Updates & Model Management

Do you have a routine for updating LLMs when new open-source models are released?

Yes, we have a systematic model update process:

Discovery & Integration:

  • Automated system imports new LLMs from various sources
  • Models scored and ranked by performance metrics
  • Regular scanning for new open-source releases

Update Process:

  1. Testing: Models go through an approval pipeline with performance benchmarks
  2. Sandboxing: Approved models are added to sandboxed systems for rigorous testing
  3. Rollout: Models are deployed to the appropriate GPU servers
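A benchmark-gated approval step like the one above might look like the sketch below. The benchmark names match those listed elsewhere in this FAQ (ARC, HellaSwag, MMLU), but the 0.60 approval threshold and scoring rule are assumptions for illustration:

```python
# Hypothetical benchmark gate for the model approval pipeline.
APPROVAL_THRESHOLD = 0.60  # assumed cutoff, not FluxAI's documented value

def average_score(benchmarks: dict[str, float]) -> float:
    """Naive aggregate: mean of all benchmark scores."""
    return sum(benchmarks.values()) / len(benchmarks)

def review_model(name: str, benchmarks: dict[str, float]) -> str:
    """Gate a candidate model on benchmark performance before sandboxing."""
    if average_score(benchmarks) < APPROVAL_THRESHOLD:
        return f"{name}: rejected"
    # Approved models move on to sandboxed systems before production rollout.
    return f"{name}: approved for sandbox testing"

print(review_model("example-llm", {"ARC": 0.71, "HellaSwag": 0.82, "MMLU": 0.65}))
```

A production pipeline would likely weight benchmarks differently per use case; the point is only that approval is score-driven rather than manual.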

Will my preferred models be removed from the platform?

Customer Assurances:

  • Model availability: We've typically expanded choice over time rather than removing options
  • Model testing: In rare circumstances, after trialing a model, if output remains poor, it will be discontinued
  • Popular models remain available: Models like Llama 3, added in July 2024 (over a year ago), continue to be supported
  • Transparent tracking: All models include performance metadata and benchmarks

Model Management Features:

  • Shortlist: Curated list of approved models for production use
  • Benchmarking: Models tracked with performance metrics (ARC, HellaSwag, MMLU, etc.)
  • Systematic Deployment: Stable release system ensures consistency across environments
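To make the shortlist idea concrete, an entry carrying the performance metadata described above might be shaped like this. Field names, the model identifier, and scores are illustrative assumptions:

```python
# Hypothetical shape of a shortlist entry with performance metadata.
shortlist = [
    {
        "model": "llama-3-8b-instruct",  # illustrative identifier
        "approved": True,
        "benchmarks": {"ARC": 0.61, "HellaSwag": 0.79, "MMLU": 0.66},
        "release_channel": "stable",
    },
]

# Only approved models on the stable channel reach production environments.
production_models = [
    entry["model"]
    for entry in shortlist
    if entry["approved"] and entry["release_channel"] == "stable"
]
print(production_models)  # ['llama-3-8b-instruct']
```

Filtering on the `stable` channel is what keeps deployments consistent across environments: every server derives its model list from the same curated data.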

Still have questions?

If you need additional information or have specific questions about FluxAI's services, please contact our support team at [email protected].