About the job
FriendliAI is seeking a Forward Deployed Engineer (FDE) to help enterprises deploy, scale, and operate generative and agentic AI workloads on FriendliAI infrastructure. You will work directly with customers to design and implement production-grade applications using our products: Serverless Endpoints, Dedicated Endpoints, and Container.
Friendli Container is our service that lets customers download our inference engine as Docker images and deploy it in their chosen environment, such as private clouds or on-premises. It can also be deployed directly to AWS EKS clusters using our EKS add-on product.
You will work directly on our customers’ projects, collaborating with their engineering teams to solve AI inference challenges such as scaling, orchestration, and monitoring. This is a hands-on, customer-embedded role. If you have worked in DevOps, platform engineering, or SRE for AI applications, this role is an ideal fit.
Key Responsibilities
- Design and implement large-scale deployment architectures for LLM and multimodal inference
- Deploy and manage containerized workloads across Kubernetes clusters
- Diagnose production issues, such as performance bottlenecks, and implement temporary fixes as needed
- Collaborate with customers’ DevOps teams to integrate FriendliAI’s infrastructure into their CI/CD workflows
- Develop scripts, Helm charts, and Terraform modules that streamline repeated deployments
- Contribute field insights to shape our platform reliability, observability, and scaling strategies
- Lead workshops, technical sessions, or webinars to help customers master infrastructure best practices