
LLM Inference Frameworks and Optimization Engineer

San Francisco, Singapore, Amsterdam

About Us

At Together.ai, we are building state-of-the-art infrastructure to enable efficient and scalable inference for large language models (LLMs). Our mission is to optimize inference frameworks, algorithms, and infrastructure, pushing the boundaries of performance, scalability, and cost-efficiency.

We are seeking an Inference Frameworks and Optimization Engineer to design, develop, and optimize distributed inference engines that support multimodal and language models at scale. This role will focus on low-latency, high-throughput inference, GPU/accelerator optimizations, and software-hardware co-design, ensuring efficient large-scale deployment of LLMs and vision models.

 

Responsibilities

Inference Framework Development and Optimization

  • Design and develop a fault-tolerant, high-concurrency distributed inference engine for text, image, and multimodal generation models.
  • Implement and optimize distributed inference strategies, including Mixture of Experts (MoE) parallelism, tensor parallelism, and pipeline parallelism, for high-performance serving.
  • Apply CUDA graph optimizations, TensorRT/TRT-LLM graph optimizations, PyTorch compilation (torch.compile), and speculative decoding to enhance efficiency and scalability (a minimal sketch follows this list).
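To give a concrete flavor of the compilation work described above, here is a minimal, illustrative sketch (not Together's codebase; the model and all names are hypothetical) of PyTorch compilation with torch.compile's "reduce-overhead" mode, which captures CUDA graphs to amortize kernel-launch overhead:

```python
# Illustrative sketch only: torch.compile with CUDA graph capture.
# TinyMLP is a hypothetical stand-in for a real decoder layer.
import torch

class TinyMLP(torch.nn.Module):
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.fc1 = torch.nn.Linear(dim, 4 * dim)
        self.fc2 = torch.nn.Linear(4 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(torch.nn.functional.gelu(self.fc1(x)))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyMLP().to(device).eval()

# "reduce-overhead" enables CUDA graph capture, amortizing the
# per-step kernel-launch cost that dominates at small decode batches.
compiled = torch.compile(model, mode="reduce-overhead")

with torch.inference_mode():
    x = torch.randn(8, 1024, device=device)
    y = compiled(x)  # first call compiles; later calls replay the graph
print(y.shape)  # torch.Size([8, 1024])
```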

Software-Hardware Co-Design and AI Infrastructure

  • Collaborate with hardware teams on performance bottleneck analysis and co-optimize inference performance for GPUs, TPUs, or custom accelerators (a profiling sketch follows this list).
  • Work closely with AI researchers and infrastructure engineers to develop efficient model execution plans and optimize end-to-end (E2E) model serving pipelines.
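As a sketch of what bottleneck analysis can look like in practice (illustrative only; this is standard torch.profiler usage, not a Together-specific tool):

```python
# Illustrative sketch: using torch.profiler to surface hot ops/kernels
# in a forward pass. The linear layer stands in for a real model.
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(4096, 4096)
x = torch.randn(32, 4096)
if torch.cuda.is_available():
    model, x = model.cuda(), x.cuda()

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities) as prof:
    with torch.no_grad():
        for _ in range(10):
            model(x)

# Sort by total time to find the most expensive operations first.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```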

 

Qualifications

Must-Have:

  • Experience:
    • 3+ years of experience in deep learning inference frameworks, distributed systems, or high-performance computing.
  • Technical Skills:
    • Familiarity with at least one LLM inference framework (e.g., TensorRT-LLM, vLLM, SGLang, TGI (Text Generation Inference)).
    • Background knowledge and experience in at least one of the following: GPU programming (CUDA/Triton/TensorRT), compilers, model quantization, or GPU cluster scheduling.
    • Deep understanding of KV cache systems such as Mooncake, PagedAttention, or custom in-house variants (see the sketch after this list).
  • Programming:
    • Proficiency in Python and C++/CUDA for high-performance deep learning inference.
  • Optimization Techniques:
    • Deep understanding of Transformer architectures and LLM/VLM/Diffusion model optimization.
    • Knowledge of inference optimization techniques such as workload scheduling, CUDA graphs, compilation, and efficient kernels.
  • Soft Skills:
    • Strong analytical problem-solving skills with a performance-driven mindset.
    • Excellent collaboration and communication skills across teams.
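For context on the paged KV cache systems named in the must-have list, the toy sketch below shows the core idea behind PagedAttention-style block tables: logical token positions map to fixed-size physical cache blocks allocated on demand. All names here (BlockTable, BLOCK_SIZE) are hypothetical, and production systems such as vLLM additionally handle prefix sharing, eviction, and actual GPU memory, which this sketch omits:

```python
# Toy sketch of a paged KV cache block table (PagedAttention-style).
# Illustrative only; all names are hypothetical.
BLOCK_SIZE = 16  # tokens per physical cache block

class BlockTable:
    """Maps each sequence's logical token positions to physical blocks."""

    def __init__(self, num_blocks: int = 1024):
        self.free_blocks = list(range(num_blocks))   # physical block ids
        self.tables: dict[int, list[int]] = {}       # seq_id -> block ids

    def append_token(self, seq_id: int, position: int) -> tuple[int, int]:
        table = self.tables.setdefault(seq_id, [])
        if position // BLOCK_SIZE >= len(table):
            table.append(self.free_blocks.pop())     # allocate on demand
        block = table[position // BLOCK_SIZE]
        return block, position % BLOCK_SIZE          # (block id, offset)

    def free(self, seq_id: int) -> None:
        # Return a finished sequence's blocks to the free pool.
        self.free_blocks.extend(self.tables.pop(seq_id, []))

bt = BlockTable()
print(bt.append_token(seq_id=0, position=0))   # allocates the first block
print(bt.append_token(seq_id=0, position=16))  # allocates a second block
bt.free(seq_id=0)
```

Allocating blocks only as tokens arrive is what lets these systems avoid reserving worst-case contiguous memory per sequence, which is the main source of KV cache fragmentation.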

Nice-to-Have:

  • Experience developing software systems for large-scale data center networks with RDMA/RoCE.
  • Familiarity with distributed filesystems (e.g., 3FS, HDFS, Ceph).
  • Familiarity with open-source distributed scheduling/orchestration frameworks such as Kubernetes (K8s).
  • Contributions to open-source deep learning inference projects.

 

Why Join Us?

This role offers a unique opportunity to shape the future of LLM inference infrastructure, ensuring scalable, high-performance AI deployment across a diverse range of applications. If you're passionate about pushing the boundaries of AI inference, we’d love to hear from you!

 

 

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $160,000 - $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation is determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

 

Please see our privacy policy at https://www.together.ai/privacy  

 
