Together AI

LLM Training Dataset and Checkpoint Optimization Engineer

San Francisco

About Us

Together AI is a leader in developing AI infrastructure that powers the training of state-of-the-art models. We focus on creating scalable, efficient systems for handling massive datasets and managing large-scale distributed checkpoints, ensuring seamless workflows for training and fine-tuning AI models.

We are seeking an LLM Training Dataset and Checkpoint Optimization Engineer to optimize data pipelines and checkpoint mechanisms for large-scale machine learning workloads. In this role, you will work at the intersection of data engineering and distributed systems, ensuring that training workflows are highly performant, reliable, and cost-efficient.

 

Responsibilities

  • Dataset Acceleration:
    • Design and optimize high-throughput data pipelines for streaming and processing massive training datasets.
    • Implement caching, sharding, and prefetching techniques to maximize data-loading efficiency (see the data-loading sketch after this list).
    • Ensure efficient integration with distributed storage systems (e.g., S3, GCS, Lustre, Ceph).
  • Checkpointing Systems:
    • Build and optimize distributed checkpoint mechanisms for large-scale training workflows.
    • Implement techniques to minimize checkpoint I/O overhead and ensure fault tolerance (see the checkpointing sketch after this list).
    • Develop incremental and differential checkpointing solutions to reduce storage costs.
  • Performance Optimization:
    • Profile and debug bottlenecks in data pipelines and checkpoint systems.
    • Maximize GPU/TPU utilization through efficient data feeding and fast checkpoint recovery.
  • Scalability and Reliability:
    • Develop systems that scale efficiently across thousands of nodes and petabyte-scale datasets.
    • Ensure fault-tolerant recovery and resume mechanisms for long-running training jobs.
  • Collaboration and Support:
    • Work closely with ML researchers, data engineers, and infrastructure teams to understand workload requirements.
    • Build tools and frameworks to enable seamless integration of dataset and checkpointing systems with existing ML workflows.
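
For illustration, here is a minimal data-loading sketch of the kind the Dataset Acceleration bullets describe: sharding across data-parallel ranks, parallel asynchronous workers, and prefetching, built on the PyTorch DataLoader. This is an assumption-laden sketch, not Together AI's pipeline; the dataset, shapes, and parameter values are placeholders.

```python
# Minimal sketch (not Together AI's implementation): a sharded, prefetching
# data-loading setup. Dataset contents, sequence length, and worker counts
# are illustrative placeholders.
import os
import torch
from torch.utils.data import DataLoader, Dataset, DistributedSampler

class TokenShardDataset(Dataset):
    """Toy dataset standing in for pre-tokenized training shards."""
    def __init__(self, num_samples: int = 10_000, seq_len: int = 2048):
        self.num_samples = num_samples
        self.seq_len = seq_len

    def __len__(self) -> int:
        return self.num_samples

    def __getitem__(self, idx: int) -> torch.Tensor:
        # In a real pipeline this would read (and cache) a slice of a shard
        # from object storage; here we synthesize tokens for illustration.
        g = torch.Generator().manual_seed(idx)
        return torch.randint(0, 50_000, (self.seq_len,), generator=g)

def build_loader(batch_size: int = 8) -> DataLoader:
    dataset = TokenShardDataset()
    # Shard across data-parallel ranks when torch.distributed is initialized;
    # fall back to single-process loading otherwise.
    sampler = (
        DistributedSampler(dataset)
        if torch.distributed.is_available() and torch.distributed.is_initialized()
        else None
    )
    return DataLoader(
        dataset,
        batch_size=batch_size,
        sampler=sampler,
        shuffle=sampler is None,
        num_workers=min(8, os.cpu_count() or 4),  # parallel, asynchronous loading
        prefetch_factor=4,                        # batches prefetched per worker
        pin_memory=True,                          # faster host-to-GPU copies
        persistent_workers=True,                  # avoid re-spawning workers each epoch
        drop_last=True,
    )

if __name__ == "__main__":
    loader = build_loader()
    batch = next(iter(loader))
    print(batch.shape)  # torch.Size([8, 2048])
```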
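
Similarly, a hedged sketch of one checkpointing concern named above: reducing the stall a checkpoint imposes on training while keeping writes fault-tolerant. It snapshots state to CPU, writes in a background thread, and renames atomically; incremental and differential strategies would layer on top of this. The model, paths, and helper names are illustrative only.

```python
# Minimal sketch (illustrative only, not Together AI's checkpointing system):
# snapshot state to CPU, write it off the training thread, and rename
# atomically so a crash never leaves a half-written checkpoint.
import os
import threading
import torch
import torch.nn as nn

def _cpu_snapshot(module: nn.Module) -> dict:
    # Copy parameters to CPU so training can continue while the write happens.
    return {k: v.detach().to("cpu", copy=True) for k, v in module.state_dict().items()}

def _write_atomic(state: dict, path: str) -> None:
    tmp_path = path + ".tmp"
    torch.save(state, tmp_path)
    os.replace(tmp_path, path)  # atomic rename on POSIX filesystems

def save_checkpoint_async(module: nn.Module, step: int, ckpt_dir: str = "checkpoints") -> threading.Thread:
    os.makedirs(ckpt_dir, exist_ok=True)
    state = {"step": step, "model": _cpu_snapshot(module)}
    path = os.path.join(ckpt_dir, f"step_{step:08d}.pt")
    # The blocking I/O happens in a background thread, off the training loop.
    writer = threading.Thread(target=_write_atomic, args=(state, path), daemon=True)
    writer.start()
    return writer

if __name__ == "__main__":
    model = nn.Linear(1024, 1024)  # stand-in for a real training model
    thread = save_checkpoint_async(model, step=1000)
    thread.join()  # in practice, joined before the next checkpoint or at exit
    print(sorted(os.listdir("checkpoints")))
```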

 

Qualifications

Must-Have:

  • Experience:
    • 5+ years of experience in data engineering, distributed systems, or ML infrastructure.
  • Technical Skills:
    • Expertise in high-performance data-processing libraries (e.g., PyTorch DataLoader, tf.data, NVIDIA DALI).
    • Proficiency in distributed storage systems and data formats (e.g., Parquet, HDF5).
    • Strong understanding of checkpointing frameworks and file systems (e.g., POSIX, Lustre, GPFS).
  • Programming:
    • Proficient in Python, C++, or Go for performance-critical systems.
  • Optimization Techniques:
    • Experience with I/O optimization techniques (e.g., asynchronous data loading, prefetching).
    • Familiarity with compression and serialization for large datasets and checkpoints.
  • Soft Skills:
    • Analytical and problem-solving mindset.
    • Strong communication and collaboration skills across teams.

Nice-to-Have:

  • Experience with ML frameworks (e.g., PyTorch, TensorFlow, JAX) and distributed training.
  • Familiarity with hardware accelerators (e.g., GPUs, TPUs) and storage optimizations.
  • Contributions to open-source projects related to data pipelines or checkpointing.
  • Experience with incremental and real-time checkpointing solutions.

 

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $160,000 - $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

 

Please see our privacy policy at https://www.together.ai/privacy  

 
