TEXXR

Chronicles

The story behind the story


Mira Murati's Thinking Machines Lab launches its first product, Tinker, an API for fine-tuning language models, in private beta, with support for Qwen and Llama

Today, we are launching Tinker, a flexible API for fine-tuning language models.

Coverage:
  • Moneycontrol: Ex-OpenAI CEO Mira Murati's stealth AI lab launches its first ever product
  • Matthias Bastian / The Decoder: Ex-OpenAI CTO Mira Murati introduces Tinker, an API for fine-tuning of open-weight LLMs
  • Markus Kasanmascheff / WinBuzzer: Mira Murati's Thinking Machines Lab Launches 'Tinker' API to Challenge OpenAI in Model Customization
  • Ann O'Dea / Silicon Republic: Mira Murati's Thinking Machines launches first product, Tinker
  • Anyscale: Fine-tuning a Text-to-SQL Model with Tinker and Ray
  • Robert Brown / Implicator.ai: Mira Murati's Thinking Machines Lab: six OpenAI veterans launch Tinker with $2B seed, $12B valuation
  • Stephanie Palazzolo / The Information: Mira Murati's Thinking Machines Lab Launches First Finetuning Product
  • Maria Deutscher / SiliconANGLE: Thinking Machines launches Tinker language model fine-tuning service

LinkedIn:
  • Dan Checketts: I've been on many calls with businesses pleading with me to fix their fine-tuned model. "It's 95% of the way there" — Truth is, LLMs live in an unstable equilibrium. …

Threads:
  • Venkatesh Thallam / @vthallam: Thinking Machines launched their first product, Tinker, an API to fine-tune models for developers. Very interesting launch that, if it works, could help create a lot of vertical agents that are slightly better at some use cases than foundational models.

Forums:
  • Hacker News: Announcing Tinker
  • r/artificial: Exclusive: Mira Murati's Stealth AI Lab Launches Its First Product
  • r/technews: Exclusive: Mira Murati's Stealth AI Lab Launches Its First Product
  • BeauHD / Slashdot: Mira Murati's Stealth AI Lab Launches Its First Product
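The announcement's one-line pitch — "Write training loops in Python on your laptop; we'll run them on distributed GPUs" — describes an API sliced at the level of training primitives: the user owns the loop, the data, and the objective, while the service owns GPU placement and distribution. The sketch below illustrates that general pattern only; every name in it is hypothetical, not Tinker's actual interface, and the "service" is simulated locally with a toy one-parameter model.

```python
# Hypothetical sketch of a "training primitives" API in the style the
# announcement describes. None of these names are Tinker's real API;
# the remote service is stood in for by local arithmetic on a toy
# linear model y = w * x, trained with mean-squared error.

class ToyTrainingClient:
    """Stands in for a remote training client: the caller writes the
    loop, the client hides where gradients are actually computed."""

    def __init__(self, lr=0.1):
        self.w = 0.0        # single trainable weight
        self.lr = lr
        self._grad = 0.0

    def forward_backward(self, batch):
        """Compute MSE loss on the batch and store its gradient w.r.t. w."""
        xs, ys = batch
        n = len(xs)
        preds = [self.w * x for x in xs]
        loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / n
        # d(loss)/dw = (2/n) * sum((w*x - y) * x)
        self._grad = 2.0 / n * sum((p - y) * x for p, x, y in zip(preds, xs, ys))
        return loss

    def optim_step(self):
        """One SGD update using the stored gradient."""
        self.w -= self.lr * self._grad


# User-side training loop: plain Python, no GPU or cluster code.
client = ToyTrainingClient(lr=0.1)
data = ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # target relationship: w = 2
for step in range(100):
    loss = client.forward_backward(data)
    client.optim_step()

print(round(client.w, 3))  # prints 2.0
```

Slicing the interface at loss-and-gradient granularity, rather than "upload your data and we post-train for you," is what the quoted comments about retaining algorithmic control refer to: the loop, the objective, and any sampling or environment logic stay in user code.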

Will Knight / Wired

Discussion

  • @miramurati Mira Murati on x
    Today we launched Tinker. Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines. Excited to see what …
  • @karpathy Andrej Karpathy on x
    Tinker is cool.  If you're a researcher/developer, tinker dramatically simplifies LLM post-training.  You retain 90% of algorithmic creative control ... while tinker handles the hard parts that you usually want to touch much less often ... . Compared to the more common and existi…
  • @saranormous Sarah Guo on x
    custom thinking for everyone!
  • @rm_rafailov Rafael Rafailov on x
    The biggest advantage of Tinker is it allows you to run your own environments or interaction loops and will hugely accelerate training custom agents!
  • @sschoenholz Sam Schoenholz on x
    Tinker brings tools similar to the ones we use internally to the community. It provides a clean, transparent abstraction that lets researchers write expressive experiments and training pipelines, while we manage the complexities of distributed training and sampling. We hope …
  • @_kevinlu Kevin Lu on x
    anyone who's tried running RL on top of language models knows how painful it is — building on top of new research, tinker makes finetuning frontier LLMs easy and performant! it's the latest in a long-standing dream to use finetuning to democratize training and personalization.
  • @dchaplot Devendra Chaplot on x
    Announcing our first product: Tinker! Tinker is a training API for everyone! It lets you focus on what matters in LLM training - your data and algorithms - while we handle the heavy lifting of distributed training. You can train your own models using Tinker even if you have no …
  • @lateinteraction Omar Khattab on x
    One of OpenAI's biggest understated contributions was the reliable API-ification of LLMs. Maybe this will begin that trend for finetuning. Exciting.
  • @johnschulman2 John Schulman on x
    Tinker provides an abstraction layer that is the right one for post-training R&D — it's the infrastructure I've always wanted. I'm excited to see what people build with it. “Civilization advances by extending the number of important operations which we can perform without …
  • @zhongruiqi Ruiqi Zhong on x
    Very excited about this release!! As a former grad student I struggled to finetune llms. Even when the gpus are enough, it was painful to set up the infra correctly. Tinker allows more researchers to understand and language models, beyond a few well-funded labs.
  • @dorialexander Alexander Doria on x
    Apparently @thinkymachines moving in the direction of training environment as a service (similar to prime rl with more focus on sft?). Not yet sure what to think: for SFT biggest hurdle has always been the data.
  • @barret_zoph Barret Zoph on x
    Excited to release Tinker and see what the community uses it for.
  • @myleott Myle Ott on x
    So excited about this! Tinker provides a simple+powerful interface for postraining/RL research. It also manages all the infrastructure so that users can focus on data and environments. Hidden behind that simple interface is a ton of interesting and complex ML systems challenges!
  • @ejcgan Eric Gan on x
    I've been using Tinker at Redwood Research to RL-train long-context models like Qwen3-32B on difficult AI control tasks - specifically teaching models to write unsuspicious backdoors in code similar to the AI control paper. Early stages but seeing some interesting backdoors 👀
  • @robertnishihara Robert Nishihara on x
    Very excited to see the Tinker release! @pcmoritz and I had a chance to experiment with the API. It does a nice job of providing flexibility while abstracting away GPU handling. Here's a simple example showing how to generate synthetic data and fine tune a text to SQL model.
  • @pcmoritz Philipp Moritz on x
    Very excited to see the Tinker release by @thinkymachines! @robertnishihara and I had a chance to experiment with the API, see https://www.anyscale.com/.... It does a nice job of providing flexibility while abstracting away GPU handling. This will be 🔥 when combined with [image]
  • @s_ibarraran Sebastian Ibarraran on x
    While reinforcement learning has been demonstrated to improve LLM performance on mathematical reasoning tasks, currently, there is far less evidence of performant scientific reasoning models. Using Tinker by @thinkymachines, we were able to rapidly train a variety of models on - …
  • @chijinml Chi Jin on x
    🚀With early access to Tinker, we matched full-parameter SFT performance as in Goedel-Prover V2 (32B) (on the same 20% data) using LoRA + 20% of the data. 📊MiniF2F Pass@32 ≈ 81 (20% SFT). Next: full-scale training + RL. This is something that previously took a lot more effort
  • @tyler_griggs_ Tyler Griggs on x
    I had the chance to try @thinkymachines' Tinker API for the past couple weeks. Some early impressions: Very hackable & lifts a lot of the LLM training burden, a great fit for researchers who want to focus on algs + data, not infra. My research is in RL, and many RL fine-tuning …
  • @testingcatalog @testingcatalog on x
    Thinking Machines announced Tinker, a fine tuning API for LLMs, in private beta. “Write training loops in Python on your laptop; we'll run them on distributed GPUs.” Tinker Machines 🤖 [image]
  • @rm_rafailov Rafael Rafailov on x
    Very excited to share what I have been working on with a great team of people at @thinkymachines. Tinker is a whole new way to train and customize models all the way up to frontier scale. Most importantly, it allows everyone to use their own code, data, tools and environments, …
  • @__charlie_g Charlie George on x
    1/ How do you verify complex AI outputs at scale without expert-labelled data? Working with @thinkymachines' new RL API Tinker, I've been expanding on some previous work I shared around using unstructured internet data to train models to grade IMO / USAMO solutions. [image]
  • @thinkymachines @thinkymachines on x
    Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models! [image]
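Chi Jin's note above (matching full-parameter SFT with LoRA on a 32B prover) leans on LoRA's parameter economics: instead of updating every entry of a weight matrix W, LoRA freezes W and trains a low-rank correction B @ A. A back-of-envelope count makes the gap concrete; the layer size and rank below are arbitrary illustrative choices, not figures from any model mentioned here.

```python
# Back-of-envelope trainable-parameter count: LoRA vs. full fine-tuning
# of a single weight matrix. Layer size and rank are hypothetical.

def full_params(d_out, d_in):
    """Full fine-tuning updates every entry of W (d_out x d_in)."""
    return d_out * d_in

def lora_params(d_out, d_in, rank):
    """LoRA freezes W and trains B (d_out x rank) and A (rank x d_in)."""
    return d_out * rank + rank * d_in

d_out = d_in = 8192   # hypothetical hidden size
rank = 32             # typical low LoRA rank

full = full_params(d_out, d_in)        # 67,108,864 parameters
lora = lora_params(d_out, d_in, rank)  #    524,288 parameters
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

For a square layer the reduction factor is d / (2 * rank) — 128x at these sizes — which is why LoRA-style runs fit in budgets (and on services) where full-parameter fine-tuning does not.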