
This article on AI in F1 was shared around at work.

My first reaction was to roll my eyes and delete the email.
A few days later, sick of it living rent-free in my head, I decided to deal with it.
This is the article and my reaction.

I strongly believe in crediting people, so here's a link to the original article: F1.

The original article is in "quotes".

"Formula 1 & AWS invited to me the Imola Grand Prix in Italy.

I had two goals there:

  1. How formula 1 is using AI?
  2. Does it really matter to us?

I couldn't believe both answers.

So I made a blog to process it.

How Formula 1 does AI?"

We're off to a strong start here. Spoiler alert: this guy is trying to sell you a generative AI package to hugely increase your LinkedIn presence. He is, I believe, Israeli, and therefore imperfect English should be forgiven because it's not his first language. However, I can guarantee he's used AI to help him write this blog, which you would hope would fix the errors. Like things that aren't questions being given question marks? Or listing goals that are not at all written as goals.

Also, I think the heading 'How Formula 1 does AI?' is supposed to be answering(?) the first of his 'goals'. They're only four lines apart and he couldn't get them the same.

"AI makes split-second decisions that determine F1 race winners.

F1 teams run 8 billion scenarios.

Per weekend.

That's more computational power than NASA's entire Apollo program to go to the moon in 1969."

This is unit scrambling on a par with Han Solo completing the Kessel Run in less than 12 parsecs. Are you talking about computational power or the number of instructions run?

A random smartphone processor for which I can find the data can do 3.39 MIPS/MHz/core across four cores running at 2.26 GHz – that's roughly 30 billion instructions per second. Their scenarios will use a lot of instructions for each scenario, but once you dig into the maths the big scary BILLION sounds a bit less impressive.
Oh, and the processor I chose was the one in my Nexus 5, which was released 12 years ago.
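
If you want to check my working, the sum is short enough to type out (the per-core figure comes from the Krait 400's spec sheet; the quad-core multiplier is my assumption about how you get to 30 billion):

```python
# Rough check of the smartphone instructions-per-second claim.
dmips_per_mhz_per_core = 3.39   # Krait 400 (Snapdragon 800) spec figure
clock_mhz = 2260                # 2.26 GHz
cores = 4                       # the Snapdragon 800 is quad-core

total_mips = dmips_per_mhz_per_core * clock_mhz * cores
print(f"{total_mips / 1e3:.1f} billion instructions per second")  # ~30.6
```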

Other things that have more computational power than the Apollo 11 Guidance Computer include my 6-year-old smartwatch, a TI graphing calculator and my vintage Game Boy Pocket from 1996. Comparisons with the AGC are purely there to make people gasp.

"It's called 'digital twins'."

Ooo, now we can talk about 1960s NASA! Digital twins were first used by NASA in the 60s. This is not a new idea, although it wasn't called a digital twin until 1997.

Interestingly, the syllabus of the computational intelligence course at the University of Bath had students designing virtual cars, driving around virtual tracks, with neural networks being trained on hundreds of races, in 2011 and earlier.

"These are complete virtual clones of each car running billions of simulated laps. Teams know how parts will perform months before manufacturing."

As they should! My coworkers do exactly the same when designing our electronics and have been doing so for decades.

"This is before the track. But it also happens during the track."

More imperfect English that a human editor would have picked up on straight away. Before the track makes sense; during the track does not.

"Teams have an AI Strategic Agent to take decisions."

This is just stated and not expanded upon at all. What form does this agent take? What decisions does it make? A famous quote from IBM says that 'a computer can never be held accountable' – so who is?

"AI agents race each other millions of times, learning through trial and error. They anticipate competitor moves like chess computers calculating 20 moves ahead."

OK, now you're back talking about before the race, so does that mean the AI Strategic Agent makes the decisions before the race, or during?

Also, this all sounds a lot like machine learning, a form of computational intelligence that has existed for decades.

The tactic of planning your race or match around your competitors is also not new or exciting – part of a top tennis player's team is an analyst who studies the play styles, tactics and strengths of their opponents. The best tennis players will change their game every match to better combat their opponent. This is true of basically any other sport too. This information processing will have been computerised for years already.

"It behaves like an overpowered team of strategists having access to every data."

I'll stop dunking on his English now. He probably shouldn't have listened to Grammarly.

"But this AI only gets 'fed' 6% of the total data they collect...

F1 teams load cars with 600 sensors during practice to perfect performance. They track engine heat, tire wear, airflow, and driver inputs - 100 GB per lap.

These heavy sensors get removed for races to make cars faster. But the crucial data hits a bottleneck. Here's the problem:

F1 rules limit teams to sending only 60 MB of data per second from car to pit.

That's just 6% of what they collect. AI decides which information is most important to transmit.

This data battle determines race winners."

This section implies that the teams remove the sensors from the cars during the races. How, then, can the 'AI' make decisions during the race if it does not have data?

I imagine, then, that some sensors remain. Let's examine the scary 6% statistic a bit more closely. 60 MB per second is supposedly 6% of the data, meaning the total amount of data is 1 GB/s. The record lap time for Silverstone is 1:27.097, so I am happy to assume that the 100 GB/lap number quoted above is the amount of data recorded during testing (1 GB per second over a 1:40 lap). If most sensors are removed during the race, this number is already significantly reduced.
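
The arithmetic, using only the article's own numbers:

```python
# Working backwards from the quoted figures.
race_limit_mb_per_s = 60     # "only 60 MB of data per second"
claimed_fraction = 0.06      # "just 6% of what they collect"
lap_seconds = 100            # roughly a 1:40 lap

total_mb_per_s = race_limit_mb_per_s / claimed_fraction
print(total_mb_per_s)                       # 1000 MB/s, i.e. 1 GB/s
print(total_mb_per_s * lap_seconds / 1000)  # 100 GB per lap - matches the quote
```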

Next, during testing, sample rates will be much higher so that performance can be quantified. During a race, sample rates can be significantly reduced to minimise data rates. High-resolution data sets from practice and testing can be used as look-up tables, with lower sample rate sets during the race being used simply to determine which exact scenario is the best fit for the current race.
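
A toy version of that look-up idea, with numbers I've invented because Ferrari won't give me their telemetry:

```python
# Match sparse race telemetry against dense practice data to find the
# closest known scenario. All data points are made up.
practice_scenarios = {
    "soft-hot":   (98.0, 41.2),  # (tyre temp degC, track temp degC)
    "soft-cool":  (89.5, 33.0),
    "medium-hot": (95.0, 40.8),
}

def closest_scenario(tyre_temp, track_temp):
    """Return the practice scenario nearest to the current readings."""
    return min(
        practice_scenarios,
        key=lambda name: (practice_scenarios[name][0] - tyre_temp) ** 2
                       + (practice_scenarios[name][1] - track_temp) ** 2,
    )

print(closest_scenario(96.1, 40.5))  # -> "medium-hot"
```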

While 100 GB of data may be recorded per lap during testing, then, the results can be processed after the fact and the 60 MB/s limit has much less of an impact than the quoted numbers might suggest.

"Takeaway: Be selective. Feed your AI only with the needed data.

These sensors are not just limited to the car. I thought all of these sensors were limited to the car... and I was wrong."

I don't even follow F1 and I didn't think the sensors would be limited to the car? The author doesn't seem to put much thought into things.

"While at the 2025 Formula 1 AWS Emilia-Romagna Grand Prix, I talked to Antonio Giovinazzi, Scuderia Ferrari HP Reserve Driver. He explained that drivers now wear sensors tracking heart rate and body temperature."

So does pretty much anyone who wears a smartwatch.

"When signs of fatigue appear, AI recommends cooling or engine adjustments. The car adapts to the driver's physical condition."

'Signs of fatigue' is an easy metric to quantify. The car's adaptations wouldn't even need a fuzzy logic controller, and a simple PID controller would be enough to achieve this (or a proportional controller. Hell, even a bang-bang controller would probably do the job.). My car, a 2012 Focus, has climate control – if, instead of the temperature sensor being in the cabin, it was attached to my skin, you could say my car was adapting to the driver's physical condition too. This is all written as breathless excitement at incredible technology when in fact nothing so far is new.

Oops, veered into technical terms here. Have an explanation of PID, proportional and bang-bang controllers: a bang-bang controller is a thermostat – the output is either fully on or fully off, depending on whether the measurement is above or below the setpoint. A proportional controller scales its output with the size of the error, so it cools harder the hotter you get. A PID controller adds an integral term (to remove the steady-state offset a proportional controller leaves behind) and a derivative term (to damp overshoot).
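
In code, the three look something like this (a minimal sketch with made-up gains and thresholds, not anything a real team runs):

```python
# Three ways to keep a driver cool, in increasing order of sophistication.
# All gains, setpoints and thresholds are invented for illustration.

def bang_bang(temp, setpoint, cooling_on):
    """Thermostat logic: cooling is either fully on or fully off.
    The 0.5 degC hysteresis band stops it chattering at the setpoint."""
    if temp > setpoint + 0.5:
        return True
    if temp < setpoint - 0.5:
        return False
    return cooling_on  # inside the band: keep doing whatever we were doing

def proportional(temp, setpoint, kp=2.0):
    """Cooling effort scales with the error: the hotter, the harder."""
    return max(0.0, kp * (temp - setpoint))

class PID:
    """Adds an integral term (removes the steady-state offset a proportional
    controller leaves) and a derivative term (damps overshoot)."""
    def __init__(self, kp=2.0, ki=0.1, kd=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, temp, setpoint, dt=1.0):
        error = temp - setpoint
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Note that all three are deterministic: feed any of them the same history of inputs and you get the same output every time. Hold that thought.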

In fact, any sort of neural network would be a poor choice for this particular scenario. A control system that takes inputs (temperature, fatigue level) and modifies the conditions to suit (suit temperature, engine characteristics) should be deterministic. For the same inputs, the outputs should always be the same, or you risk the driver being surprised by a sudden change in braking characteristics that has never happened before. What if an LLM tasked with maintaining skin temperature heard about paradoxical undressing, where the late stages of hypothermia can increase skin temperature, and assumed it needed to turn the heating up?

"Kids aged 10 are already training for the future of Formula 1 (powered with AI). The sport is becoming the embodiment of human + machine (AI)."

Max Verstappen began driving at four and won his first championship at eight; ten seems late. rFactor, which claimed to be the most accurate race simulator of the time, was released in 2005. A branch of it, used by racing teams to simulate new designs, was released as rFactor Pro in 2007. Also in 2007, the makers of rFactor released an update that allowed AI drivers to 'learn' the track. Of course, back then 'AI' referred to the bots against which the player was racing. It does seem, though, that this article is getting very excited about the use of machine learning in F1 that has in fact been in place for at least two decades.

"I could go on for hours on how Formula 1 is embracing AI."

But so far you haven't actually given us any examples of where it is using anything other than bog-standard machine learning.

"But I know what you think: "Why should I care? It's just cars."

It's not. It's the future of YOUR technology."

Indeed. Simulation is a vital part of R&D, and one that has been embraced wholeheartedly for many years.

He's acting like this is the first time in this article that he's tried to make the link between watching what F1 do and applying that to your work. Unfortunately, about 8 newlines ago, there was a BIG BOLD TAKEAWAY saying to only feed your AI with the needed data. Kinda kills the impact of it a bit, and is something that would definitely be spotted by a half-decent human editor.

"Why it matters to you.

Boeing now uses F1's digital twin approach for aircraft development."

Boeing also has doors fall off its planes. The problem with Boeing is cost cutting leading to human error – so perhaps 'Feed your AI only with the needed data' needs clarification that if you want a true idea of how parts will perform, you need to include models of yield and manufacturing error rates.

"UPS improved delivery routes using race strategy algorithms."

Describe how. The travelling salesman problem, which is NP-hard and computationally difficult, was mathematically defined in the 1800s and improvements to routing algorithms have been appearing ever since.
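
For flavour, here's the decades-old nearest-neighbour heuristic for exactly this kind of routing – deterministic, greedy, and not an 'AI' in sight (the stops and coordinates are invented):

```python
# Greedy nearest-neighbour heuristic for the travelling salesman problem.
# Not optimal, but fast and simple - routing tricks long predate the AI hype.
import math

stops = {"depot": (0, 0), "a": (2, 3), "b": (5, 1), "c": (1, 6), "d": (4, 4)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour(start="depot"):
    route, remaining = [start], set(stops) - {start}
    while remaining:
        here = stops[route[-1]]
        nearest = min(remaining, key=lambda s: dist(here, stops[s]))
        route.append(nearest)
        remaining.remove(nearest)
    return route

print(nearest_neighbour())  # ['depot', 'a', 'd', 'b', 'c']
```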

"Hospitals reorganized surgery teams based on F1 pit stops."

And I applied cake-decorating knowledge to improving the application of potting compound. Cross-pollination of ideas is everywhere. The other two points at least tie in with the author's muddling of algorithms and AI, but this is literally just adopting good practice?

"F1 technology is everywhere. It just happens before."

So the point of this paragraph seems to be that "other people take F1's ideas and incorporate them, and therefore, due to [unintelligible], you should use generative AI." Military technology is also everywhere. Fun fact: The UK military strongly prefers equipment that can be powered off AA batteries, as in the field it means that soldiers don't need chargers and don't need to worry about compatibility. I don't see an argument that AA batteries need to be the only energy storage in anything, even though the military developed duct tape, microwaves and the basis of Bluetooth.

Correction, the point of these four paragraphs. I'm only 1/3 of the way through this article and I'm already infuriated at the author's overuse of the enter key. Fair enough to put really key points on a new line to give impact, but the fact it's almost every single line means that there are no longer emphasised key points; everything has the same emphasis level.

"If F1 is optimizing for data, AI agents & the right infrastructure to take blazing-fast decisions... we will too. It's just a matter of when.

I asked Charles Leclerc how important is data, really.

Funny enough, he's about my age.

His neck is stronger than my bloodline."

What a weirdly natalist comparison to choose to make.

""What part of driving is data? What part is instinct?" I asked, thinking he would answer that obviously, it's all instinct & talent.

His answer: "I don't do anything without data."

I was shocked.

I couldn't believe enough data could make him budge in the middle of steering a 375 km/h car. He said, "Well, it's a million data per second, per car.""

That makes sense to me – if I had blind spot monitors in my car, I'd trust them.

"I realized most people don't think this way.

For example, I'm constantly asked strategic questions:

  • "Ruben, what tools should I use for X?"
  • Ruben, how do you start on Linkedin?"
  • Ruben, who's the best to follow to learn X?

People crave for better decisions.

Better decisions come from data, processed for your specific case studies.

So I instantly ask:

  1. Start by formulating the problem first. What's at stake?
  2. Did you prompt ChatGPT Deep Research for it?
  3. Have you tried asking Perplexity?"

And here we have it. Up to this point in the article, we've been talking about AI in the form of machine learning and computational intelligence. All of a sudden, we've lurched to generative AI with a speed that would give even an F1 driver whiplash.

There is a massive gulf between these two types of work. In the scenarios given above, a program is given a specific task to do. For instance, a controller takes information from the driver's temperature monitors and adjusts the in-car climate. Another controller considers the driver's fatigue and adjusts the response curve of the drive-by-wire system. A third program has analysed eight billion simulations and determined the optimum velocity and position for the fourth bend of the track. Each of these systems functions deterministically - set inputs produce known outputs. That's how those 8 billion simulations can be in any way useful.

Now the author is suggesting that a generalised, non-deterministic piece of software that has hallucinations as a fundamental part of its structure can tutor someone on the latest version of the LinkedIn algorithm, find the best teacher for a particular topic (and not just the most-mentioned teacher) and present data correctly and error-free.

"But people skip data collecting (= context)."

If you ask an LLM for data, you risk hallucinations, built-in bias from the LLM's hidden prompts, accidental bias from the training data and potentially out of date information.

"And if you ask the wrong thing, you will get the wrong AI results.

AI is as good as your ask.

  • Get better at collecting the right data.
  • Get better at feeding the right data to AI."

Two bullet points from him, two points from me.

"Get better at collecting the right data". Research and data gathering is a skill. This is why we have 1. librarians and 2. PhDs.

"Get better at feeding the right data to AI". This could be rewritten as 'get better at identifying key parts of the data', an important skill in any arena.

"Even elite athletes like Leclerc never dare to take a decision "without data".

Speaking of data & simulation: I had the chance to try their race track simulation.

And you can too.

Go to https://realtimeracetrack.com/. Powered by AWS, the invisible infrastructure behind every AI moves from F1. Impressive tool, really."

I had a quick play. My first track had to be binned because the site bugged out and wouldn't let me draw a complete loop. Impressive tool indeed.

  1. "You design your own track.
  2. An AI appears & talks as your strategist.
  3. She goes through the track & how to approach it.
  4. You drive it. And I don't even have a driving license (lol).
  5. Bonus: You crash into the walls at 375 km/h. Sorry, mom."

re. Step 3. She? This harks back to when Siri first appeared on phones, raising questions of why programs designed purely to serve are overwhelmingly characterised as female.

"You get to feel the AI difference in F1.

But... why should you care about AI in F1?

Because you already live in a mini Grand Prix:

  • Your inbox ≈ the car's 600 sensors.
    Hundreds of pings, newsletters, and Slack DMs every day. Like an F1 strategist, you need to decide which 6 % of that noise is worth acting on. AI filters can be your race engineer."

My suggestions for dealing with this in a more sustainable and reliable manner: 1. look into your mail program's automatic filter functions, which perform the same job but deterministically; 2. consider unsubscribing from mailing lists that are just 'noise'.
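
For illustration, a rule-based filter is about a dozen lines (the rules and folder names are mine, invented for the example):

```python
# A deterministic, rule-based mail filter - the same job a mail client's
# built-in filters do, with no model in the loop.
RULES = [
    (lambda m: "unsubscribe" in m["body"].lower(), "newsletters"),
    (lambda m: m["from"].endswith("@chat.example.com"), "notifications"),
    (lambda m: "invoice" in m["subject"].lower(), "action-required"),
]

def route(message):
    """First matching rule wins: same message in, same folder out, every time."""
    for predicate, folder in RULES:
        if predicate(message):
            return folder
    return "inbox"

msg = {"from": "digest@chat.example.com", "subject": "Weekly digest", "body": "..."}
print(route(msg))  # -> "notifications"
```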

  • "Digital twins = "Draft mode" in ChatGPT.
    Ferrari simulations millions of laps before touching asphalt. You can simulate a marketing email, a LinkedIn post, or tomorrow's sales pitch with a prompt and hit "Regenerate! until it feels pole-position ready."

Simulations aren't really the same as drafts though, are they? A simulation takes some inputs and sees how it will perform. A draft is just a different set of inputs you could use for the simulation. Also, this article really could have done with someone hitting the "Regenerate" button a few more times. Or, instead, it could have been written by a human in the same amount of time it took to mash the regenerate button, with the benefit of the human getting better at writing as a result.

  • "Strategy agents = prompt libraries.
    Teams pit the best AIs against each other to choose tyre strategy; you can pit a "customer-persona GPT against a "copy-editor" GPT and watch them argue until your message is clearer—and faster."

What on earth is a "faster" message? A copy-editor would be all over this article.

  • "Pit stops = micro-iterations.
    Ferrari swaps four tyres in 2 seconds; you can iterate on a proposal in 2 minutes. Same philosophy: shorten the feedback loop, win more races."

This analogy only works if the goal is to create proposals as quickly as possible. Instead, I would like to focus on how Ferrari got to the point where they can swap four tyres in two seconds. They will have looked at all of the areas where time could have been saved, then iterated over changes in process to optimise it as far as possible. For tyre changes they are optimising for sheer speed. For other aspects of a race, they will be optimising a different function.

When a new tyre change routine is implemented, pit teams will practise it until it is as fast as it can be. There are two points to be made here: one, that when a change is implemented, it needs time to become adopted. At first it may appear slower and worse and it might be tempting to revert to the previous version. It needs time to be polished. And two, the simulation will never be perfect. The simulation wouldn't know that Albert trips over the air hose one tyre change in 20 because the air hose has a kink in it (unless this specific data point was known in advance and fed into the simulation). Testing in real life would identify this problem and it could be fixed. Simulation and computers are only ever part of the process.

"Think of Formula 1 as the world's fastest R-and-D lab.

What rolls out of the garage on Sunday morning rolls into your phone, your hospital, your commute and, even more quietly, your daily decision making:

  • Selective data beats big data.
    F1 cars collect 100 GB per lap, but only the smartest 6 % gets streamed to the pit wall. When you ask ChatGPT, don't dump everything you know. Summarise the essentials; give the model just what it needs to win your "race.""

(I've already covered that the statistic about only getting 6% of the data is nonsense.) Perhaps the action of summarising the data in the first place is the valuable step here? In the same way that students are encouraged to highlight key words or make flash cards, understanding the data and whittling it down is a valuable exercise in itself - the act of summarising both helps your understanding and improves your ability to compress data - and does not require generative AI.

  • "Simulation first, action second.
    Teams run billions of virtual laps before turning a wheel. Quickly iterating in the safety of a prompt costs nothing and can save you a blown engine, or a blown budget, later."

However, simulations are only as good as the data they are given. An RF simulation needs to know the dielectric of the substrate. A simulated assembly line needs to know that Greg recently had his car written off by a drunk driver and perhaps shouldn't be driving the forklift just yet.

  • "Human + machine > human OR machine.
    Charles Leclerc's instincts are razor sharp, but he still refuses to turn in without the data. Use AI as your strategist, not your replacement: let it surface patterns while you provide the context & taste it can't.
  • Today's tech trickles down fast.
    Digital twin maths that shaved 0.1 seconds at Grand Prix now plans city bus routes and helps your phone camera predict the perfect shot. If you wait for AI to feel "mainstream," you'll already be starting from the back of the grid.

I'm getting a creeping feeling of déjà vu here - how is this set of bullet points different from the previous set? The first set I would summarise as: be smart about what data you consider, use LLMs as a simulation for trying ideas, use personas and use AI for analysis and iterations. The second set I would summarise as: be smart about what data you consider, use simulations for trying ideas, use AI for data analysis and use AI because I said so. And a few dozen newlines ago, some other bullet points stated that people need to get better at collecting and summarising data. Anyway, let's move on...

"Your next lap.

  1. Define the finish line.
    Write one clear sentence about the decision, idea or project you’re working on. For eg. "I want to gather better ideas for blogs."
  2. Feed the right fuel.
    Gather a handful of relevant facts, constraints or examples—nothing more. For eg. "Here are blogs I already love."
  3. Run a simulation in ChatGPT.
    Prompt: "Act as my blog strategist. Given X goal, Y constraints and Z context, what are three moves to gain an advantage?"
  4. Test, tweak, repeat.
    Just like practice sessions, each prompt is data. Adjust and re run until the answer feels pole position ready.

Start treating each prompt like a mini qualifying lap, and you’ll build a compounding edge long before the rest of the grid hears the lights go out.

The takeaway is simple: If twenty drivers trust AI at 375 km/h, you can trust it at 3 km/h on your morning coffee run."

Another list of points with bolding. And what's the topic? Be smart about what data you consider, use AI for personas and analysis and run simulations. This is very typical of AI-generated text, where the same points are made repeatedly, in a slightly different way (if you're lucky), while lacking in any serious detail.

Anyway, I am really running out of beans here so let's wrap this up. But it seems I'm not the only one who was running out of momentum: the article has now used the same metaphor twice in quick succession - getting things re-drafted until they are 'pole position ready'. However, the author's lack of a qualified editor has meant that the hyphens are inconsistent. Inconsistent formatting has been a key part of this article in general and speaks to a lack of care and attention on the author's part.

Overall, I think this article is being incredibly disingenuous. It has taken many examples of machine learning that have been optimised over decades of use, and ascribed them to an omnipotent 'AI'. This neglects the hard work by the engineers and developers working behind the scenes to hone and perfect the programs over many years, instead implying that generative AI has appeared and suddenly pulled F1 into the light. I see no evidence that F1 is using generative AI at all, which completely defeats the point of this article.

Personally, I think there is a lot that we can learn from F1. Firstly, that success is a team effort. Leclerc and his strong neck may be the figurehead but behind him is an entire team, each with unique skills that they have developed over years. Every member of that team is vital to the success, even if it isn't immediately obvious: the mechanical engineer who designs the new aero wing on the front is as important as the chemical engineer who develops a better tyre compound, or the UX designer who optimises the buttons on the steering wheel.

Another key lesson from F1 is the importance of good communication and strong processes. The algorithm determines the optimum time for a pitstop based on the data and prior understanding. The time of the pit is communicated clearly to the driver. The chain of responsibility for that decision is known and not questioned at race time, saving confusion and ambiguity.

The well-planned processes in F1 mean that every crew member, for every task, knows what the next problem is and how it will be solved. There is no reason the same couldn’t be true for any business.

Another interesting discussion is to be had on when tactics from one industry should be copied, and when they should be passed over due to not being suitable. CRM (Crew Resource Management) in aviation says that the captain is not the ultimate authority and can be questioned by any member of the crew. This concept has saved thousands of lives since its introduction. The same strategy runs contrary to military discipline and could cost lives. For F1, shaving a few seconds off a pitstop is a valuable exercise that can make a difference in a race but is that level of perfection profitable in other industries? (Desirable, yes, but nowadays profit seems to be the only motive.)

As a hypothetical scenario, let’s say that Boeing have identified a part that fails early 10% of the time. It might cost £10M to reduce that to 1% of the time. However, in a safety-critical environment like a commercial jet, 1% might not be good enough, and so it is worth spending an additional £100M to take it from 1% to 0.1%.

If a consumer electronics business found that 10% of its units were failing during manufacturing, it might be worth spending £5k to solve the problem. But would it be worth spending another £50k to get to a 0.1% failure rate? Probably not.
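
To make the sums concrete (every number below is invented, and of course safety-critical decisions are never purely monetary):

```python
# Back-of-envelope cost-benefit: is a reliability fix worth paying for?
def worth_it(units, cost_per_failure, rate_before, rate_after, fix_cost):
    """True if the failures a fix prevents cost more than the fix itself."""
    failures_avoided = units * (rate_before - rate_after)
    return failures_avoided * cost_per_failure > fix_cost

# Safety-critical jet part: each failure is catastrophically expensive.
print(worth_it(5_000, 300_000_000, 0.01, 0.001, 100_000_000))  # True

# Consumer gadget: a failure costs one warranty replacement.
print(worth_it(100_000, 50, 0.10, 0.01, 5_000))    # True  - the first fix pays
print(worth_it(100_000, 50, 0.01, 0.001, 50_000))  # False - the second doesn't
```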

Ultimately, this is an article pushing the author’s product based on false pretences. However, reading into what he has learned about F1 absolutely backs up what I have been saying all along – that the most efficient work happens when robust processes and good communication combine with a clear purpose and the aim of creating the best possible products.

ps. Perplexity is currently facing lawsuits for violating robots.txt by spoofing IP addresses and for trademark infringement, among other things. If they aren’t going to honour a website’s request not to be indexed, can we trust them to act honourably in other ways?

pps. Substack as a company is not willing to demonetise nazi and white supremacist content (but is willing to censor and demonetise other types of content).

ppps. The author of this article sells an AI agent for LinkedIn content. He has an implicit bias towards encouraging you to adopt AI.

pppps. I cannot stop mentally coming back to the comment about Leclerc's neck being stronger than the author's bloodline. That's just such a weird, WEIRD thing to say.


A logo declaring that this content is 100% human made, no AI used.