This week the Lineshift team attended a memorable talk by Luc Julia, co-creator of Siri and now Scientific Director of AI at Renault: a striking encounter with a pioneer of voice AI, at once clear-headed and provocative.
The history of AI: birth in 1956?
Julia opened his talk with that question. Since its beginnings in 1956, AI has alternated between over-hyped expectations and "AI winters." Originally built on simple statistics, the field has gone through several phases:
- 1956: first use of the term “artificial intelligence” at Dartmouth (USA), with early ideas around cybernetics, information processing, automata theory, decision models.
- 1970-1990: invention of expert systems and logical decision trees. These tools were the first attempts to replicate expert human reasoning in specific domains.
- 1997: Deep Blue (IBM) defeats Garry Kasparov at chess, a global shock for a field that had until then rarely made headlines.
- 2000s: machine learning takes off, thanks to the internet and big data.
- Since 2010: the rise of deep learning and, more recently, of generative AI.
Julia emphasizes that AI is not intelligent per se; it is a highly specialized tool. At certain tasks it is faster, more available, and more reliable than humans, but no more "intelligent." He compares it to tools throughout history: Blaise Pascal's calculating machine (the Pascaline), the modern calculator, or GPS, which guides us without "thinking."
Siri: the origins of voice intelligence
- In 1997, Luc Julia and his team built one of the first conversational voice assistants, but it correctly interpreted only about 70% of spoken words.
- They responded with what he calls the "nightclub theory": in a loud nightclub you miss many of the words, yet context and humor usually carry you through the conversation, so an assistant can interpret commands the same way (a toy sketch of this idea follows the list below). This "more human" approach appealed to Steve Jobs, whose company Apple then acquired theirs.
- At one point Siri had about 180,000 users. When it shipped as a built-in feature of the iPhone 4S, unveiled on October 4, 2011, that number quickly jumped to 180 million.
- Since then Siri hasn’t evolved into a full-dialogue assistant; most of its uses remain single-step commands (“Call mom,” “play music,” “what’s the weather”).
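To make the "nightclub theory" concrete, here is a minimal Python sketch of intent recovery under imperfect speech recognition. It is purely illustrative, with invented intents and keywords, and says nothing about Siri's actual design: it simply scores whichever words survived recognition against keyword sets for known commands.

```python
# Toy illustration of the "nightclub theory": even when a recognizer
# drops or garbles many words, the surviving keywords are often enough
# to recover the user's intent. Intents and keywords are invented for
# illustration; this is not Siri's actual implementation.

INTENTS = {
    "call_contact": {"call", "phone", "dial", "mom"},
    "play_music": {"play", "music", "song", "album"},
    "get_weather": {"weather", "forecast", "rain", "temperature"},
}

def guess_intent(recognized_words):
    """Score each intent by keyword overlap with the recognized words."""
    words = set(recognized_words)
    scores = {intent: len(keywords & words) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# "please PLAY some MUSIC by ..." heard as only three words:
print(guess_intent(["play", "music", "by"]))  # -> play_music
# Too few salient words to decide:
print(guess_intent(["some", "by"]))           # -> None
```

Even at roughly 70% word accuracy, a couple of salient keywords are usually enough to pin down a single-step command, which is precisely the regime in which Siri operates.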
Why general-purpose generative AI is an ecological dead end
- Generative models like ChatGPT and Gemini are undeniably powerful: GPT-4o, by Julia's count, has over 1.2 trillion parameters and is trained on data drawn from all over the internet.
- Running these models requires huge resources: data centers consume large amounts of electricity and water.
- Julia argues that older, smaller models were more efficient and "frugal." Despite the enormous resources they consume, today's models still make frequent mistakes (an error rate of roughly 36%, by his estimate).
- He believes the future lies in vertical, domain-specific agents rather than one general AI.
The future belongs to specialized agents
- Instead of one universal AI, he advocates targeted, efficient, precise agents specialized in a single domain, such as Lineshift's automotive voice assistant.
- At Renault, for instance, he has overseen the integration of a "lean and relevant" onboard agent into the new R5.
- He ends with a message: don't just learn programming languages, which come and go; learn logic and mathematics, skills that endure and that matter for structuring thought, reasoning, and solving problems. AI won't replace those.
Final thoughts
Luc Julia urges us not to be passive about AI: understand it, tame it, and use it to build a more human future. And focus on what excites us, in the spirit of Steve Jobs' famous "Stay hungry, stay foolish."