OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step

The ChatGPT maker reveals details of what's officially known as OpenAI-o1, which shows that AI needs more than scale to advance.

OpenAI made the last big breakthrough in artificial intelligence by increasing the size of its models to dizzying proportions, when it introduced GPT-4 last year. The company today announced a new advance that signals a shift in approach: a model that can "reason" logically through many difficult problems and is significantly smarter than existing AI without a major scale-up.

Rather than summon up an answer in one step, as a large language model normally does, the new model reasons through the problem, effectively thinking out loud as a person might, before arriving at the right result.

"There are two paradigms," Murati says. "The scaling paradigm and this new paradigm. We expect that we will bring them together."

LLMs typically conjure their answers from huge neural networks fed vast quantities of training data. They can exhibit remarkable linguistic and logical abilities, but traditionally struggle with surprisingly simple problems, such as rudimentary math questions that involve reasoning.

Reinforcement learning has enabled computers to play games with superhuman skill and to do useful tasks like designing computer chips. The technique is also a key ingredient in turning an LLM into a useful and well-behaved chatbot.

"The [new] model is learning to think for itself, rather than kind of trying to imitate the way humans would think."

OpenAI says its new model performs markedly better on a number of problem sets, including ones focused on coding, math, physics, biology, and chemistry.
The new model is slower than GPT-4o, and OpenAI says it does not always perform better, in part because, unlike GPT-4o, it cannot search the web and it is not multimodal, meaning it cannot parse images or audio.

In July, Google announced AlphaProof, a project that combines language models with reinforcement learning for solving difficult math problems.

"I do think we have made some breakthroughs there; I think it is part of our edge," Chen says. "It's actually fairly good at reasoning across all domains."

Noah Goodman, a professor at Stanford who has published work on improving the reasoning abilities of LLMs, says the key to more generalized training may involve using a "carefully prompted language model and handcrafted data" for training. He adds that being able to consistently trade the speed of results for greater accuracy would be a "nice advance."
Hispanic household wealth has tripled over the last decade
Latinos lost up to two-thirds of their median household wealth in the wake of the Great Recession.

A forecast for the U.S. Latino economy through 2029 shows the cohort's economic output will surpass Japan's by 2024 and Germany's by 2027.

"A young Latino in the U.S. turns 18 every thirty seconds."
Something New: On OpenAI's "Strawberry" and Reasoning

It is amazing, still limited, and, perhaps most importantly, a signal of where things are heading.

In fact, it can now beat human PhD experts in solving extremely hard physics problems.

The AI "thinks" about the problem first, for a full 108 seconds (most problems are solved in much shorter times). You can see its thoughts.

GPT-o1 does things that would have been impossible without Strawberry, but it still isn't flawless: errors and hallucinations still happen, and it is still limited by the "intelligence" of GPT-4o as the underlying model.
Using GPT-o1 means confronting a paradigm change in AI.