«In early February OpenAI, the world’s most famous artificial-intelligence firm, released Deep Research, which is “designed to perform in-depth, multi-step research”. With a few strokes of a keyboard, the tool can produce a paper on any topic in minutes. Many academics love it. (...)
Should you shell out $200 a month for Deep Research? (...) To help you decide, your columnist has kicked the tyres of the new model. How good a research assistant is Deep Research, for economists and others?
The obvious conclusions first. Deep Research is unable to conduct primary research, from organising polls in Peru to getting a feel for the body language of a chief executive whose company you might short. Nor can it brew a coffee, making it a poor substitute for a human assistant. Another complaint is that Deep Research’s output is almost always leaden prose, even if you ask it to be more lively. Then again, most people were never good writers anyway, so will hardly care if their AI assistant is a bit dull. (...)
When it comes to data questions requiring more creativity, however, the model struggles. (...) The model has even greater difficulty with more complex questions, including those involving the analysis of source data produced by statistical agencies. For such questions, human assistants retain an edge.
The second issue is the tyranny of the majority. Deep Research is trained on an enormous range of public data. For many tasks, this is a plus. It is astonishingly good at producing detailed, sourced summaries. (...)
Yet the sheer volume of content used to train the model creates an intellectual problem. Deep Research tends to draw on ideas that are frequently discussed or published, rather than on the best ones. Information volume tyrannises information quality. The same happens with statistics: Deep Research is prone to consulting sources that are easily available (such as newspapers), rather than better data that may sit behind a paywall or be harder to find.
Something similar happens with ideas. (...) In other words, those using Deep Research as an assistant risk learning about the consensus view, not that of the cognoscenti. That is a huge risk for anyone who makes their income through individual creativity and thought, from public intellectuals to investors.
The idiot trap
A third problem with employing Deep Research as an assistant is the most serious. It is not an issue with the model itself, but how it is used. Ineluctably, you find yourself taking intellectual shortcuts. Paul Graham, a Silicon Valley investor, has noted that AI models, by offering to do people’s writing for them, risk making them stupid. “Writing is thinking,” he has said. “In fact there’s a kind of thinking that can only be done by writing.” The same is true for research. For many jobs, researching is thinking: noticing contradictions and gaps in the conventional wisdom. The risk of outsourcing all your research to a supergenius assistant is that you reduce the number of opportunities to have your best ideas.
With time, OpenAI may iron out its technical issues. At some point, Deep Research may also be able to come up with amazing ideas, turning it from an assistant into the lead researcher. Until then, use Deep Research, even at $200 a month. Just don’t expect it to replace research assistants any time soon. And make sure it doesn’t turn you stupid. »