March 7, 2024
Anthropomorphising is a problem as society debates AI. The term “hallucination” is used when the AI is “just plain wrong”. You say this is better described as having the “qualities of a great bullshitter” (“Silicon dreamin’”, March 2nd). But such a person has, at heart, an intent, such as to evade criminal prosecution or to win an election (or both). Large language models have no intent; as you pointed out, they merely produce the most probable next word for a given input.
Our intent as users of AI may be to use a reliable tool, one marketed as a useful assistant or an alternative to reading Wikipedia. The AI boosters’ intent is to sell a product or to support ever larger valuations of their companies. Who’s the real bullshitter there?
Seth Hays
Editor
Asia AI Policy Monitor
Taipei, Taiwan