AI is good at input
Jeremy Keith published a transcript and the slides of his talk about Web3, AI and Design. The AI part is helping me a lot to get a better understanding of the topic.
Use these tools for inputs, not outputs. I would never publish the output of one of these tools publicly. But I might use one of these tools at the beginning of the process to get over the blank page. If I want to get a bunch of mediocre ideas out of the way quickly, these tools can help.
Here is my compressed list of Jeremy's statements:
- GPT: AI today typically means large language models and machine learning. A large amount of data is taken and numeric tokens are assigned to it. A token represents a bigger item, like a phrase in a piece of text. The GPT part of ChatGPT stands for Generative Pre-trained Transformer. The pre-training is the tokenized big data.
The generative part is about combining—or transforming—tokens in a way that should make probabilistic sense.
It is about applied statistics, because it works on implied correlations. You can understand it as a lossy compression of the big data, like a JPG image is a lossy compression of the real image. Large language models identify statistical regularities in text. Any analysis of the text of the Web will reveal that phrases like “supply is low” often appear in close proximity to phrases like “prices rise.” A chatbot that incorporates this correlation might, when asked a question about the effect of supply shortages, respond with an answer about prices increasing. If a large language model has compiled a vast number of correlations between economic terms—so many that it can offer plausible responses to a wide variety of questions—should we say that it actually understands economic theory?
- Overfitting: The output of a GPT is too close to the original data. This is called plagiarism.
- Hallucinations: The output of a GPT strays too far from reality.
Another word for this is lying. Although the truth is that all of the output is a form of hallucination—that’s the generative part.
- Autocomplete: AI can be seen as a kind of advanced autocomplete. Large language models do this on a big scale.
- Transformation: LLMs are good at transforming. Text to speech, speech to text, text to images, long form to short form, short form to long form. Even coding can be seen as a kind of transformation.
- How to use: The AI tools will not help with understanding and defining the problem to solve. The tools are good at quantity, not quality. Use them for input during your design process, to get a bunch of mediocre ideas quickly, not for output.
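The “advanced autocomplete” idea can be made concrete with a toy sketch of my own (not from the talk): a tiny bigram model that counts which word follows which in a corpus, then “autocompletes” by picking the statistically most likely successor. Real LLMs use learned token embeddings and transformer networks rather than raw counts, but the underlying principle, predicting what usually comes next, is the same.

```python
# Toy illustration of "autocomplete as applied statistics".
# The mini-corpus and function names are invented for this sketch.
from collections import Counter, defaultdict

corpus = (
    "when supply is low prices rise . "
    "when supply is low prices rise quickly . "
    "when supply is high prices fall ."
).split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent next word seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("supply"))  # -> "is"
print(autocomplete("prices"))  # -> "rise" (seen twice, vs. "fall" once)
```

This also shows why the output “should make probabilistic sense” without involving any understanding: “prices” is followed by “rise” simply because that correlation is more frequent in the data, which is the same mechanism behind Ted Chiang's supply-and-prices example above, just at a vastly smaller scale.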
Links
- “Web3” and “AI”, transcript of Jeremy's talk
- Slides of Jeremy's talk
- Machine Learning, tagging of developments related to the topic on Jeremy's site
- ChatGPT Is a Blurry JPEG of the Web, by Ted Chiang
- The Intelligence Illusion: A Practical Guide to the Business Risks of Generative AI, by Baldur Bjarnason
- Generative AI: What You Need to Know, a free resource that will help you develop an AI-bullshit detector, by Baldur Bjarnason
Jeremy's transcript, transformed by ChatGPT into a summary, no longer than 200 words
In a talk at the Design Business Association gathering in Brighton, the speaker discusses the concepts of “Web3” and “AI”. Regarding “Web3”, the speaker explains that it is a meaningless term often used to sell products or solutions. The mention of blockchain in relation to Web3 is seen as a solution in search of a problem. Moving on to “AI”, the speaker describes it as a term that encompasses various technologies like large language models and machine learning. These models work by compressing data into tokens, which can generate outputs that may or may not match objective reality. The speaker suggests that AI can be seen as an advanced autocomplete tool, useful for transformations and generating quick ideas, but not as a problem-defining tool. The speaker warns against buying into the hype surrounding AI, driven by venture capital investments, and encourages businesses to focus on practical applications of the technology. The talk concludes with recommendations for further reading on AI and machine learning.