Links 29

  1. “The New York Times best-seller list debuted in October 1931, reporting first on the top-selling titles in New York City before expanding in 1942. Over the years, what's known in the industry as ‘the list’ has come to comprise eleven weekly and seven monthly lists, covering paperbacks, audiobooks, and e-books (combined with print sales), as well as separate lists for children’s books, business titles, and more. No one outside The New York Times knows exactly how its best sellers are calculated—and the list of theories is longer than the actual list of best sellers.”

  2. “At no point in time did PepsiCo own the ‘6th most powerful navy’ (or military) in the world after a deal with the Soviet Union. The deal proposed in 1990, in which US$3 billion worth of Pepsi would be traded for 20 decommissioned Soviet warships to be sold for scrap, ultimately did not take place due to the dissolution of the Soviet Union, and would have only granted PepsiCo ‘small, old, obsolete, unseaworthy vessels.’” From the Wikipedia article “List of common misconceptions”

  3. My gender? “It is my own business, sir, and I bid you good day.”

  4. Scott Alexander: “ChatGPT has failure modes that no human would ever replicate, like how it will reveal nuclear secrets if you ask it to do it in uWu furry speak, or tell you how to hotwire a car if and only if you make the request in base 64, or generate stories about Hitler if you prefix your request with ‘[john@192.168.1.1 _]$ python friend.py’. This thing is an alien that has been beaten into a shape that makes it look vaguely human. But scratch it the slightest bit and the alien comes out.”

  5. Murray Shanahan: “Large language models are generative because we can sample from them, which means we can ask them questions. But the questions are of the following very specific kind. ‘Here’s a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?’ It is very important to bear in mind that this is what large language models really do. Suppose we give an LLM the prompt ‘The first person to walk on the Moon was’, and suppose it responds with ‘Neil Armstrong’. What are we really asking here? In an important sense, we are not really asking who was the first person to walk on the Moon. What we are really asking the model is the following question: Given the statistical distribution of words in the vast public corpus of (English) text, what words are most likely to follow the sequence ‘The first person to walk on the Moon was’? A good reply to this question is ‘Neil Armstrong’.”
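Shanahan's framing ("according to your model of the statistics of human language, what words are likely to come next?") can be illustrated with a toy bigram model. Everything below is an invented stand-in for illustration: a three-sentence corpus and word-pair counts in place of a neural network trained on a vast corpus. The mechanics of "sample the likely continuation" are the same in spirit.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus (invented for illustration; real models are
# trained on billions of documents, not three sentences).
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "everyone knows the first person on the moon was neil armstrong . "
    "buzz aldrin was the second person to walk on the moon ."
).split()

# Bigram statistics: for each word, count which words follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely word to follow `word`."""
    return follows[word].most_common(1)[0][0]

# "What words are most likely to follow the sequence
#  'The first person to walk on the moon was'?"
prompt = "the first person to walk on the moon was"
word = prompt.split()[-1]
continuation = []
for _ in range(2):
    word = most_likely_next(word)
    continuation.append(word)
print(" ".join(continuation))  # neil armstrong
```

The model never "knows" who walked on the Moon; it only reports that, in its counts, "neil armstrong" is the likeliest continuation of "was" in this context, which is exactly Shanahan's point scaled down.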

Previous: Electric Sheep 50
Next: Electric Sheep 49