Title: Reprogramming the American Dream: From Rural America to Silicon Valley – Making AI Serve Us All
Author: Kevin Scott
Completed: March 2021
Overview: Book club read with PSESD and computer science teachers across the state. The book promised to be about AI but spent the first several chapters as an autobiography without covering AI. Then it jumped quickly into AI models that other teachers (particularly elementary school teachers with less background in CS) couldn’t follow. It ended with some of his political views but didn’t provide much more insight than you might get in a Wired article. There are some interesting points made, but it’s not one I’d highly recommend.
Highlights: (My comments below each highlight)
- I used to watch my grandfather when he worked, not realizing then that his approach was very much that of an engineer or scientist. He would examine a broken piece of consumer technology like a toaster or blender and, through a process of elimination, begin to diagnose what was still functioning and what was broken. Like a computer scientist, he used abstraction to suppress the more complex details that were not relevant to the problem he was addressing. There was no need to work inside the electric motor or heating element, for example, if the problem was one level above that. A broken toaster or blender was just a black box—completely opaque—to the stymied customer who brought it to my grandfather, but for him it was a puzzle to be solved. He could take the problem all the way down through the layers of complexity to bare metal if he had to. For him there was no abstraction boundary. He would just punch through it. There were always new components and functionality to be discovered.
- Maker culture isn’t new
- He was unflappably confident that he could figure things out, that each exploration of a problem, no matter how frustrating, helped him come to a better understanding of himself and his relationship with the world around him, and that human need itself is what makes a problem worth solving. It’s this understanding of problem solving and my own humanity that has never once made me wonder what my role is in a world with increasingly capable AI. I’m a human with curiosity about the fascinating canvas of nature and people and our complex creations, with compassion for the problems other humans have, and with a desire to help solve them. To varying extents, this is something we all share. AI is a useful tool for exploring that curiosity and for solving human problems. Neither it, nor anything else, will ever take away that curiosity and compassion.
- I found myself reading random bits of content that wasn’t delivering long-term value. I decided to do an experiment where I dramatically increased my consumption of high-quality information. I implemented a 70/25/5 rule where 70 percent of my media consumption time needed to be spent on high-standards, editorially and/or peer-reviewed, long-form content related to my work and professional interests. That covers a lot of ground, given my job. Twenty-five percent of my media consumption time was to learn something new and different, not necessarily related to my job, and not necessarily from one of the media sources I tend to favor. And 5 percent is everything else, which is enough to scan through blog headlines a couple of times a day, and maybe spend a few minutes a week checking out what friends and family might have posted to Instagram.
- This meant that, once again, I started reading more peer-reviewed computer science papers and textbooks. I started reading one nonfiction book a week. I subscribed to Nature and Science, two high-quality weekly journals for the natural sciences that contain both rigorously peer-reviewed articles and content authored with high standards of editorial review. I subscribed to the Economist, the Financial Times, the New Yorker, and the Atlantic. I curated a high-quality set of podcasts to listen to while commuting. I surrounded myself with high-quality, genuinely informative content. And in my 25 percent bucket I might watch a YouTube video from This Old Tony or Jimmy DiResta to learn how to make something, or, make myself read something thoughtful that is outside of my comfort zone.
- How can we make this the default? With Instagram, Facebook, Twitter, Reddit, etc., in our pocket, how do we make “quality media” the default?
- Never turn the community development process over to any agency that does not involve the people of the community.
- Important to keep in mind many AI tools are being built as open source software, which means that communities of developers can easily form around the tools and participate in their creation. Having lots of people developing these tools makes it more likely that they will serve a broader set of purposes than if individual companies develop them. Perhaps more important, the open part of open source means that the tools are available for anyone to use who can clear what, for professional developers, are modest hurdles.
- Importance of Open Source is often overlooked in education
- Despite an already fuzzy, ill-defined name, the use of the word intelligence in the name for this collection of technologies further lends to the confusion, particularly because we still don’t have a crisp, universally accepted definition of the human intelligence AI seeks to mimic.
- What is AI? We’re still not in agreement
- TV technology can’t in any meaningful way help improve itself, nor does it serve as a platform for building other forms of technology. It consumes tech versus empowering other technology. And if Grandma Ischer were still alive today, she might get a chance to witness another shift as people’s consumption of this technology is replaced by newer tech like streaming.
- I would argue that platform technologies—like electricity and refrigeration—tend to have bigger and more durable impacts on society than non-platform technologies.
- Dropout turns out to be one of the most effective approaches to preventing overfitting in DNNs. A simple way to think about dropout is that it’s like selective forgetting. If the network remembers everything that it has seen, then it may never learn to generalize beyond its memory, which is a bad thing. The goal of any machine learning system is to be able to generalize beyond the data on which it has been trained, to be able to respond with good answers when it is asked a question about data that it has never seen before. It’s the same as wanting your child to be able to eventually do arithmetic more complicated than what they’ve memorized from flash cards.
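The “selective forgetting” he describes can be sketched in a few lines. This is my own illustration, not from the book: a minimal “inverted dropout” function (the name, parameters, and NumPy implementation are my assumptions) that randomly zeroes activations during training and rescales the survivors so the expected output is unchanged.

```python
import numpy as np

def dropout(activations, drop_prob=0.5, training=True, rng=None):
    """Inverted dropout: randomly zero units during training, scaling
    the survivors by 1/keep_prob so the expected activation is unchanged.
    At inference time (training=False) the input passes through untouched."""
    if not training or drop_prob == 0.0:
        return activations
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob  # True = keep this unit
    return activations * mask / keep_prob
```

Because some units are randomly silenced on every training pass, the network can’t lean on memorizing any single pathway, which is exactly the generalization pressure the highlight describes.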
- Plenty, a fully autonomous farm in San Carlos, California, backed by venture capital. They are doing the equivalent of thirty acres of farming within a fraction of an acre indoors within a robotic farm. All of that production happens with less than one-eighth of an acre footprint for the grow room, and about two times that if you include the seedling room and thermal systems for climate creation. Not only is it more efficient, but AI helps to ensure they don’t lose a single plant. Furthermore, so-called fresh produce that is actually shipped two thousand miles from farm to market can now be supplied much closer to where the food will be consumed.
- I’m a huge believer in every individual having the opportunity to support themselves and their family through their work, and for a robust social safety net for those who can’t. While I admire the spirit of what Senator Sanders and Representative Khanna are trying to accomplish for workers through this legislation, I worry that the incentive for businesses might not be the ones intended. When the cost of human labor increases—particularly that of low- and mid-skill repetitive labor—businesses with access to automation technology will do the math, and when projected labor costs exceed total cost of ownership for automation, automation will displace labor. We’ve seen this pattern play out over and over again, most recently with fast food restaurants replacing cashiers with kiosks in markets with high cost for low-skill labor. If the workers whom politicians are attempting to help are those in a logistics operation, for instance, the legislation may be pushing in the wrong direction on efforts already underway to fully automate those jobs. By making low- and mid-skill jobs more expensive and creating controversy around them, politicians may be introducing an additional incentive to accelerate automation efforts.
- This goes against his whole argument that AI will cause jobs to change but not be eliminated. He states earlier that AI will lead to more jobs and specifically mentions grocery stores. Now he states that forcing employers to pay workers a living wage or be taxed is going to eliminate jobs. How can it be both?