Humans are error-prone and biased, but that doesn’t mean that algorithms are necessarily better. Still, the tech is already making important decisions about your life and potentially ruling over which political advertisements you see, how your application to your dream job is screened, how police officers are deployed in your neighborhood, and even predicting your home’s risk of fire.

But these systems can be biased based on who builds them, how they’re developed, and how they’re ultimately used. This is commonly known as algorithmic bias. It’s tough to figure out exactly how systems might be susceptible to algorithmic bias, especially since this technology often operates in a corporate black box. We frequently don’t know how a particular artificial intelligence or algorithm was designed, what data helped build it, or how it works. That makes addressing the biases of artificial intelligence tricky, but even more important to understand.

Typically, you only know the end result: how it has affected you, if you’re even aware that AI or an algorithm was used in the first place. Did you get the job? Did you see that Donald Trump ad on your Facebook timeline? Did a facial recognition system identify you?

Machine learning-based systems are trained on data. When thinking about “machine learning” tools (machine learning is a type of artificial intelligence), it’s better to think about the idea of “training.” This involves exposing a computer to a bunch of data - any kind of data - and then that computer learns to make judgments, or predictions, about the information it processes based on the patterns it notices.

For instance, in a very simplified example, let’s say you wanted to train your computer system to recognize whether an object is a book, based on a few factors, like its texture, weight, and dimensions. To train the system, you show the computer metrics attributed to a lot of different objects. A human might be able to do this, but a computer could do it more quickly.
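The book-recognition example described above can be sketched in a few lines of code. This is a minimal illustration, not a real system: the feature values, object labels, and the choice of a k-nearest-neighbors classifier are all invented here for demonstration. Each object is described by three made-up numbers (texture roughness, weight, height), and a new object is labeled by majority vote among the most similar training examples.

```python
import math

# Hypothetical training data: (texture_roughness, weight_in_grams, height_in_cm) -> label.
# Every number and label below is invented purely for illustration.
TRAINING_DATA = [
    ((0.7, 350, 23), "book"),
    ((0.6, 500, 28), "book"),
    ((0.8, 200, 18), "book"),
    ((0.1, 180, 14), "tablet"),
    ((0.2, 450, 24), "tablet"),
    ((0.9, 30, 10), "sponge"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features, k=3):
    """Label an object by majority vote of its k nearest training examples."""
    neighbors = sorted(TRAINING_DATA, key=lambda item: distance(item[0], features))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

print(classify((0.65, 400, 25)))  # prints "book"
```

Even this toy sketch shows how design choices shape outcomes: because weight is measured in the hundreds while texture sits between 0 and 1, weight dominates the distance calculation unless the features are rescaled. The builder's decisions about which data to collect and how to represent it quietly determine what the system "learns" - the same mechanism, at scale, behind algorithmic bias.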