The brain as a neural network: this is why we can’t get along

We’ll imagine that each of these networks is responsible for making (at least) one type of prediction. Now let’s also imagine that there’s a city election, and that we need to choose which candidate we’re going to support for Mayor. According to the model we’re using, there’s going to be a neural network responsible for making decisions about politics.

That neural network is going to take in a variety of inputs: our friends’ opinions, any tweets we’ve seen, campaign ads we’ve watched, podcasts we’ve listened to, etc. And it’s going to give us our political opinions as its output: which party we think is the “good” one, which candidates we expect to perform best when they’re elected, how high our tax rates should be, and so on. Essentially, this neural network encodes our political “model of the world”.

For simplicity, let’s imagine that it has only two outputs: 1) whether or not our favourite candidate (we’ll call her Jane Smith) will do a good job if she’s elected Mayor, and 2) which political party is the “good” one. Here’s a sketch of what that might look like:

*Aside: Notice that both of the outputs of this model are built on top of the same underlying model. This is an idealization that’s important in order to ensure the internal consistency of our worldview. If output 1 and output 2 were generated by completely independent neural networks, then we could hit points of blatant contradiction, where we apply different rules when making one kind of political prediction than when making another.*

Let’s say that we’d predicted Jane Smith would do a great job, but it later turns out that our prediction was completely incorrect. If we were perfectly rational, our next move would be pretty simple: we’d backpropagate through our model, tweaking its weights so that the next time we come across a candidate like Jane, we’ll be more skeptical.

But that backpropagation might have other consequences as well. Specifically, it might also affect the predictions we’d made about which party is right.

Now, a perfectly rational person might say, “so what? If I have to re-evaluate my party affiliation to accommodate the facts, then so be it.”

But humans aren’t rational. We’re profoundly tribal, and many of us identify with our politics very deeply. In many cases, social pressure and conditioning directly influence the loss function associated with output 2 (our party affiliation), and require that output 2 take on a certain specific value.

For example, the statement “I’m a Republican” necessarily requires that the value of output 2 is “the Republican Party is right”. Likewise, “I’m a Democrat” is a commitment to ensuring that output 2 is always “the Democratic Party is right”.

It’s worth taking a moment to think about the consequences of this constraint. As we’ll see, we can learn a lot about the math behind irrational human behaviour by doing so.

Let’s assume that Jane Smith was a member of our favourite political party. In the interest of neutrality, we’ll call it the Purple Party. Let’s also assume that we’re a partisan junkie, and there’s enough social and other pressure on us to support the Purple Party that we’re unwilling to budge on our party affiliation.

In that case, our response to Jane Smith’s performance in office is going to be a constrained optimization.
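The whole argument can be sketched in a few lines of NumPy. This is a hypothetical toy model, not code from the post: all names, weights, and inputs below are illustrative. It builds the shared-trunk network with the two heads described above, then runs two training regimes. In the "rational" regime, we backpropagate only the fact that Jane did badly, and watch output 2 drift because both heads share the same trunk. In the "partisan" regime, a heavy penalty term in the loss pins output 2 to "the Purple Party is right", which is one simple way to model the constrained optimization.

```python
# Hypothetical sketch of the two-output "political model": four inputs feed a
# shared hidden layer, which feeds two heads -- output 1 ("Jane will do a good
# job") and output 2 ("the Purple Party is right").
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, x):
    W, w_jane, w_party = params
    h = np.tanh(W @ x)                       # shared trunk
    return sigmoid(w_jane @ h), sigmoid(w_party @ h), h

def train(params, x, lam=0.0, steps=500, lr=0.1):
    """Fit output 1 to the observed fact "Jane did badly" (target 0).
    If lam > 0, also penalize output 2 for straying from
    "the Purple Party is right" (target 1) -- the partisan constraint."""
    W, w_jane, w_party = (p.copy() for p in params)
    for _ in range(steps):
        p_jane, p_party, h = forward((W, w_jane, w_party), x)
        e_jane = p_jane - 0.0                # BCE gradient w.r.t. Jane logit
        e_party = lam * (p_party - 1.0)      # constraint gradient on party logit
        dh = (e_jane * w_jane + e_party * w_party) * (1 - h**2)
        w_jane -= lr * e_jane * h
        w_party -= lr * e_party * h
        W -= lr * np.outer(dh, x)            # shared-trunk update
    return W, w_jane, w_party

x = np.array([1.0, 0.5, -0.3, 0.8])          # some bundle of political inputs
init = (rng.normal(size=(3, 4)), rng.normal(size=3), rng.normal(size=3))
pj0, pp0, _ = forward(init, x)

# Rational update: output 2 moves even though we never trained on it directly,
# because both outputs are built on the same underlying weights.
pj1, pp1, _ = forward(train(init, x), x)

# Partisan update: the penalty holds output 2 at "Purple is right" throughout.
pj2, pp2, _ = forward(train(init, x, lam=10.0), x)

print(f"start:    P(Jane good)={pj0:.2f}  P(Purple right)={pp0:.2f}")
print(f"rational: P(Jane good)={pj1:.2f}  P(Purple right)={pp1:.2f}")
print(f"partisan: P(Jane good)={pj2:.2f}  P(Purple right)={pp2:.2f}")
```

The specific architecture (one tanh layer, three hidden units) and the penalty weight `lam` are arbitrary choices made for the sketch; the point is only that a shared trunk couples the two predictions, and that adding a constraint term to the loss changes what the "update on Jane" is allowed to do.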
