This recent article in The Observer is interesting, but at the same time I feel it contributes to a somewhat false narrative about the role of "theory" in our thinking lives.
The article concentrates on predictions: does this "machine learning system" (ML system) predict something more accurately than the various humans who have come up with theories? This is a valid question, but it's not the whole story of why we need theories. The quality of prediction is the gold standard by which we judge theories: if a theory predicts things wrongly, it is unlikely to be useful. In that light we can see the premise of the article: if theories are producing less accurate predictions than ML systems, are theories obsolete?
However, prediction isn't all a theory is for, whether in science or in practical thinking. A theory is a model, not just a prediction: it's a set of ideas about how a system works, about how a set of starting conditions turns into some outcomes. An ML system produces predictions ("given these conditions, this will happen") and sometimes that is all we care about. A great example from daily life is the weather forecast: I don't need to know, and really don't care, how you come to predict that it is going to rain at lunchtime. If your predictions are reliable, then they are valuable; they stop me getting caught in the rain.
At the same time, there's something important about the weather system (and my personal relationship with it) that isn't true of every system we relate to. The weather is beyond me; I cannot influence it. Prediction is the limit of what I can usefully do. Yet for lots of systems, we want to know what might happen precisely because we want to change it, or to fix it if it stops working the way we expect.
For those systems, we need a theory: a model, or at least an idea of how things work. Only with some notion of a mechanism, a flow of actions, can we seek to affect the outcome. For example: "X happens because of Y, so change Y and see what happens."
If we believe the ML system accurately simulates the part of the world we are interested in, we can use it to test our theories, but that doesn't mean we have stopped making theories. Likewise, we could imagine delegating to a computer the job of varying each of the ML system's input conditions, one by one, until X changes. It might seem that this is "not using theory", but in fact it is just automating the testing of theories; it isn't the removal of theory from the process. A theory, a legible understanding of how different factors come together to make something happen, is essential to changing outcomes.
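To make the "automating the testing of theories" idea concrete, here is a minimal sketch. It treats a model as a black-box simulator and varies one input at a time, recording which interventions flip the predicted outcome; each such input is a candidate theory ("X happens because of this"). The `predict` function is a hypothetical stand-in for any ML system, and the names and numbers are illustrative assumptions, not anything from the article.

```python
def predict(conditions):
    # Hypothetical stand-in for an ML system: outcome "X" occurs
    # when a weighted sum of the inputs crosses a threshold.
    score = 2.0 * conditions["y"] + 1.0 * conditions["z"]
    return "X" if score > 3.0 else "not X"

def find_influential_inputs(conditions, interventions):
    """Vary each input condition one by one and record which
    interventions change the predicted outcome. Each flip suggests
    a theory worth testing: 'X happens because of this input'."""
    baseline = predict(conditions)
    candidates = []
    for name, delta in interventions.items():
        trial = dict(conditions)   # copy, then intervene on one input
        trial[name] += delta
        if predict(trial) != baseline:
            candidates.append(name)
    return baseline, candidates

baseline, candidates = find_influential_inputs(
    {"y": 2.0, "z": 0.5},          # starting conditions
    {"y": -1.5, "z": -0.2},        # one intervention per input
)
print(baseline, candidates)        # prints: X ['y']
```

The point of the sketch is that the loop does not remove theory: every pass through it is a tiny "change Y and see what happens" experiment, exactly the theory-testing move described above, just run by a machine.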
P.S. There are other conceptual problems with ML systems and "explainability", or the lack of theory. One example: as an abstract set of operations on its inputs, an ML system can be right most of the time yet fail in unpredictable ways. Without a theory, there is no way beyond "test and learn" to find out whether this will happen, and if it has happened, no clear way to fix it. I may come back to some of the others in a future piece.