In short, the users of the language “know about” the world, not the language itself. Baking particular sorts of data into the language does not help it “learn” or “understand.” It is not able to generalize or deal with exceptions to its rules. Wolfram has shown how the language can grab the flags of different countries and treat them as a dataset that can be manipulated for geographical or aesthetic analysis—but that’s only because the code behind the Wolfram Language has explicit handling that associates flag images with country names. If the nation of Davidstan decides to change its flag to an animated, rotating sphere with ultraviolet paint on it, the Wolfram Language won’t be able to handle that without modification.
Bottling all this real-world data together with the language has another, more pernicious side effect. If I want to use a different knowledge base with Wolfram, or use Wolfram’s knowledge base with a different underlying language, I can’t do that unless Wolfram gives the go-ahead, because he has locked all the pieces together. Given Wolfram’s penchant for suppressive legal threats around intellectual property, I recommend against building anything in the Wolfram Language, lest Wolfram decide that he owns it.
Certainly, there are conveniences to be gained from the tight integration that Wolfram offers, because there’s no longer a need to link together disparate data sources, possibly via multiple languages and platforms. The widespread use and usefulness of Mathematica is itself evidence in support of the Wolfram Language’s capabilities. And the language itself does have some interesting and potentially powerful technical features, some of them, such as pattern matching in function definitions, borrowed from programming languages like ML and Haskell. But the cost in terms of flexibility is prohibitive: If any part of Wolfram can’t do what you want (or do it where you want it), you’re out of luck. And Wolfram holds all the cards.
The deceptive claim that the Wolfram Language has “a model of the world built in” is reminiscent of the inflated assertions about artificial intelligence—some here in Slate!—declaring that, say, IBM’s Watson computer is already artificially intelligent, just not intelligent like us. In truth, any meaningful definition of “artificial intelligence” will revolve around the ability to get by in our world. For one, an intelligent machine needs to be able to converse easily and convincingly with people about arbitrary topics—the challenge set by Alan Turing over 60 years ago in what became known as the Turing Test. An accurate and working “model of the world” would allow for such conversation. Here, no computer has yet succeeded, and the Wolfram Language is certainly not that milestone. Wolfram’s supposed “model of the world” is nothing but a piecemeal heap of data, neatly organized but utterly meaningless to the underlying machinery. The only way its “model of the world” could be viable is if the Wolfram Language really did take over the entire planet, so that our everyday lives were constrained and dictated by its logic. This is evidently Wolfram’s goal, and I hope he never reaches it.
The intellectual dishonesty in the presentation of the Wolfram Language, whether intentional or unintentional, disturbs me, as I’m sure it does many other computer science professionals. There is little that is genuinely new or different in Wolfram as compared to, for example, the Urbit language project, which aims to allow for integrated ad-hoc networks of computation across arbitrary numbers of machines and devices, big and small. To be sure, Urbit is perplexing—the intended use case seems to be for some postapocalyptic libertarian wasteland. But it’s far more visionary than Wolfram’s rehashed snake oil. Computer scientists should police his claims vigilantly in the public sphere. Otherwise, we will all look bad.