There are two types of people left in the tech industry: Those who make AI, and those who use it.
That may be the saddest sentence I’ve ever written to open a post, for a couple of reasons.
First, as predicted some years ago, the AI hype machine in 2025 has eaten all the lunches and sucked all the oxygen out of the room. As a result, it has “erased” a lot of jobs.
Second, the very purpose of the AI machine has been to create a completely opaque wall between the user interface and the processing behind it. This aspect is critical, and I’ll expand on it in a minute.
Third, most people have no fixed idea what “AI” really means in different contexts. Sure, there’s the technical definition, but even that has become a mishmash of computer science and brain science that tries to encompass literally thousands of unique use cases, from self-driving cars to virtual reality to mobile and browser chat apps that act as invisible, knowledgeable, always-on sidekicks.
You know, it’s a co-pilot. But I ask you this: Speaking of “co-pilot,” would you trust today’s AI to land your plane?
Ha! I’m messing with you. It already does!
Anyway, the first step in future-proofing your career against AI is to understand what it is and isn’t.
What Is the Most Important Skill for AI?
If you want to work with AI, you need to understand the machine learning technology that allows it to work. You also need to understand the data that goes into the system, without which an AI solution is useless. Both of these require concrete math skills.
AI Can’t Take Your Job, But It Might Erase It
Note that above I said that AI “erased” a lot of jobs, not that AI “has taken” a lot of jobs. Robots aren’t storming cubicle mazes and manhandling tech employees and pushing them out of chairs. But the AI hype machine, with all its lofty promises, is influencing leadership at companies of all stripes to make room for the new robot overlords.
This is not uncommon in any tech cycle, and it did not happen overnight.
The first dystopian, hyperventilating “AI is going to take your job” articles started appearing in mainstream media in the early 2010s. I know this because I was working on one of the first generative AI platforms back then, and some of those articles were written about us.
I say “some” and not “a lot” because we did everything in our power to make it clear that we weren’t coming for jobs. Rather, we were assisting the people doing those jobs. Our platform wasn’t a thinking entity; it was a tool, a data-crunching tool, to dumb it down. We were just being honest and transparent, which, in the tech world, has become an increasingly dangerous proposition.
Still, those doomsday articles got written, and they proliferated with each advancement in LLMs and processing power and the sheer human ingenuity of AI developers.
Fast-forward to today, and yeah, AI is erasing jobs at an alarming rate. But again, these are not the actions of robots armed with lasers. It’s humans selling technology to humans, with company leaders cutting payroll to make way for the more efficient and less expensive promise as whispered by the AI hype machine.
How do you future-proof your skill set against this onslaught?
You Won’t Win at AI by Becoming a Prompt Expert
Let’s talk about what AI, in 2025, is not.
It’s not artificial intelligence, but it is. And that’s the root of the problem. We’re all saying the same two words to mean very different things, and we all accept calling whatever we want “artificial intelligence.” We do this even when it can result in a catastrophic level of misunderstanding and mismatched expectations.
I do it. I sling those two words with abandon. I’m doing it all over this article right now. Not even in a jokey way. I’ve just stopped caring.
AI doesn’t mean a computer will become a living thing. That’s science fiction, usually starring Scarlett Johansson. Artificial general intelligence (AGI), a concept that can best be described as “OK, robots are thinking for themselves now,” is also vague because the characteristics are elastic, so anyone can make it mean whatever they want it to mean. Thinking? That’s not happening, and there is no clear path to when or even how it will happen.
I can state that emphatically because if it does happen for real, I’m out of the game. See you in hell.
The best way to describe AI in 2025 is still the simple definition I came up with in 2010. It’s If-This-Then-That (IFTTT), but really, really fast. It’s decision-making at hyperspeed. But it does not introduce human ingenuity, intuition or unlearned experience, like when we copy the actions of a mentor without totally understanding why we should. It just fakes those things. With math.
Thus, AI prompts are not the key to befriending this thinking machine and learning its ways better than everyone else. Those prompts are just a user interface — that thick, opaque user interface I described in the opening of this post. And when everyone is locked into the same opaque user interface, it’s really easy to richly monetize the processing behind it. Like Apple’s ability to take a percentage of all the money transacted through iOS apps, for instance.
Although Apple just found out the hard way that maybe that moat can only get dug so deep before the courts get involved.
Anyway, because prompts are just a user interface, that makes “prompt engineers” no more than superusers. And one thing technology is very good at — especially in the business arena and mostly in enterprises — is evolving in a way that makes its own superusers obsolete.
There used to be very highly paid people whose job was “word processing” or “data entry,” and that has evolved into today’s highly endangered jobs like “business intelligence analyst” or “SEO expert.”
So what do you do to future-proof? Well, if you don’t like math, I’ve got bad news.
Data Is Back, and Math Came With It
When everyone on the AI hype train started messing around with getting good at prompts, I started dusting off my data skills.
See, what we were producing as AI back in 2010 was just the distillation of insights and knowledge in very niche domains like sports, finance, or marketing. We already knew the arena of context, e.g. “baseball,” so we codified that context arena and its rules and decision trees, wrote what I call “small language models” and then did a boatload of data processing.
The evolution of AI that started in the late 2010s sprang up around the goal of “general intelligence.” In other words, this meant having answers to any question that anyone might ask. The database for that intelligence is the internet, which is the worst database in the history of data. Much of its data is inaccurate, way out of context and/or locked behind a secure door. The AI developers said “hold my beer” and went ahead with the data processing anyway.
The prompt exists to narrow that massive, infinite context arena, which is why simple Q&A text prompts are so bad at returning accurate and in-context results.
Now, when you shove an entire database into that “prompt” structure — and I love the imagery of that — you get far more reliable, far more actionable insights. But the accuracy and context of those results depends solely on the accuracy, context and structure of the database you’re shoving. And this is the only control you have over the results you’re going to get.
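To make that concrete, here’s a minimal sketch in Python of what “shoving a database into the prompt” looks like under the hood. The table, rows, and question are invented for illustration, and the sketch stops at building the prompt string rather than calling any particular model. The point is that the model only ever sees what lands in that string, so the answer is capped by the quality of the rows you serialize into it.

```python
# Sketch: serialize trusted database rows into a prompt's context.
# Table name, rows, and question are all hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, quarter TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", "Q1", 120000.0), ("West", "Q1", 95000.0)],
)

rows = conn.execute("SELECT region, quarter, revenue FROM sales").fetchall()

# The model's entire view of the world is this string, so the accuracy
# and context of its answer depend solely on these rows.
context = "\n".join(f"{region}, {quarter}: ${revenue:,.0f}"
                    for region, quarter, revenue in rows)
prompt = (
    "Answer using ONLY the data below.\n\n"
    f"{context}\n\n"
    "Question: Which region had higher Q1 revenue?"
)
print(prompt)
```

Notice there’s no magic in the assembly step; the control you have over the result is exactly the control you have over that SELECT statement and the data behind it.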
I don’t know about you, but I don’t like not being in control of my own results. And I especially don’t like being unable to back up the reasons for my actions with anything more than “Well, the AI said it was cool.”
We’re already starting to see that premise explode in people’s faces.
So, unfortunately — believe me, I’m on your side — the most important skill is math.
Machine Learning Is the Root of AI
Machine learning, the science behind the learning part of AI, is also evolving in amazing and spectacular ways. Where AI can be described as “magic,” the folks doing the heavy math are the wizards behind the curtain.
When you think about “If-This-Then-That,” the rules behind the “then that” part are all about predicting which possible action has the highest probability of resulting in the most desirable outcome.
That’s math, baby. In words. We’re numerically quantifying how “possible” the action, how “probable” the result, and how “desirable” the outcome. Really, really fast.
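In code, that “possible, probable, desirable” framing is just expected-value arithmetic. Here’s a toy sketch; the candidate actions and their numbers are invented for illustration, and a real system would score millions of candidates per second rather than three.

```python
# Toy sketch: "If-This-Then-That" as expected-value math.
# Each candidate action carries an estimated probability of success
# and a desirability score for the outcome; the numbers are made up.

def best_action(candidates):
    """Pick the action with the highest expected value:
    P(success | action) * desirability(outcome)."""
    return max(candidates, key=lambda a: a["probability"] * a["desirability"])

candidates = [
    {"action": "swing",      "probability": 0.30, "desirability": 1.0},
    {"action": "take_pitch", "probability": 0.65, "desirability": 0.4},
    {"action": "bunt",       "probability": 0.80, "desirability": 0.2},
]

# swing: 0.30 * 1.0 = 0.30 beats take_pitch (0.26) and bunt (0.16)
print(best_action(candidates)["action"])  # "swing"
```

That one `max()` call, scaled up to enormous models and run really, really fast, is the “thinking” the hype machine is selling.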
I’m not saying you have to go get your doctorate in ML, although there are far too many job postings where that’s a “desirable” skill these days. All I’m recommending is that when your back is up against the wall skill-wise, you need to hone the skills to understand the data and math that “drive” AI, not the superuser skills to “prompt” it.