
AI Skills Worth Investing in 2026

This article was translated by AI; if you spot any errors, please let me know.

In 2025, we experienced a year of rapid advancement in LLMs (Large Language Models). Tools in the form of AI Agents have become increasingly common, and Coding Agents for software development have matured into standard equipment rather than just one of many ways to quickly modify files.

However, for software engineers, I believe the most worthwhile investment in 2026 isn’t learning to use tools like Coding Agents, but rather developing more fundamental capabilities.

Learning How to Learn Again

After a year of extensive experimentation with these AI tools, I’ve noticed that “learning” itself is the most important skill, and we can’t learn in the same way we did in the past.

So why should we go back to relearning “learning” itself rather than starting with Coding Agents or other AI tools? This is because LLMs can essentially be viewed as a form of information compression—you can extract highly relevant information simply through prompts.

Of course, since model training data mixes correct, incorrect, and all kinds of information, and since models aren’t updated after training is complete, they produce hallucinations and other erroneous output.

This makes learning in the traditional way seem quite inadequate. Take software engineers as an example: in the past, we would start by memorizing programming languages, software ecosystems, and domain-specific knowledge. We learned from information within defined boundaries (language, ecosystem, domain) and developed our own understanding and corresponding context.

In the AI era, the information needed to make code run has already been memorized by AI. Ecosystem and domain knowledge no longer need to be retained through long periods of exposure; you just need to distinguish whether the extracted information qualifies as “knowledge” or is merely an error produced by the model’s stochastic nature.

At this stage, what determines whether an engineer is good or bad at software development is no longer how many languages or solutions they can proficiently recall, but their ability to make quick judgments and determine which approach is more appropriate in any given moment. That requires a different type of training.

I deliberately use “information” to describe LLM output because “knowledge” isn’t simply information—it also includes each person’s subjective perspective on that information.

Interactive Learning

Not focusing on learning AI tools doesn’t mean avoiding learning or using them—it means shifting the focus to “digesting information.”

In the past, the basic method of learning was to go to school and learn from teachers. At its core, a teacher’s job comes down to:

  • Filtering information
  • Helping with digestion
  • Verifying results

Filtering information is basically lesson preparation. Teachers select information “appropriate for the students’ level,” leaving only digestible portions for students. This is why schools, cram schools, and private tutoring represent different levels of “customization.” In public schools, teachers must consider everyone’s level, so they can often only provide the most easily digestible content, which feels like “no help at all” to more advanced students. Private tutoring can provide fully customized options, so the amount and quality are usually just right.

If you’re a self-learner, you play both the teacher and student roles. The advantage is that the filtered content is usually what you need. The disadvantage is that without properly allocating different types of information, it’s easy to become nutritionally unbalanced (for example, only learning what you enjoy).

The quality of information filtering also affects digestibility. If you jump to very difficult material without building a sufficient foundation, you’ll struggle to absorb that information. With math, for example, if you don’t understand addition and subtraction, you’ll find multiplication and division difficult to grasp. But if you understand multiplication as “adding a number multiple times,” it becomes much easier to comprehend. Advanced concepts are usually compressions of beginner concepts—if you can’t understand the expanded information, highly compressed, abstract concepts become even harder to grasp.
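
To make the compression idea concrete, here is a toy sketch (my own illustration, not from any textbook): multiplication “decompressed” back into the repeated addition it abstracts over.

```python
def multiply(a: int, b: int) -> int:
    """Multiplication expanded into the repeated addition it compresses."""
    total = 0
    for _ in range(b):  # add a to itself b times
        total += a
    return total

# 4 x 3 is just 4 + 4 + 4
assert multiply(4, 3) == 4 + 4 + 4 == 12
```

Once the expanded form makes sense, the compressed symbol stops being opaque, and the same applies to every layer of abstraction stacked above it.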

Being able to digest information doesn’t mean you’ve absorbed it. In school, this is verified through exams that present different variations to test whether you can recognize the same concepts in different contexts. If you can, it at least confirms that you’ve successfully formed an understanding of that information and transformed it into usable knowledge.

Returning to learning in the AI era: when filtering information no longer requires humans (teachers), the question becomes how to use AI to solve the two major challenges of digestion and verification.

The simplest approach is using “learning mode.” When you enable it in ChatGPT or Gemini, the AI can help you digest and verify. The entire process, which originally relied on value provided by teachers, can now be handled by AI. Priced against a private tutor’s hourly rate, subscribing to any AI service still leaves you with plenty of savings.

So do we still need teachers? Yes, we do. As I mentioned earlier, even self-learners who can filter information themselves may not be able to maintain nutritional balance or ensure easy digestion. But for teachers, future instruction will probably gradually move toward highly customized formats.

To take it further, you’ll need to use features like Canvas. Learning mode is adequate for simple, easily digestible knowledge, but for professionals who have left school and entered the workforce, this level is completely insufficient.

Therefore, the important ability becomes filtering sufficiently broad information based on professional needs, and then, once you have an initial grasp of the “digestion method,” using canvas mechanisms to transform that information into interactive, visual simulation environments. This lets you dig deeper into details that learning mode doesn’t cover, and the process is highly customized.

For example, a colleague recently mentioned AWS certification exams. They had no hands-on experience with VPCs (Virtual Private Clouds), and just reading documentation and practice questions wasn’t enough to get a good grasp of them.

However, through Canvas, we can create simulated subnets with visualizations of traffic and routing tables. This makes it easy to quickly understand the design and operation of previously abstract concepts like public and private subnet configurations.
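
The core of the routing-table concept fits in a few lines. This is a minimal sketch of longest-prefix route resolution (hypothetical table entries and function names, not the AWS API), the same logic a Canvas simulation would animate:

```python
import ipaddress

# Toy model of VPC routing: each subnet has a route table
# mapping destination CIDRs to targets.
ROUTE_TABLES = {
    "public-subnet": {
        "10.0.0.0/16": "local",            # traffic within the VPC
        "0.0.0.0/0": "internet-gateway",   # direct internet access
    },
    "private-subnet": {
        "10.0.0.0/16": "local",
        "0.0.0.0/0": "nat-gateway",        # outbound only, via NAT
    },
}

def resolve_route(subnet: str, destination: str) -> str:
    """Pick the most specific (longest-prefix) route that matches."""
    dest = ipaddress.ip_address(destination)
    candidates = [
        ipaddress.ip_network(cidr)
        for cidr in ROUTE_TABLES[subnet]
        if dest in ipaddress.ip_network(cidr)
    ]
    best = max(candidates, key=lambda net: net.prefixlen)
    return ROUTE_TABLES[subnet][str(best)]

print(resolve_route("public-subnet", "10.0.1.5"))   # local
print(resolve_route("public-subnet", "8.8.8.8"))    # internet-gateway
print(resolve_route("private-subnet", "8.8.8.8"))   # nat-gateway
```

Seeing that a “private” subnet is nothing more than a route table whose default route points at a NAT gateway instead of an internet gateway removes most of the mystery from the documentation.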

Therefore, the learning techniques to master in the AI era aren’t simply about “digesting information”: going beyond traditional learning, they let you learn faster and with greater clarity.

Foundational Abilities

Why shouldn’t you focus on learning how to use AI tools themselves? Because they’re not foundational abilities. No matter how quickly or how much you learn, if you don’t master the knowledge that makes these tools work, the emergence of new tools will still trap you in a cycle of repeated learning. Therefore, mastering the ability to filter what to learn is also extremely important.

Because the cost of obtaining information has become very low, we should reflect on those foundational abilities we couldn’t learn in the past and figure out what caused the “indigestion.” Abilities like reading and mathematics can help us make better and faster judgments about large amounts of information.

When I studied multimedia in university, I finally understood the applications of trigonometry—it can be used to create wave-like animations or repetitive motions. But in high school, the only subject I failed was math during the year we covered trigonometry, because the teacher started the class by having everyone memorize formulas by rote, then continued to pile on more formulas. This wasn’t a good digestion method for everyone.
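
To show what finally made trigonometry click for me, here is a minimal sketch (a plain Python loop with made-up constants, standing in for a real animation framework): sampling sin() over time yields the oscillating offsets that drive a wave-like, bobbing motion.

```python
import math

AMPLITUDE = 20   # pixels of vertical travel (arbitrary)
FREQUENCY = 0.5  # oscillations per second (arbitrary)

def wave_offset(t: float) -> float:
    """Vertical offset at time t (seconds) for a bobbing animation."""
    return AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t)

# One second of motion sampled at 10 frames per second.
for frame in range(10):
    t = frame / 10
    print(f"t={t:.1f}s offset={wave_offset(t):+6.2f}px")
```

No formula sheet required: the sine function simply traces out a smooth back-and-forth, and everything else is scaling.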

I always felt I had no interest in Machine Learning and Deep Learning. Even after attending internal company training and learning how to train models, I couldn’t understand the process and principles, so I maintained an “it’s not relevant to me” attitude.

After being impacted by AI this year, I joined a new team using AI. With my manager’s help, I absorbed many articles and videos related to mathematics and statistics. Going back to look at machine learning content, I found I could understand even more. Furthermore, using my company’s Google Workspace subscription, I had Gemini help me create interactive learning tools, which made understanding this field even faster and easier.

It wasn’t until then that I realized many things I thought I “couldn’t learn” or “wasn’t interested in” were likely due to not having the right environment or tools. Now these AI tools can help break through those barriers, keeping the learning pace engaging, which clears away many of those biases.

So why are foundational abilities so important?

Take software development as an example. Suppose we need to handle a performance issue, but after looking at a bunch of charts, we can’t find the cause. For someone like me who didn’t major in computer science and lacks a statistics background, it’s naturally difficult to see the statistical significance in those charts. I’d need to spend more effort on judgment and analysis, with a high probability of being wrong.
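
As a tiny illustration with made-up latency numbers: an average-only view can hide exactly the signal that a little statistics background teaches you to look for.

```python
import statistics

# Hypothetical request latencies (ms) before and after a deploy.
before = [102, 98, 105, 99, 101, 97, 103, 100, 98, 102]
after  = [101, 99, 104, 98, 100, 96, 150, 102, 97, 160]

for label, sample in (("before", before), ("after", after)):
    mean = statistics.mean(sample)
    p99 = statistics.quantiles(sample, n=100)[98]  # ~99th percentile
    print(f"{label}: mean={mean:.1f}ms p99={p99:.1f}ms")

# The mean rises only ~10%, but the tail (p99) jumps ~50%:
# a few slow outliers that an average-only chart would bury.
```

Knowing to reach for percentiles instead of averages is exactly the kind of foundational knowledge that turns a wall of charts into a clear suspect.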

As you master more foundational knowledge, you gradually develop something like intuition (or perhaps judgment based on experience) that lets you quickly spot suspicious areas.

With this knowledge, when using Coding Agents, you can think in terms of “reviewing proposals”—determining whether the AI-generated implementation is good or bad, reasonable or unreasonable. Rather than thinking about how to make the Coding Agent work perfectly, it’s better to be an excellent guide.

Returning to the opening question: What is the most worthwhile AI skill to invest in for 2026? My answer is to relearn “learning” itself and fill in those foundational abilities you couldn’t learn well in the past. These abilities won’t become obsolete as tools update—instead, they’ll make you more effective at using any new tools.

Can you now better understand where the saying “AI will replace certain professions” comes from? Strictly speaking, some work is mental labor of the rote, “dirty work” variety, and such tasks are very easy for AI. Arranging and combining correct information isn’t difficult; what’s missing is recognition of the problem itself. We still need humans to define what “correct” means in any given moment.