Where Do Things Stand with AI

Things are evolving. Current trends feel like the Internet days of the late 1990s and early 2000s. That was 20+ years ago. So we still have quite a ways to go when we talk about "where do we stand with AI." Like all technology, however, it can be used for good or not. Without a doubt, it will be used for both. On the positive side, AI can make research, understanding, learning, and creating (music, art, manufacturing, etc.) very efficient, rewarding, and extremely fun. One simply has to dedicate time and effort to learn the methods and best practices for using it to one's advantage.

The ultimate outcome of how AI helps us really depends… on us. At Xsertive, we are convinced that in time, once the dust, legal framework, and noise settle, all things AI will make the value of human capital skyrocket. This will not happen overnight, but it will happen. Overall, AI and robotics will be of incredible value, handling tasks we once thought only humans could do. What this will do, in turn, is make the things people naturally gravitate toward even more valuable, because the human element and touch will be in demand more than ever before.

You can’t ignore it.

Let's talk about AI... again. Overall, I remain optimistic about the potential of AI. It is something that cannot be ignored.

The socioeconomic feel of AI's trajectory in early 2025 is a mix of cautious optimism, growing accessibility, and concerns about inequality and control. Whether a person feels it or not, they are being affected by it. Think about your last or next visit to the doctor's office. If the doctor has a laptop with them, your conversation may be recorded so that an AI tool can provide a summary of the visit. Gone are the days when the doctor had to summarize or have someone transcribe their notes into your records.

AI's democratization, through free or low-cost tools like Grok 3 on platforms like x.com or mobile apps, has helped small businesses, creators, and individuals. For example, AI-generated art and writing tools are enabling small-scale entrepreneurs to compete with bigger players. In education, AI tutors are bridging gaps for students in under-resourced areas. Developing nations are also leaping forward, using AI for agriculture optimization or healthcare diagnostics, bypassing traditional infrastructure barriers.

AI is increasingly embedded in daily life. It is being fully integrated into industries like logistics and healthcare. This is fueled by widespread adoption: over 50% of U.S. companies with 5,000+ employees used AI in 2024, and global AI market revenue is projected to hit $420 billion by 2027, up from $184 billion in 2023.

But there is an undercurrent of fear about job displacement. These fears are real. Studies estimate 30% of current jobs could be automated by 2030, hitting low-skill workers hardest. White-collar roles like accounting and legal research aren't immune either. This stokes anxiety about income inequality, especially as AI wealth concentrates among tech giants and elite developers.

Regulation is also a wildcard. The EU's AI Act, fully enforceable by mid-2025, sets strict rules on high-risk AI, while the U.S. lags with patchwork policies. Public sentiment, for the most part, leans toward transparency. People want to know how AI decisions are made, especially in finance and hiring. Makes sense, right? The danger, however, is that regulation might create red tape that keeps innovation from flowing.

In short, AI feels like a double-edged sword. It could be a tool for empowerment and efficiency, but also a disruptor amplifying inequality and uncertainty. The mood swings between excitement for what's possible and unease about who controls the future. But as with anything new that comes down the pike, it can be used for good or for bad. It is really up to us (the humans) to determine where it goes and to protect ourselves from any impactful harm.