The rise of AI-assisted or agentic software development has led to a visible split amongst software developers. On the one hand, companies are eager to embrace the technology for its "cost efficiency", and some developers say it has made them more productive. On the other hand, many developers lament the unhinged velocity of development (often with not-fully-understood changes being merged and shipped), the overall decrease in code quality, a changing development lifecycle (e.g. spending more time reviewing code than writing it), and the hampered ability of junior developers to work effectively on large software projects.
I was initially hesitant to post this at all, since doing so lends the topic more credence than it deserves — it already gets a disproportionate amount of "air time". However, most discussion seems to focus on whether this phenomenon is good or bad, rather than on the motivations behind the adoption or lack thereof, and understanding those motivations sheds some light on why AI development can be both a "revolution" and a "tragedy" at the same time, depending on whom you ask.
Why AI adoption has traction, even though collectively, nobody wants it
I'll preface this by saying up front that I believe AI is a deeply problematic and troubling technology for many reasons. There are the usual concerns held by the general public, such as job displacement, though this subject is more nuanced than a simple reduction or augmentation in the nature of work. The excuse frequently thrown around is that "work will change" and people will find "more meaningful work". Elevator operators, for example, are frequently cited as a job that disappeared due to advances in technology. While I'm sure there were many elevator operators who enjoyed their work, I suspect this was due more to qualities of the work, such as the socialization with building occupants it afforded, than to intrinsic details of elevator operation itself. It is possible many elevator operators found new work with similar benefits, though likely some did not.
In contrast, many software developers genuinely have a passion for their work and like what they do. These sorts of people tend to value the process — the journey, if you will, not the destination. (More on this later.) Why would these people want to automate away something they want to do? In the same way that many classic car enthusiasts enjoy driving cars with manual transmissions, and shudder at the thought of an automatic (or worse, a self-driving car), it is not hard to see why many developers view AI as a scourge infringing on their domain and reducing its enjoyability, rather than through the lens of a raw technical advance.
I like the analogy I heard at a conference I attended recently: people do art for the process, the journey, not the destination. Yes, finished paintings are interesting, but it is the craft itself that makes an artist. For some developers, programming is a form of art. I would count myself among these, for much of the work that I do, where the code is, in many respects, a manifestation of me — of my thought processes, my values, etc. In such cases, having AI work on parts or all of it erodes the art process, and by extension, the art itself. The end result, after all, may look pretty, but you can't look back at it and take comfort in knowing it was your own work and ingenuity.
Craft is one thing. However, we live in a results-oriented world today, a reflection of a capitalist economy which demands growth and results. This has been parodied for generations now (anyone remember Oliver Wendell Douglas from "Green Acres" back in the 1960s, trying to get away from the "rat race" and enjoy the "craft" of farming and the satisfaction of one's own work?). While there will always be nostalgia for "the before times", that doesn't diminish the philosophical argument that as humans, we should find value in what we do, and many developers rightly feel that AI is reducing their value.
Then there is the environmental impact, something which is not discussed nearly enough, to everyone's detriment. AI is simply atrocious for the environment. From the water consumption to the energy usage to the materials required (the current hard drive shortage is no coincidence), it is a disaster on all fronts. These impacts are usually swept under the rug by the companies that benefit from them, often under the guise of "economic productivity" (or even more callously, admitting that their market advantage of having gobbled up the entire supply will coerce more companies to rent computing resources from them). But economic productivity is an arbitrary measure that doesn't necessarily translate to a higher quality of life (indeed, the U.S., the most advanced country by many economic measures, ranks last or near last in many quality of life rankings amongst developed nations).
What exactly does the economy measure? Derrick Jensen put it best: the economy "turns living communities into dead commodities". Economic productivity, therefore, is a measure of the velocity at which living communities are turned into dead commodities. From mining, to manufacturing, to energy, to production, to consumption, every pillar of the economy requires destroying and contaminating natural resources, biomes, and decimating living communities — often to the point of extinction. This is not just something that is "out there". Humans, too, are suffering — lifespans are now declining, and we are getting sicker, with more chronic diseases and illnesses. Cancer, once virtually unheard of, is now commonplace. Yes, "it's the economy, stupid!" But really, it's the environment, stupid!
Unsurprisingly, most people view AI more negatively than positively. What is the real reason for AI adoption? "Efficiency". It's a loaded term that carries a lot of meaning, but it really explains why, even though most individuals oppose AI, some institutions are pushing it forward with everything they've got. For a long time, efficiency translated to real material improvements for everyday people. These days, however, "efficiency", particularly in the context of corporate operations, generally means consumers will end up with worse experiences. Self-checkouts? Efficient for the company, but a worse experience for customers. Automated IVRs and auto-attendants? Successful at increasing deflection metrics and boosting "efficiency", much to the frustration of every caller who encounters them, now that yet another barrier has been imposed to talking to a human. Companies have already been using automation to replace real customer service interactions for some time, and replacing customer service with "robots that lie" is only the latest arc in that trend.
At the end of the day, it's all for one reason — to save money and boost the bottom line. Companies are not in business to "improve the customer experience" (even the companies that claim to be). They are all in business to make money, and will pursue whatever gets them there. This is why customer experience only seems to get worse, despite years of promises of "personalized" and "efficient" support. Labor is expensive, and relatively speaking, only continues to get more expensive. AI is worse than humans, but often "good enough" to satisfy the companies using it, giving AI a comparative advantage.
But the real world exists for humans, for individuals and communities, not corporations (well, really, it exists for all species, but just focusing on the human bit here). At the end of the day, corporate profits don't matter. They simply have no tangible meaning in the real world. The stock market is just an illusion of funny money that we choose to legitimize as a society. What really matters are the collective lives of the human and non-human animals that inhabit it. And for actual everyday people, AI is often doing much more harm than good.
Most companies forcing AI onto consumers are not AI companies providing an "AI service" that the consumer requested. Rather, they are using it to make their service more cost-effective (and more profitable) for them. Business plans are pursued not based on the value to consumers or society, but rather the profits to the company (of which cost is a direct component). Therefore, it is easy to see how, in a capitalist economy where companies must remain cost-competitive simply to stay in business, AI transforms already-mediocre experiences into even worse ones in an endless race to the bottom, all to cut costs and boost profits. That is the driver behind corporate adoption of AI. In a capitalist economy, that is almost a guaranteed result. If company B doesn't adopt, company A will, and undercut B's costs. Most consumers today don't care if B has a better product (perhaps because they can't afford anything but the cheapest product). This is one reason we have so much cheap crap from China that breaks after five or ten years. Quality is largely nonexistent anymore.
One exception: some businesses don't adopt AI precisely because not adopting is their business; they provide real, human-powered experiences and services. In the long run, we will almost certainly end up paying more for "normality", for the AI-free (and, more generally, automation-free) customer experiences that we all used to take for granted. A simple, if quaint, example: operators used to connect your calls for free; operator assistance now costs several dollars per call, as does Directory Assistance (which also used to be free).
It's the Nash equilibrium of technology. One can reasonably say: if I adopted this technology (AI, or anything else), I'd use it only in ways that provide value to me, and life would be better. But that's never what happens — everyone adopts it for their selfish interests, including corporations, and everyone ends up worse off in aggregate as a result. Intuitively, you already know this. You spend more time dealing with "customer service" than ever before, and yet, somehow, this is "efficient". (Well, it is efficient, just not for you — only for those who have succeeded in leveraging the technology more than everyone else.)
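The equilibrium described above can be sketched as a simple payoff matrix, in the style of the classic prisoner's dilemma. The numbers below are purely illustrative assumptions, not measurements; the point is the structure: adopting is each firm's best response no matter what the other does, yet mutual adoption leaves everyone worse off than mutual restraint.

```python
# Illustrative prisoner's-dilemma payoffs for two firms (hypothetical numbers).
# Each entry maps (A's strategy, B's strategy) -> (payoff to A, payoff to B).
payoffs = {
    ("abstain", "abstain"): (3, 3),  # both keep quality; both do fine
    ("abstain", "adopt"):   (1, 4),  # A is undercut on cost by B
    ("adopt",   "abstain"): (4, 1),  # A undercuts B
    ("adopt",   "adopt"):   (2, 2),  # race to the bottom: both worse off
}

def best_response(opponent_move):
    """Firm A's best reply to a fixed move by B, comparing A's payoffs."""
    return max(["abstain", "adopt"],
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

# Adopting dominates regardless of what the other firm does...
assert best_response("abstain") == "adopt"  # 4 > 3
assert best_response("adopt") == "adopt"    # 2 > 1

# ...so (adopt, adopt) is the Nash equilibrium, even though both firms
# would prefer the (abstain, abstain) outcome: each gets 2 instead of 3.
print(payoffs[("adopt", "adopt")], "vs", payoffs[("abstain", "abstain")])
```

With these payoffs, neither firm can unilaterally improve its position by abstaining, which is exactly why "nobody wants it, yet everyone adopts it" is a stable outcome.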
Hobbyists
The above seems to be a reasonable explanation for corporate motivations for technology adoption. But wait! There are hobbyists using AI too — and surely, their motivations aren't "profit-driven", are they?
Certainly, there is a slightly different dynamic between AI usage in the corporate world and amongst individual developers in a "hobbyist" capacity. First, we need to define what we mean by "hobbyist". A good conception of a hobbyist is someone interested in doing something even without being paid for it. Think of it as a "job" that you don't get paid for but want to do anyways. Sometimes, hobbyists pursue things that are "unimportant". I don't mean this in a derogatory manner, but rather to describe people who like to tinker with stuff, where nothing is at stake if nothing ends up working, for example. I think "hobbyist" is often unfairly misconstrued to mean things that don't matter, and while that is sometimes true, I want to emphatically state that is not necessarily the case.
Hobbyists generally care deeply about their work, often in ways that employees don't. Employees working on a software project at a company may produce the code, but fairly rarely do they also directly consume it. Therefore, if their boss asks them to use AI to become more "efficient", they may give in and use it, even if they wouldn't have in a hobbyist capacity, simply because they personally don't directly care about the quality of the resulting software (certainly, they may care inasmuch as they don't want to be fired, but here, money is the motivation, not the actual quality of the product). The path of least resistance is often to use AI to appear to be a more efficient employee, even if product quality suffers, and accordingly get rewarded by the company's focus on "results". Hobbyist developers, in contrast, are often their own customers, directly creating things that solve their own problems. As such, they are likely to be more cautious about using AI to do something; if it doesn't work properly, it's their own experience on the line, after all.
Understandably, process is huge for hobbyists. Some hobbyists may do something simply for the sake of doing it. Think of a developer building a website in his or her spare time, simply because "it's cool" (this is an example of something "unimportant"). However, many hobbyists do work on things that matter. Open source developers are a classic example of this, in the context of software development. Most open source maintainers are not paid for what they do, but they do what they do because they enjoy it. At the same time, there are end goals to be achieved, e.g. the software needs to work as described to achieve some kind of goal, any discovered bugs need to be fixed, etc.
Here we see the two sometimes-competing objectives of a hobbyist: process and results, journey and destination. Some hobbyists may be concerned with only one or the other, while some focus on both (though I would argue if neither is at stake, then it's no longer a hobby, but time wasting). I find myself somewhat in the middle. I like the process, and probably wouldn't be doing the things I do if I didn't, but at the same time, everything that I do has some kind of end goal. I generally don't do things simply because they are "cool" (though they may well be), but because they accomplish something that (typically) makes my life better or easier in some way.
The tension here matters. If you really value process, and see your craft as an art, then you are more prone to wanting to do stuff yourself. After all, you deeply value the process of doing something, and want to make sure it's done properly. You wouldn't want another human to do it, let alone AI. You want it to be done exactly the way you would want it done and know that you did everything you could to make sure it was done right.
Companies, on the other hand, are very much at the results/destination-oriented extreme. Companies do not care about process. They may have processes to the extent that those facilitate reaching the destination optimally, but "process" in the sense hobbyists care about does not exist. Companies do not care if their employees find their work maximally fulfilling. Their HR department may say they do, and they may not begrudge this sort of process, at least inasmuch as it refrains from interfering with the destination. Corporations have a fiduciary duty to maximize profits, i.e. results; hobbyists are not similarly constrained, and may value process as much as they desire.
Not all hobbyists are equally interested in process. Some may "just want something to work", in which case they are likely to behave much more like a company would than a process-valuing hobbyist, perhaps even using AI to "do something" without caring so much about how it gets done, or about learning and internalizing how to do something and spending the time doing it. And it isn't an either/or — there is a continuum between the two, and the same person may end up at different points along it, depending on the task. But there is a stark difference in mentality that matters. Large companies are eager to increase sheer velocity as much as they can using AI, while open source projects find themselves overburdened with low-quality AI "slop" issues and PRs, with many actively constraining or rejecting the influence of AI on their projects. Maintainers are not being promoted for adopting AI in their organization or for delivering dubious-quality results; they are deeply invested in their software and often find AI to threaten both process and results (of the quality desired).
Future Direction
Practically speaking, the cat is out of the bag, and it seems unlikely the technology will simply disappear within the constraints of the self-imposed economic structures that exist today. Nonetheless, the future seems bleak for AI, any way this plays out. Further adoption will likely continue eroding the quality of life of real people, all in the name of "efficiency", and even a normalizing market correction would have tangible effects on real people's lives. There are many reasons to oppose AI — environmental, social, philosophical, and quality-related — and hobbyists, operating at human scale with care for their craft and no allegiance to "corporate efficiency", recognize this.