Sunday, August 10, 2025

The Unacceptable Downgrade: Why GPT-5 Forced Me to Cancel My OpenAI Subscription

xAI's Grok-3 might not be perfect, but it happily generated this image for me.

For quite some time now, OpenAI's GPT-4o mini model has been an indispensable tool in my daily workflow. Its consistent reliability and impressive efficiency made it the go-to resource for a multitude of personal and professional projects, ranging from meticulous content review to rapid information retrieval. For months, this lightweight yet powerful iteration of their flagship model served its purpose admirably, providing swift and accurate responses that significantly streamlined my tasks.

However, after several frustrating days of being forcibly transitioned to GPT-5 without any option to revert, my reliance on OpenAI abruptly ended; I cancelled my subscription. This decision was not made lightly, given the prior utility of the service, but it became an unavoidable consequence of a fundamental misstep in product deployment.

Let me be clear: I am confident that for the average user, who primarily seeks a conversational AI chatbot for general inquiries, GPT-5 might indeed offer a perfectly serviceable experience. Nevertheless, for power users like myself—individuals who integrate AI deeply into complex workflows encompassing coding, extensive writing, data analysis, and diverse problem-solving scenarios—GPT-5 proves to be profoundly unsuitable. The primary grievances stem not only from its forced imposition without any graceful migration path or fallback option, but also from its inherent performance characteristics. Its processing time, particularly when it frequently transitions into its "thinking" mode, represents a significant and unacceptable bottleneck.

And while OpenAI has introduced an account setting to "enable legacy models," which purportedly re-enables GPT-4o, this offers little solace, as it specifically excludes the faster 4o-mini model that formed the cornerstone of my optimized workflows. Consequently, the considerable time and effort I invested in refining prompts for 4o-mini to yield precise, rapid results have been rendered entirely useless. GPT-5, by its very design, is a more deliberate, reasoning-focused model, engineered for deeper analysis and comprehensive output. While this architectural choice may serve certain advanced computational tasks, it is entirely antithetical to my frequent need for quick, direct analysis and immediate output based on specific inputs. The abrupt removal of 4o-mini is not only counter to sound application design—stripping users of a critical feature with no transition window—but it has also proven profoundly disruptive to my established professional tasks. This unforeseen change prompted an immediate re-evaluation of my AI vendor strategy, leading me to discover that the same prompts previously tailored for 4o-mini could be adapted with only minor adjustments for lightweight models offered by other providers, facilitating an immediate and seamless switch.
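To give a sense of how small those adjustments can be, here is a minimal sketch of the kind of switch I mean, assuming a provider that exposes an OpenAI-compatible chat completions endpoint. The base URL, model name, and prompt below are illustrative placeholders, not my actual workflow.

```python
from openai import OpenAI

# Sketch only: point the same OpenAI SDK at another provider's
# OpenAI-compatible endpoint. The base URL, API key, and model
# name below are placeholders for illustration.
client = OpenAI(
    base_url="https://api.x.ai/v1",  # example: an OpenAI-compatible API
    api_key="YOUR_API_KEY",
)

# The prompt itself stays the same as what was used with 4o-mini;
# only the endpoint, credentials, and model name change.
response = client.chat.completions.create(
    model="grok-3-mini",  # placeholder lightweight model name
    messages=[
        {"role": "system", "content": "Review the following text and briefly list any factual errors."},
        {"role": "user", "content": "Text to review goes here."},
    ],
)

print(response.choices[0].message.content)
```

Under that assumption, the only things that have to change are the endpoint, the credentials, and the model name; the prompts themselves carry over nearly verbatim.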

Beyond the performance degradation, GPT-5 also exhibits a marked decline in its ability to adapt to nuanced stylistic instructions, particularly concerning writing tasks. Even when provided with highly specific writing style guidelines, its output often feels less personal and more overtly robotic, departing significantly from the organic quality achievable with 4o-mini. One might speculate if this more generic, less adaptable tone is an intentional design choice, perhaps aimed at mitigating concerns around academic integrity or sophisticated content generation. Regardless of the underlying motive, as an individual who relies on AI for refined textual output, I am profoundly disappointed by this stylistic regression.

This sentiment of frustration and disappointment is far from isolated. After enduring sufficient frustration with GPT-5, despite meticulously crafting my prompts to coax optimal performance, I sought validation and solutions within the broader user community—specifically, the often-unfiltered forums of Reddit. The collective criticism there is remarkably sharp and consistent. For instance, Reddit user "larrybudmel" succinctly captured the prevailing sentiment, commenting, "The tone of mine is abrupt and sharp. Like it’s an overworked secretary. a disastrous first impression." Another user, "syntaxjosie," offered a particularly incisive observation, stating, "The only reason I can figure that they would deprecate the other models the day of release is because they know 5 is inferior and don't want people comparing them side by side." Furthermore, "Potato3445" encapsulated the widespread disillusionment: "Can’t believe we waited 2 years and took a step backwards. The creative writing is worse, it’s adopted a corporate personality, and it rarely bothers to follow instructions or incorporate your preferences without you having to ask. I hope the coders are happy atleast."

The forced adoption of GPT-5 by OpenAI serves as a critical cautionary tale for all technology companies. It underscores a fundamental principle: never presume that an upgrade, regardless of your internal conviction that it is "better," will be universally welcomed or even functional for your entire user base. Users, particularly those deeply embedded within an ecosystem, often possess distinct needs and established workflows that can be severely disrupted by unilateral, non-optional changes. The decision to compel users onto a new, less suitable model without offering alternatives or a clear migration path is not merely inconvenient; it is a profound misjudgment of user expectations and loyalty, inevitably leading to churn.

Ken is a cybersecurity and IT professional with over 15 years of experience. All opinions are his own and do not reflect those of his employer or clients.

Wednesday, February 5, 2025

Age Discrimination is Wrong, and So is Attacking Federal Employees Because of Their Age

You've probably seen the posts by now, shared on social media, criticizing six young software engineers for being young and working for the Department of Government Efficiency (DOGE). And quite honestly, these posts make me sick to my stomach.

As someone with experience in the federal contracting world, having worked for various Federal agencies under four different administrations (Bush, Obama, Trump, and Biden), I find it deeply troubling to see young professionals working for DOGE being thrust into the public spotlight and targeted simply because of their age. These are engineers, not even federal appointees but regular everyday employees, trying to do their jobs to the best of their ability, yet they are being unfairly scrutinized and harassed. This is not okay.

Some of the best software engineers I’ve worked with have been much younger than myself. Their ability to write efficient code and analyze complex data often surpassed that of more experienced engineers with decades in the field. Age is not a measure of competence, and dismissing someone’s qualifications based solely on how young they are is a disservice to the entire profession.

Do you have any idea how hard it is to find talented software engineers? And now we're going to start attacking them, fresh out of college, for collecting a paycheck? I'm so glad I left Federal contracting; otherwise, I'd be afraid I could be next simply because somebody doesn't like whatever agency I happened to work for.

Publicly sharing names, photos, ages, and employers of individuals—especially when they have done nothing wrong—is a deliberate act of harassment and intimidation. These young professionals did not seek out public attention; they simply accepted jobs within a federal unit that happens to be in the political spotlight. That should not make them targets. We have no idea of their political affiliations, their voting records, or whether they even like Elon Musk or Donald Trump. I worked under four different Presidential administrations. Do you know how often that affected my willingness to do my job to the best of my ability? Never.

Even if there were any questions about the legitimacy of the agency they were hired by (which was in fact set up by the Obama administration as the United States Digital Service, or USDS), should their careers suffer because they took a job they believed to be legitimate? That would make them victims—not individuals who deserve public attacks.

When did it become acceptable to single out people for harassment just because of their employment? This kind of behavior is not just unfair—it borders on age discrimination. The post in question doesn’t even attempt to evaluate their qualifications. It simply highlights their names, ages, and photos, with the clear intent to stir outrage rather than foster any meaningful discussion.

It’s time to recognize that these are real people with careers and futures ahead of them. They deserve to be able to work without being subjected to this kind of public targeting. If we truly value fairness and professionalism, we must call out this kind of behavior for what it is—unacceptable.

Shame on every media outlet that has published a story with the intent of harassing these young men, and shame on every single person who has shared posts criticizing them simply for being "young." There is a line between reporting and harassment, a line between proper disclosure about Federal employees and deliberate targeting intended to harass. Attacking six young men solely for being young and working for a government unit you disagree with crosses that line.

I don't like paying taxes, but I certainly don't go around posting the name, age, and photos of IRS agents. That's harassment, and not okay.

Ken Buckler was a Federal cybersecurity contractor for over 15 years. All opinions are his own, and do not reflect those of his employer or clients.
