This morning, I came across a reddit post titled "Completely losing interest in the career due to AI and AI-pilled people". The author describes how, in the span of just two months, their corporate job went from "I'll be here for life" to "Time to switch careers?". And this post isn't alone; there is a deep and dark pattern to it.

When CTOs or project managers suggest that programmers on their team use AI assistance from Copilot, ChatGPT, or other LLMs to improve productivity, that's totally understandable. But once it's no longer voluntary and is instead enforced as policy, you start entering sinister territory. Worse, that usage is actually being monitored, and performance appraisals have started depending on AI usage instead of (or at least in addition to) traditional metrics like the number of priority bugs raised, code reviews, Function Point Analysis, and so on.

If they're really so confident in the LLM's effectiveness, why not just keep it voluntary? Why force it on people? The results will be there in the shipped product for all to see. By forcing LLM usage on programmers for every tiny implementation detail, are they trying to make us so dependent on LLMs that programmers will be reduced to mere approvers of LLM-generated code in the new scheme of things; mere rubber stamps, if you will, who just label the commits and annotate the tags as a formality?

Needless to say, they'd still want you to take the responsibility. If bugs or tickets get raised against the shipped code, it's you who gets fired, not Copilot or ChatGPT – though the larger narrative in the next day's news headlines would still be, "AI is eating jobs"!

If the essence of programming shifts from creating to merely approving, we risk losing not just a profession, but a craft. What do you think is going on here? Let me know your thoughts in the comments.

tagged under: programming

