Is anyone else experiencing this thing where your fellow senior engineers seem to be lobotomised by AI?
I've had 4 different senior engineers in the last week come up with absolutely insane changes or code that they were instructed to make by AI. Things that, if you used your brain for a few minutes, you'd realise just don't work.
They also can rarely explain why they made these changes or what the code actually does.
I feel like I'm absolutely going insane, and it also makes me unable to trust anyone's answers or analyses, because I /know/ there is a high chance they just asked AI and passed it off as their own.
I think the effect AI has had on our industry's knowledge is really significant, and it's honestly very scary.
Kitcat
in reply to Purple • • •One of the biggest things the AI stuff has taught me is that so many programmers just... don't actually want to program.
I never would have expected that, but it's like the whole game industry here (for example) just doesn't actually want to make games. They want to HAVE MADE games, but absolutely despise the process. And I just don't get it. I LOVE the process of making things!
Unus Nemo and Radio Free Trumpistan like this.
Purple
in reply to Kitcat • • •@kitcat
To be honest the making process sometimes frustrates me too, but I don't see how you could be proud of what you've made if it's just hastily thrown together by a chatbot!
I'm proud of the things I've made, because I've put my own brain to use to create something I had in mind. I thought it worked this way for others too, but like you say, I think we might be in the minority here
Unus Nemo and Radio Free Trumpistan like this.
mkj
in reply to Purple • • •@kitcat To say nothing of when you've faced a problem, figured out a fix, and can *actually explain why that fix is correct*, and *apply the same reasoning in other situations*. Not just the same fix, but the same *reasoning*.
Maybe I'm old-fashioned like that, but actually having figured something out brings me joy. Even if it is stuff that lots of other people know. Learning how the pieces fit together to bring the result I get out of the thing I made.
fluffy π
in reply to Purple • • •I've been feeling like the whole industry has been going crazy for years now, and the AI step is just the latest of many bananapants steps towards oblivion. But it's certainly a big one.
Like, for ages the software industry has been high on its own farts of self-importance, trying to justify itself while being the source of its own problems and a sinkhole of outsized valuations. And all for what?
LewdLewis
in reply to Purple • • •Pseudo Nym
in reply to LewdLewis • • •@LewdLewis
"As per my previous email...."
Folks don't read anything longer than a sentence.
My default mode of over-communicating with a wall of text has been a challenge to overcome my entire career.
Lack of receptivity to a proper answer predates the current LLM problem, but is exacerbated by it.
"It depends" is almost always the right answer to a subtle technical question, but the asker wants a simple yes/no.
Radio Free Trumpistan likes this.
Romain Pouclet
in reply to Purple • • •Pseudo Nym
in reply to Romain Pouclet • • •The illusion of life and motion without thought? Yeah, checks out. No lies detected.
Schnellkatze (πS10E)
in reply to Purple • • •pancake β
in reply to Schnellkatze (πS10E) • • •@schrottkatze@catgirl.cloud @Purple@woof.tech I think that's the least of your concerns, because the biggest hurdle is to actually get a response
So, references, overselling yourself, luck, experience, and more references
Schnellkatze (πS10E)
in reply to pancake β • • •wym references?
@Purple
pancake β
in reply to Schnellkatze (πS10E) • • •Schnellkatze (πS10E)
in reply to pancake β • • •lmao my only "previous employer" is a fucking rewe (discounter) i briefly worked at because my school forced me to do an unpaid internship and everywhere else ghosted me
@Purple
morgan
in reply to Purple • • •Swift
in reply to Purple • • •Purple
in reply to Swift • • •@swift I sometimes end up talking to them about it, and they seem to be aware AI is sometimes wrong... And yet they keep using it for literally everything.
It's almost like an addict who knows the addiction might be hurting them, but can't stop using.
I can see the damage it's doing to the long-term maintainability of our environment and platform too
Radio Free Trumpistan reshared this.
Octavia Con Amore Succubard's Library
in reply to Purple • • •Sensitive content
@swift you've actually nailed it
it looks like an addiction because prompting is like a slot machine: each prompt is another pull, hoping that maybe, just maybe, this time they'll get a good result
<insert rant about needing human psych as a mandatory class during education here>
Radio Free Trumpistan likes this.
Unus Nemo
in reply to Purple • •@Purple
I do not have any fellow senior engineers I work with, so I cannot speak directly to the question you asked. I do, however, have a lot of experience with AI. This is my take on it, for what it's worth.
I have been working on AI since the late '80s, and there have been some drastic changes. Modern artificial neural networks are a real game changer; it is amazing what we can achieve with them. With that said, there are a lot of concerns to be had with the monetization of AI by large corporations. I personally only use local AI, with models I train specifically for a task on high-quality public-domain data, whereas corporate AI tends to train on anything it can scrape from the internet with little to no vetting. That inferior training frequently produces poor results.
People tend to forget that AI is more than just chatbots on the internet. We use visual AI to search landmarks to help find lost people in the wilderness. We use visual AI to provide adaptive cruise control, parallel parking assistance, etc. AI is a lot more than chatbots, though people tend to equate AI with corporate AI, when that is typically the worst-quality AI you could choose to use. Now, I get it: you cannot train a base model yourself without a significant investment in a high-end GPU. You can, though, train a smaller, minimally trained model with an RTX 3060 Ti that costs only around $200.00 - $300.00 USD. It will be painfully slow, but doable.
I constantly see negative comments on AI from people who really do not understand what AI actually is, as demonstrated by their comments. Corporations setting up server farms that would require their own dedicated nuclear power plant? No, I am not on board with that. But local AI tailored to very specific needs can indeed be a great help. We should always keep in mind that we are dealing with a neural network: just as humans are not always correct, AIs will not always be correct either. You have to vet the information you get from an AI.
AI is like fire. Fire has burned down communities, and it has made food more nutritious (via cooking) and in some cases edible at all (many vegetables are toxic, even to a lethal level, when consumed raw). It has heated habitations to make them livable in colder months. To judge something only by its worst possible metric is not rational. AI has been used in the medical field and many others for a very long time, saving the lives of many people. I wish that when people get upset at corporate AI they would direct their frustration where it belongs: at the corporations doing everything they can to monetize AI, including offering ridiculous classes on writing prompts.
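As an aside, the kind of small local training run described above can be sketched in miniature. This is a toy plain-Python gradient-descent loop, not a real model or framework; the dataset, learning rate, and step count are made up purely for illustration:

```python
# Toy sketch of a local training loop: fit a tiny linear model y = w*x + b
# with gradient descent. Real local fine-tuning would use a framework like
# PyTorch, but the loop has the same shape. All numbers are illustrative.
import random

random.seed(0)

# Synthetic dataset: y = 3x - 1 with a little noise.
data = [(k / 50.0, 3 * (k / 50.0) - 1 + random.gauss(0, 0.05))
        for k in range(-50, 51)]

w, b = 0.0, 0.0   # model parameters, starting from zero
lr = 0.1          # learning rate

def mse(w, b):
    """Mean squared error of the current parameters over the dataset."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

initial_loss = mse(w, b)
for _ in range(500):  # gradient-descent steps
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b
final_loss = mse(w, b)

print(f"w = {w:.2f}, b = {b:.2f}, loss {initial_loss:.3f} -> {final_loss:.3f}")
```

The loop recovers roughly w = 3, b = -1; scaled up to millions of parameters and batched data on a GPU, this is the same basic shape as the training runs mentioned above.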
Radio Free Trumpistan reshared this.
Radio Free Trumpistan
in reply to Unus Nemo • • •Unus Nemo likes this.
Purple
in reply to Unus Nemo • • •@unusnemo
I'm not entirely sure if your reply is in good faith, but I'll assume the best. When I mention "AI" in my post I'm very specifically talking about LLMs (like ChatGPT).
An LLM is inherently a chatbot: it works by outputting text and is trained on text.
There are various tasks computers can do much better than humans can, and I'm not against those at all. What I am against is a prediction machine coming up with seemingly plausible information, or, as in my post, code that is often wrong, combined with people not verifying its output.
The goal inherent to the technology behind it is not to provide correct information. Yes, we would like it to be correct, and yes, it sometimes is, but there still is no actual thought process behind it other than pattern recognition.
Additionally, switching to local models with similar performance requirements (so as not to degrade the already questionable accuracy) does not resolve the power draw at all. It just moves it away from centralised datacenters to individual GPUs at people's residences.
Surely you'd agree on the harm LLMs are doing to the world right now, both intellectually and ecologically. (Post-AI bubble, likely also economically.)
This is dangerous technology, weaponised for short-term profit by large megacorporations.
Unus Nemo likes this.
Unus Nemo
in reply to Purple • •@Purple
My reply is in good faith. I would agree with most of what you said, except regarding the power draw of local AI and regarding accuracy.
The power draw from local AI is no more than playing a game on your computer, and in most cases far less. It is only when you set up thousands to hundreds of thousands of GPUs in a data center that we see significant power draw and a noticeable drain on the grid. Someone playing a game for hours on end uses considerably more power than my training or use of an LLM, and we should not forget that most people would not be training at all, only using a local AI for inference, so their draw would be insignificant and far less than your average gamer's.
As for accuracy: the accuracy of any LLM is directly related to the quality of the data it was trained on. Consider the medical field, which trains its AI only on high-quality data produced by doctors and medical professionals. Its reliability, measured by far fewer false positives, is far better than, say, ChatGPT or the like, because those corporate LLMs are trained on anything and everything scraped with no vetting. It is no different from teaching a human with bad information. The sheer volume of information they train with is what causes the issue: they count on the good information overriding the bad, which we can see is not working.
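The power comparison above can be put as back-of-envelope arithmetic. Every wattage and duration below is an assumed illustrative figure, not a measurement:

```python
# Back-of-envelope comparison of energy use: a long gaming session vs. a
# short local-LLM inference session. All figures are illustrative assumptions.
GAMING_GPU_WATTS = 300   # assumed: high-end GPU under full game load
LOCAL_LLM_WATTS = 180    # assumed: modest GPU running local inference

GAMING_HOURS = 3.0       # assumed: one evening of gaming
INFERENCE_HOURS = 0.5    # assumed: occasional local-LLM queries

gaming_kwh = GAMING_GPU_WATTS * GAMING_HOURS / 1000
inference_kwh = LOCAL_LLM_WATTS * INFERENCE_HOURS / 1000

print(f"gaming: {gaming_kwh:.2f} kWh, local LLM: {inference_kwh:.2f} kWh")
# Under these assumptions the gaming session uses several times more energy.
```

The numbers can be swapped for measured wattages from any particular GPU; the point is only that occasional local inference on one consumer GPU is in the same class as, or below, ordinary gaming, unlike a datacenter running the same workload at scale.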
Thanks for your reply, and I hope you have a great day! I am as much against professionals using corporate monetized AI as a complete solution, rather than as a tool whose output they then vet, as you are. We are on the same page in that respect. It is also sad when the general public thinks they can count on the information they get from these corporate LLMs without applying reason. If human intelligence is fallible, and it is, then it stands to reason that we should not take it for granted that artificial intelligence is infallible. In fact, we should vet all information we get that is going to be used in more than an entertainment fashion.
Not really significant, but it should be noted that ChatGPT and other corporate LLMs are often trained on both text and visual data, and can do image generation as well as give text responses. They are not just text in, text out, which is why they can identify a plant, insect, or random part you have found from a photo, or draw you a picture. I have actually found Gemini to be interestingly accurate at identifying edible plants and insects, as well as some electronic parts; I am not sure how accurate ChatGPT is in that regard, as I have never used any corporate AI other than Gemini for that purpose.
Unus Nemo
in reply to Purple • •@Purple
I would also like to mention that I am a software architect and developer and love to program. I have noticed that a lot of people in the industry trained in development just for a paycheck and have no passion for the process. That is sad, and I can see how they would turn to AI instead of getting the enjoyment that I get from solving the problem myself. I would bet that most developers who became developers because they truly enjoy research and solving problems use AI as no more than a tool and do not rely on it for writing code, or at least I would hope not.
Radio Free Trumpistan likes this.
Maimu:ponetonguewink:
in reply to Purple • • •I think people are burned out and are just find AI easier to just continue earning money and do less.
source: me, coding makes me wanna kms
Maimu:ponetonguewink:
in reply to Maimu:ponetonguewink: • • •