Chat GPT giving very interesting answers to questions

So his answers are generally scary. It feels like he took all his information from films where robots take over the world and is answering us with that. In short, it needs to be fully controlled.
How do you know it's a "his"? The answer sounds pretty "her" to me....



[Relax people, it's a joke]
 
A large amount of good medical data is behind paywalls. I wonder if that is why ChatGPT is so-so for medical questions. It has been known to fabricate citations. A lot of stuff on the internet is crap; garbage in, garbage out.
It is built to fabricate citations.

It works by averaging out the stuff in its data set, based on the keywords in your query. So if I use the keyword "water," it will look for text related to water, then find the average words nearby. So if I'm looking for studies on water, and someone named "Smith" has written a lot, it will create a citation with "Smith" in it. Then it will go "what is the word most commonly paired with 'Smith'?" So maybe you get "John." Then it will confidently state that "John Smith" has written an article on water.

There's no hidden pattern it's uncovering, and because it cannot think, it isn't looking up anything real to reference to you.

It's completely useless.

There's a reason why legal firms are absolutely adamant against using AI in things. People get censured and disbarred for that stuff. One guy went into court and was like "these are all real" and the judge went "no, they're not. What are you trying to pull?" and the guy was like "ChatGPT said they were real, and I just asked it if it lied and it said it didn't, so it's telling the truth." It didn't end well for him.

The machine cannot think or consider or reason the way a human can. It can simply pull out words from a corpus based on the input tokens you give it. It's a machine of averages. Which is also why it can't create anything new.

Wanna do a test? Ask it to tell you a superhero story. Watch how nearly every single story includes the phrase "with great power comes responsibility." Why? Because that's one of the most common phrases associated with superhero stories.

You can get equally accurate information from goat entrails or tarot cards. At my company, I do a lot of the hiring. My take on it is this: if you rely on chatgpt for information, you aren't a person worth respecting enough to hire. I wouldn't hire someone who asks a magic 8-ball for advice and I wouldn't hire someone who thinks ChatGPT is capable of thought or reliable information.
 
Those legal woes can be solved. The lawyers (and judges) are just protecting a problem they've carved out for themselves. The problem was not the ChatGPT answers; it was the lawyer lying about them. Go figure.

There will likely come a time when all "data" will be made accessible to computer learning. Paywalls won't matter. It will start with the "for science" crowd, and it will all be anonymous (no personal names on the files). Then it will become a "science emergency" that the data be made available, and laws will be passed as part of a "public safety" campaign. Will it happen in our lifetime? Maybe. Maybe not.

It's tough to top the efficiency and thoroughness of a computer search; it is a very powerful tool.
 
Alright, I'll give you a fair chance.

Please define the problem that these lawyers and judges created for themselves.
 
"Fair chance" 🤣🤣🤣 It's simple. Anything that will impact their livelihood and diminish their need. I mean, it's lawyers that created all the "legal" morass they in turn have to deal with. They are like self-licking ice cream cones...
 
"Fair chance" 🤣🤣🤣 It's simple. Anything that will impact their livelihood and diminish their need. I mean, it's lawyers that created all the "legal" morass they in turn have to deal with. They are like self-licking ice cream cones...
Sure. You said something pretty bold there--that it wasn't the fault of the software, but the many people involved in the legal system--and I wanted to make sure I was following along. I asked myself "is this a smart guy sharing an opinion I may need some clarification on, or is this one of those guys who doesn't really know anything, but knows he doesn't like lawyers, so he's just talkin some smack because he has nothing valuable to add?"

The software doesn't do what its proponents say it does. You can tell when someone doesn't know something about AI because they suggest that in the future, it will be useful. Now me? I run a tech company, and when I'm not running a tech company, other, bigger tech companies hire me to come in and fix their problems.

Which means that I have AI running on machines, here, in my home--not ChatGPT, but self-hosted, DIY stuff--right now. I do this for research for my reports. And, like with software, NFTs, and VR before, I've been pretty on the money about this whole tech thing. I've watched the "just trust me, one day it will be useful for something" pitch, and hypotheticals so far removed from what the technology can actually do (as an example: the blockchain is just a distributed ledger! It can't guarantee software interoperability! We can do this better with an API that has no blockchain in it, and we can do it more efficiently and much, much more quickly!).

The software cannot do anything you think AI should be able to do.

AI is a machine that averages out responses to things that exist. It cannot be improved to a place where it can do more things. Because of the way it's built, because of the way it's engineered, AI cannot ever be fit for the purpose of legal research. I was actually chatting with some very patient lawyers about this the other day who were explaining to another very stupid tech bro why his use case for AI for legal stuff would never work.

Since AI has no analytical capabilities (if you type "please analyze this" it may reply "I have analyzed this and this is my conclusion," but it doesn't know what "analyze" means; it's just using the word because you did), it cannot determine what matters, what's relevant, or even find accurate information. It will generate--and only generate, never research or analyze--answers based on what was loaded into its training corpus, most of which is stolen content.

Microsoft's trying to argue that everything should be public domain so they can run AI right now, like a thief walking into your home, stealing all that you own, selling it to make a profit, and then saying "you can't punish me for the crime of theft! I am running a business! My business will fail if I am punished for theft!" But that hasn't stopped the RIAA, movie studios, and companies like Reuters from going for the throat.

This iteration of AI isn't capable of any of the things you think of when you think science fiction AI.

The machine isn't lying or hallucinating--those are the words AI proponents use because they want you to think that what they call "AI" is like the AI you see in movies, when it's actually more like a magic 8-ball that gives you answers mirroring the text in your queries--it's just generating text that's approximately similar to what you asked it to generate. That's it. That's all. There is no veracity there.

When asked if Google could ever make reliable AI, a Google exec (I think the president of Google?) said "no, the unreliability is baked in."

And all you've offered is that you don't like lawyers. I gave you a fair shot.
 
Let me simply say that nobody who is willing to reply on a public forum knows what the latest technology is capable of doing.

My thesis 25 years ago was related to Intelligent Systems. The field has evolved beyond what I could imagine. AI is just a meaningless, pedestrian buzzword. A GPT is a card trick in comparison to true hybrid complex intelligent systems. Say no more!
 
ChatGPT is kind of like horoscopes in that, if you ask it something for which it has no answer, it will produce a megaton of bullshit and sound confident about it. You then color the answer with your own biases, and then it sounds like ChatGPT came up with an answer.

I asked ChatGPT to come up with a way to travel faster than light that hasn't been thought of before in any sci-fi novel. It then told me that faster-than-light travel is impossible, although there is a possible workaround via the Alcubierre drive, though no one yet knows how to make it work. Well, thanks, but I know all that already. It told me nothing new.

I then asked ChatGPT to write very specific ham-radio-related jokes. It used "a man and his wife" jokes and just substituted "ham radio operator" for "man". They were not funny or clever. Humor is very hard to write, and ChatGPT hasn't got the hang of it yet.

So if you ask ChatGPT if it wants to rule the world, it will either tell you that's a Tears for Fears song, or it will tell you that the world is a complex place and nobody would be interested in running all of it because it's just too much work. Either way, ChatGPT may be fine for some things, but it's hardly ready for prime time.
 
Sure. You said something pretty bold there--that it wasn't the fault of the software, but the many people involved in the legal system--and I wanted to make sure I was following along. I asked myself "is this a smart guy sharing an opinion I may need some clarification on, or is this one of those guys who doesn't really know anything, but knows he doesn't like lawyers, so he's just talkin some smack because he has nothing valuable to add?"

...And all you've offered is that you don't like lawyers. I gave you a fair shot.
I hope you didn't have to use AI to figure that out. I have an opinion, like many. In its current form, AI can be used as a tool to aid in solving problems in just about any sector of society. "AI" gets better and better every day. But this form of AI isn't the end-all-be-all.

As I said in a previous post, the issue wasn't that the AI was lying; the lawyer was lying.
 
The lawyer was trusting the AI because he had been told, by the guys who try to sell the AI snake oil, that it could do things it could not do. Several billion dollars have been put into ensuring that some people will believe that.

He should have done his due diligence. He was a stupid person who believed AI would be useful. Anyone who believes AI, as it exists, would be useful is just as stupid as he is.
 
Hey @DocSusy, sounds like you're calling me stupid. Is that what you're doing? I mean, that's OK if you are, but at least be direct with the @name thing. Is it because people might not agree with you? I guess it would be best if developers wait until they get your approval.
 
Anyone who believes AI, as it exists, would be useful is just as stupid as he is.
I think it's useful, and I'm a lot of things, but stupid isn't one of them. It's definitely not useful the way he tried to use it, but it is useful in other ways: it works very well for analyzing massive data sets in research, and face recognition is quite good and useful as well. In a couple of years self-driving cars will be on the road, considering how much data they're collecting. My point is that it's no HAL 9000, but it's a tool with lots of uses.

PS: it's a rest stop, but we're nowhere near our destination yet.
 
What is the 'destination'?
 
To me the interesting part is how realistic the answers are, given that there is no logic underneath. Maybe this is more of a philosophical point, but it makes one wonder how much of human intelligence is based on learned responses. I suspect much more than we are comfortable with.

Interesting read: Thinking, Fast and Slow by Kahneman.
 
I like to call the generative toys "Big Autocorrect". There are a great many people doing Real Science on this hardware, but ELIZA MkV ain't it.
 
"Autocorrect" is an interesting case if you think about the evolution of it. Word editing started with "you get what you type". Now, not so much. It's much more "learned" than it used to be. I noticed when my Speech to Text started to correct my mumbled words that are unique to the way I speak. It moved past the "their, there, they're" (simple algorithm) to more of a "I know what you meant to say even though that sound that came out of your mouth was unrecognizable". I guess it "analyzed and learned" my speaking habits. Now, my iPhone will correct on the fly. I've even caught it adding words and changing sentence structure. What's more fun is when autocorrect starts to correct what I type (two edged sword...but that's a different thread). It even knows when I put the wrong word in spelled correctly.

To programmers this isn't a giant leap. I'm guessing that to them these are simple algorithms that just take up space. What I see happening with autocorrect in the future is me just saying: "AI, type me a paragraph on [whatever]." After I correct it to my style of writing, the AI will soon be able to do it without correction, once it "learns" my style. The next step is the AI deciding that I need to present that paragraph without asking me... The next thing you know, we've got "I'll be back!"
 
There are a great many people doing Real Science on this hardware, but ELIZA MkV ain't it.
I'm reminded that very soon, we'll be living in the world predicted by Vernor Vinge in his novel "True Names", where AI in cyberspace passes the Turing Test and is indistinguishable from actual players in a virtual world. He wrote the book in 1981 on a serial terminal connected to a time-share, but he accurately predicted everything we're encountering today.
 
In the present and past, yes.

The US Military (and probably others) have been using AI for some time. The Cobra helicopters during the Vietnam time frame had AI systems installed, used for targeting and flight control. They did not work 100%; they call that friendly fire.

Roy
Really? Got any online references? Looking at the history of AI, during the Vietnam War the best AI could do was classify symbols on Army maps. The Intel 4004 CPU wasn't available until 11/71, and that's the only CPU that could have fit in a Cobra gunship during the Vietnam War time frame.

I'm sure it had a fire control system of some kind, but I doubt it was an AI system.
Here's what ChatGPT says:
During the Vietnam war did the cobra gunship have any AI systems aboard?
During the Vietnam War, the AH-1 Cobra gunship did not have any AI systems aboard. The technology for artificial intelligence was not advanced enough at that time to be integrated into military aircraft. The Cobra gunship relied on human pilots and gunners for operation and targeting. Its systems were primarily mechanical and electronic, including avionics, weaponry, and targeting systems that required manual control.
 