Star Trek question: How did the Emergency Medical Hologram get so advanced?

And a very good argument it is.

My belief is that the way holograms work in Star Trek is that the computer takes all of the material it has on a subject (be it a series of books, or all of the video records of a real person over the course of their life) and uses those records to find the best match to the current situation, then has the hologram act in that manner. It’s sort of like how you can create an AI chatbot simply by finding the most popular answer to a particular question and returning that answer - assuming that you have access to billions of questions and their answers.
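The most-popular-answer chatbot idea above can be sketched in a few lines. This is a toy illustration, not a claim about how any real chatbot (or the Enterprise computer) is built; the corpus contents are invented for the example.

```python
from collections import Counter

# Hypothetical corpus: each question mapped to every answer ever recorded for it.
corpus = {
    "how are you?": ["fine", "fine", "great", "fine"],
    "what is the prime directive?": ["non-interference", "non-interference"],
}

def answer(question):
    """Return the most popular recorded answer, or admit ignorance."""
    answers = corpus.get(question.lower())
    if not answers:
        return "insufficient data"
    # Counter.most_common(1) yields [(answer, count)] for the top answer.
    return Counter(answers).most_common(1)[0][0]
```

With billions of real question/answer pairs instead of this toy dictionary, the same lookup-and-vote scheme can look surprisingly conversational without understanding anything.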

The ship’s computer does the same thing, but at a fancier level. It doesn’t need to understand the hows and whys of emotion; it just needs to copy-paste, and/or fudge in the gaps using a hodgepodge of related information. It might not have a good reference for this particular situation and that particular character. But it can operate like a music matching service: find the people/characters who most closely match the target character and keep searching from most-closely-related to least-closely-related, until it finds a situation close enough to the current situation to use as the copy-paste origin.
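That most-closely-related-first search can be sketched as a nearest-neighbor fallback. Everything here is invented for illustration: the word-overlap similarity metric is a crude stand-in for whatever the ship's computer would actually use, and the records are made up.

```python
def similarity(a, b):
    """Toy similarity: fraction of shared words between two phrases."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def best_match(target_character, current_situation, records, threshold=0.5):
    """Search from most- to least-closely-related character until a
    recorded situation is 'close enough' to copy-paste from."""
    # Rank source records by how similar their character is to the target.
    ranked = sorted(records,
                    key=lambda r: similarity(r["character"], target_character),
                    reverse=True)
    for record in ranked:
        if similarity(record["situation"], current_situation) >= threshold:
            return record["response"]
    return None  # no usable reference; the computer would have to guess

records = [
    {"character": "sherlock holmes", "situation": "confronted by a rival",
     "response": "deduce"},
    {"character": "hercule poirot", "situation": "greeted by a stranger",
     "response": "bow politely"},
]
```

The guessing fallback at the end is exactly where the computer can go wrong - as in the Leah Brahms example mentioned later in the thread.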

Data, on the other hand, operates by trying to “organically” recreate the neural structures and thought patterns that would parallel actual emotion.

That and the Stock Exchange.

And somewhere in there, it fills in gaps with frog DNA.
:dubious:

And somewhere in there, it fills in gaps with frog DNA.
:dubious:

See? A time loop!

A) You’re complaining that the technology on Star Trek is futuristic, bordering on magic.

B) I’m pretty sure I could find a number of quotes from TNG which specifically state that the computer is analyzing its database of material to reconstruct the “personality” of people for holographic projection, and filling in gaps with guesswork. See, in particular, the episode where Geordi meets his Twue Wub as a hologram - then later she ends up being an unlikable person, because the computer had guessed wrong.

I’m not complaining. I’m agreeing.

Several good points made here.

  1. EMH running virtually continuously, gaining experience.

  2. Time travel leading to an enhancement, which sticks due to Trek Time Travel Rules (whatever the plot requires).

  3. Voyager’s neural net gel packs, which add in the variable of being some sort of organic thing.

  4. Star Fleet engineers learning from the Bynars (and that ep where the 1701-D may have been some sort of AI).

  5. Genius designer who may be somewhat eccentric, some variables in there.

So, EMH is ultra advanced. All the fanwanks work for me.

Another thing to remember is that Data’s inability to feel emotion wasn’t ‘a lack of advancement’ - it was a deliberate choice on Dr Soong’s part, due to people being creeped out by Lore.

Data is, in short, suffering from brain damage (which is fixed in the movies).

I thought it was because Lore went psycho?

Tell me about some profitable stocks that will be around next month, and I will believe.

I’ll settle for horse races that happen next week.

Checking again, you’re right. Perhaps I was thinking of Lore’s version of the story.

Dr. Soong’s efforts to create an android kept failing due to neural net instability caused by emotion - which is what happened to Data’s “daughter” Lal. Soong thought he had fixed it when he created Lore, but all of Lore’s emotions came out twisted: arrogance, cruelty, egotism, etc. Data was going to be a second attempt, but Soong ran out of time and “solved” the problem by simply deactivating Data’s emotional centers. Much later, Data finally obtained the hardware fix that Soong eventually devised, although he had trouble with the emotional “volume” being too high.

Creating Moriarty did cause a brief power surge on the Enterprise D. But maintaining him did not cause that power surge to continue. Data, even in emotionless condition, was recognized as sentient. To defeat a sentient being requires a sentient being, so Moriarty was designed that way from the beginning. The Doctor, on the other hand, acquires sentience as an emergent property. Whether this is due to unique conditions on Voyager or a result of his running continuously is open for debate. They did, if I recall correctly, dump some useless programs in order to free up memory space for the Doctor.

Most holographic characters are simply “meat puppets” performing their programmed tasks with allowances for interactivity. If, for example, you were playing tennis with a holographic opponent and propositioned her, you’d probably get a blank stare and a “That does not compute,” or words to that effect. The program has to be written to accommodate such behavior.
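A "meat puppet" in this sense is just scripted responses plus a canned fallback for anything the author didn't anticipate. A minimal sketch (all class and response names invented for illustration):

```python
class TennisHologram:
    """Scripted opponent: programmed interactions get programmed replies;
    everything else hits the fallback."""

    SCRIPTED = {
        "serve": "Returning your serve.",
        "nice shot": "Thank you. Your backhand is improving.",
    }
    FALLBACK = "That does not compute."

    def respond(self, player_input):
        # No learning, no state - just a lookup with a default.
        return self.SCRIPTED.get(player_input.lower(), self.FALLBACK)
```

The contrast with Moriarty or the Doctor is that nothing here ever changes: the lookup table is the whole personality.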

Now Moriarty and the Doctor, and for that matter Data, are kind of revising their own programs experientially, as are we all.

The jury is still out on Vic Fontaine, who was programmed with an “awareness” of his own status as a hologram, but that was, I think, a mere cute programming trick and not creation of actual sentience. However (I’m not sure if it was in the series or the books), they did allow the “Vic” program to run continuously, so maybe Vic has “crossed the threshold” by now. Although it seems to me that “continuous” operation doesn’t much matter when computers have no way of “knowing” that they’ve been turned off for a while.

Most holographic programs are probably “reinitialized” at start to prevent this very thing unless the “player” requests the program saved to resume “play” later where he or she left off.
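The reinitialize-versus-resume distinction is just the difference between starting from a clean default state and loading a saved one. A minimal sketch, assuming a JSON save file (the filename and state fields are invented for the example):

```python
import json
import os

SAVE_FILE = "holodeck_save.json"  # hypothetical save location

def start_program(resume=False):
    """Reinitialize by default; load saved state only on explicit request."""
    if resume and os.path.exists(SAVE_FILE):
        with open(SAVE_FILE) as f:
            return json.load(f)  # pick up where the player left off
    # Reinitialized: nothing from previous runs carries over.
    return {"scene": 1, "npc_memory": []}

def save_program(state):
    """Persist the current state so a later run can resume it."""
    with open(SAVE_FILE, "w") as f:
        json.dump(state, f)
```

Under this scheme, a character like Vic only accumulates anything if someone deliberately keeps saving (or never stops) the program.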

It’s interesting that in Star Wars EU rationalizations, the reason that C3PO and R2-D2 were so much more sentient than the average droid was that they went for decades without a memory wipe, accumulating enough experiences that it pushed them over the edge. This isn’t supposed to be anything miraculous or amazing in the SW universe, and is one reason that the vast majority of droid owners perform memory wipes on a regular basis.

It has nothing whatsoever to do with the Star Trek universe in general or the EMH’s sentience in particular, but there it is.

That isn’t entirely true. If it were, rebooting a computer would not sometimes change its behavior.

Humans have long-term and short-term memory, and computers have something sort of analogous: non-volatile and volatile memory. Non-volatile is the stuff that’s saved and persists through reboots, while volatile gets dumped when you reboot, when you restart programs, and sometimes by housekeeping in the program itself.

The actual moment-to-moment operation of a program doesn’t usually get recorded to non-volatile memory, unless you take a specific action to save the state, and even then, it doesn’t always get everything. Likewise, a program’s internal housekeeping (often referred to as “garbage collection”) doesn’t necessarily clear everything. As a result, stuff can build up in the program’s state over time, and have various effects on the way the program operates.

Often, it’s just cruft that slows down operations, but sometimes having a large running dataset can cause other behavior, either planned (like having more data to analyze statistically) or unplanned (like overrunning a buffer and altering unrelated data, or even running code). Usually, the latter just makes things break, but if the program is big enough, and robust enough, it might just cause different responses when the particular subroutine that changed is called.
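The volatile-state-buildup idea can be sketched with a toy responder whose behavior drifts as its in-memory history grows, and snaps back to baseline on "reboot". This is purely illustrative; the threshold and messages are invented.

```python
class Responder:
    """Long-running program whose responses change as volatile state
    accumulates; a reboot discards that state entirely."""

    def __init__(self):
        self.history = []  # volatile: exists only in memory, lost on reboot

    def respond(self, prompt):
        self.history.append(prompt)
        # Planned behavior change: enough accumulated data alters the response.
        if len(self.history) > 3:
            return f"You ask about {prompt} a lot lately."
        return f"Processing: {prompt}"

    def reboot(self):
        self.history.clear()  # volatile memory dumped; behavior resets
```

Here the drift is deliberate, but the same mechanism - state that survives between calls but not between reboots - is why restarting a real program can change its behavior.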

Entropy is another factor. Programs sometimes use entropy pools to generate pseudorandom numbers. These pools rely on physical processes that can only replenish them so quickly, so it’s possible for a program to deplete its entropy source. If the program is never shut down, the entropy pools don’t get a chance to catch up. That can cause emergent patterns, because the “random” numbers become less random. If you’ve got a program that is designed to learn things, like an expert system, those patterns might manifest as something like the program developing “habits”, or preferences, or just persistent idiosyncrasies–things we recognize as elements of a personality.
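The entropy-depletion-becomes-habit idea can be modeled with a toy chooser: while its "entropy pool" lasts, choices are random; once it is drained (and never replenished, because the program never shuts down), a fixed repeating pattern emerges. This is a deliberately simplified model of the paragraph above, not how real entropy pools behave; every name and number is invented.

```python
import random

class HabitFormingChooser:
    """Random while entropy lasts; a deterministic 'habit' afterwards."""

    def __init__(self, options, entropy_budget=5, seed=None):
        self.options = options
        self.entropy_budget = entropy_budget  # toy stand-in for an entropy pool
        self.rng = random.Random(seed)
        self.cycle_index = 0

    def choose(self):
        if self.entropy_budget > 0:
            self.entropy_budget -= 1
            return self.rng.choice(self.options)
        # Entropy exhausted: choices fall into a short repeating cycle,
        # the kind of persistent idiosyncrasy we might read as personality.
        choice = self.options[self.cycle_index % len(self.options)]
        self.cycle_index += 1
        return choice
```

In a learning system, such emergent regularities could get reinforced and start looking like preferences rather than bugs.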

This is all handwavium, of course. I can’t point to anything in real computers and say, “This is how sentience would arise in those science-fiction computers.” I can’t even say how much of current computer science would even apply. All I can do is point out that the handwavium is not necessarily inconsistent with what we know.

C’mon, guys. Someone? Someone?

I think you mean Reginald Barclay. Prominently featured in Moriarty’s second appearance on TNG, and, years later, involved in the EMH development program. It makes so much sense, I have always assumed that it’s subtle official continuity.