Koko the gorilla might have been, but turned out to be far removed (largely because she faked knowing sign language, and her handlers didn’t know sign well enough to see she was faking it).
Language comprehension is often mistaken for sentience, but everything communicates on some level; developing a means of communication doesn’t necessarily imply sentience. By default, mammals are conscious because of the way they propagate themselves through different means in complex environments. OTOH, bacteria are not conscious because their propagation is mostly driven by chemical responses. For example, individual bacteria are not engaging in game-theoretic interactions with other bacteria over resources for self-propagation.
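To make the game-theoretic contrast concrete, here’s a toy hawk–dove payoff in Python. This is a minimal sketch, and the payoff numbers are arbitrary illustrative assumptions, not biology:

```python
# Toy hawk-dove game over a shared resource: the kind of strategic,
# payoff-sensitive interaction attributed above to conscious organisms
# but not to individual bacteria. Numbers are illustrative only.

V = 4  # value of the contested resource
C = 6  # cost of an escalated fight

def payoff(me, other):
    """Payoff to `me` when each player plays 'hawk' or 'dove'."""
    if me == "hawk" and other == "hawk":
        return (V - C) / 2   # both escalate: split value, share fight cost
    if me == "hawk" and other == "dove":
        return V             # hawk takes the whole resource
    if me == "dove" and other == "hawk":
        return 0             # dove backs off, gets nothing
    return V / 2             # two doves share peacefully
```

Because C > V here, an all-hawk population is unstable: a hawk meeting hawks averages (V − C)/2 = −1, worse than a dove’s 0, which is why strategy, and not just chemistry, matters in these interactions.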
Heck, we can’t be certain the person who wakes up every morning is the same as the person who went to sleep.
Yes, we can on many levels. I am not sure who says these things, any links?
Yes, we can on many levels. I am not sure who says these things
Never heard of the “universe started Thursday” theory?
Essentially, there is no proof that the universe didn’t start last Thursday. All of your memories, your experiences, your tangible progress, could be planted and you would never know.
So how do you know you are “you” as you think you are, or if you’re just a week old construct that believes you are “you”?
Also, I think the whole “we can’t be certain we are the same person who wakes up every morning” idea is based on the Ship of Theseus concept they were building on.
You wouldn’t consider yourself the exact same person you were when you were 5, for obvious reasons. So it stands to reason that the change happened at some point. How would you know that you did not change overnight? And if you did, are you the same person as yesterday? If you answer yes, where’s the line? Are you the same person as last year? 5 years ago? Obviously not, so how can you know that that caliber of change hasn’t happened to you in a night, or whether any amount of change makes you someone else?
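The compounding-change argument can be sketched in a few lines; the 1%-per-night turnover rate below is an arbitrary assumption, purely for illustration:

```python
# Sorites sketch: no single night of change crosses a line,
# yet the changes compound. Nightly rate is illustrative only.

def nights_until(fraction_remaining, nightly_rate=0.01):
    """Nights of steady turnover until only `fraction_remaining`
    of the original is left."""
    remaining, nights = 1.0, 0
    while remaining > fraction_remaining:
        remaining *= 1 - nightly_rate
        nights += 1
    return nights
```

At 1% per night, less than half of the original remains after `nights_until(0.5)` = 69 nights, without any one night feeling like a change of identity.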
Also, they could be referring to the broken consciousness theory, where consciousness is destroyed when you fall asleep, created when you wake, and dreams are an illusion.
In that scenario, if your stream of consciousness actually is broken, can you say you are the same person as yesterday? If the breaking of consciousness doesn’t matter in that question, would a perfect copy of you with all of your memories also be you? Or not, because you can’t experience their perspective?
I think the break here is whether or not you can define consciousness as “you”. For your supposition to be true, the answer would necessarily have to be no, as you said you can prove that you are yourself in many other ways.
But without a point of perspective experiencing the universe, what are we?
Ah rationalism, my mind blocks out unpleasant things all the time. There’s no proof the Universe wasn’t farted out by God. In this case, I guess I’d treat it as any other fantastical statement: extraordinary claims require extraordinary evidence.
I think people who liken consciousness to a collection of properties make the same mistake as people who treat language comprehension as a property of consciousness. You cannot put discrete elements together and call the result a ship any more than you can put discrete elements together and call it conscious. A translation app can see and translate images from one language to another, but the app is not conscious.
Regarding the changes a conscious being experiences over time: you can change on a chemical level (as one does over time), on a genetic level (this also happens to any living thing over time), and on an organism level, but you remain the same person (even after you get up from sleeping) because you maintain some internally directed sense of self. Internal self-direction is a key property of consciousness.
In this case, I guess I’d treat it as any other fantastical statement: extraordinary claims require extraordinary evidence.
Ah, so this conversation doesn’t matter. You made up your mind before you even asked for an explanation.
By design, philosophical concepts neither require nor can produce proof. If they could, they literally wouldn’t be philosophy. If your idea of arguing how “you” exists includes the line of reasoning that you need proof, then the truth to you is that “you” don’t exist, because you cannot prove your consciousness to someone else either. Just the same as I cannot empirically prove my consciousness to you. You are an amalgamation of chemicals and genetics, as you said.
So really, one taking your stance doesn’t have the conversational authority to even ask what proof is there. The hard evidence is just chemical reactions and genetics all the way down.
In any case, all three of the concepts I listed are not my ideas. They are debated topics, some for literally centuries, in the philosophical world. If you suppose yourself better than the likes of Plato or Socrates because you think you can label a fundamental aspect of the universe as a “mistake” people make when they think about it, then there’s really no honest way you can even approach theories like those without immediately discrediting them.
I guess have fun with that. But for me, there’s no point in contemplating with someone who supposes that proof precedes basic concepts of philosophy in a question inherently about philosophy.
By design, philosophical concepts neither require nor can produce proof.
Hmm, well-reasoned thought experiments apply their logic in a way that provides a structural basis for their arguments, and that structure serves as the proof of the thought experiment.
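That is the sense in which a thought experiment can carry its own proof: the structure is checkable. As a minimal sketch, here is a brute-force validity check for modus ponens (and for a classic fallacy) by enumerating truth assignments:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q
    return (not p) or q

def modus_ponens_valid():
    """From P and P->Q, infer Q: valid iff no truth assignment
    satisfies the premises while falsifying the conclusion."""
    return all(q for p, q in product([True, False], repeat=2)
               if p and implies(p, q))

def affirming_consequent_valid():
    """From Q and P->Q, infer P: the classic invalid counterpart."""
    return all(p for p, q in product([True, False], repeat=2)
               if q and implies(p, q))
```

`modus_ponens_valid()` comes back True; `affirming_consequent_valid()` comes back False, with the countermodel P false, Q true.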
If your idea of arguing how “you” exists includes the line of reasoning that you need proof, then the truth to you is that “you” don’t exist, because you cannot prove your consciousness to someone else either. Just the same as I cannot empirically prove my consciousness to you.
As a conscious being I prove my existence by engaging with external stimuli (like other people) while maintaining an internally motivated and directed sense of self for my self-propagation. Absent any need for self-propagation, being only something that reacts or responds, I would be a humble bot. An amalgamation of chemicals and genetics can react and respond, but it’s not necessarily motivated towards self-propagation.
Viruses are interesting in that they behave like self-propagating organisms, but at an individual level they lack the capacity to adjust their responses strategically to external stimuli. That is, the adaptive response of viruses is left up to random genetics (same as for bacteria), which precludes consciousness.
I cannot speak to whether you are a conscious being, in case you’re a bot. Bots can pass the Turing test, but passing the Turing test doesn’t necessarily guarantee consciousness.
All this to say, feel free to share your thoughts. I am not close-minded, even if I am strongly biased towards some ideas over others. Arguments should be judged on their strength, I think.
Evidence-based discussion is only tangentially related to philosophy. There’s no point in sharing my thoughts if the crux of your counterpoint essentially boils down to “prove it or go home.”
In the meantime, if I can present three separate, historical philosophical ideas to you and you can shoot them all down with one phrase demanding proof and a supposition that everyone else is just mistaken, you may want to reexamine your idea of an open mind.
You have engaged a philosophical topic with evidence-based expectations. I recognize the futility of continuing this conversation, and so I won’t. Making a point and being countered with “maybe you’re just wrong” is literally a waste of my time.
I did more than enough to clarify the original person’s point. I don’t owe you a scientific explanation for that which you refuse to consider.
Later.
I don’t know what you think is happening here, sorry, I am confused.
Anyway, don’t worry about it! When I say proof, I mean something like this: https://milnepublishing.geneseo.edu/concise-introduction-to-logic/chapter/4-proofs/
Again, philosophy is only tangentially related to proof. You can’t examine a theory like the Ship of Theseus with any of those methods and come out with a conclusive answer. If you could, it wouldn’t be a philosophical topic.
You don’t understand that, and I’m not going to attempt the impossible to prove it to you. That’s why this conversation is meaningless and I don’t really wish to continue it.
Have a good night
Okey doke, do as you wish! FYI though, I wasn’t asking for a “proof of the Ship of Theseus”, more about how one derives that you’re not the same conscious entity before and after going to sleep. I think I’ll go do some reading; I’m sure someone’s said something somewhere about this.
Realistically, I am just going to look at more meemees and go to sleep.
Again, philosophy is only tangentially related to proof.
Edit: I disagree with this again based on previously stated reasons. Philosophy has never been without reason or logic :)
As a conscious being I prove my existence by engaging with external stimuli…
…Bots can pass the Turing test, but passing the Turing test doesn’t necessarily guarantee consciousness.
This is part of the problem. We don’t have a consistent definition for consciousness any more than we have a definition for AGI. (An AGI could, by reading the instructions, build flat-packed furniture or make coffee; but would a bot that could do these things be AGI?)
We assume the people we talk to are conscious, but they could be Turing-test-passing bots, or a Chinese room, or a p-zombie. You’ve essentially argued that you cannot demonstrate to us that you are actually conscious, only that you seem so convincingly.
Similarly, if I were to argue that I’m not conscious, but an advanced iteration of an AI program practicing speaking from a private lab in Sacramento, California, and in fact have no life beyond going online and pretending to be a person, you’d have no way of establishing this as true or false.
So appealing to consciousness is useless, because we can’t actually say what it is. Again, we don’t have any edge cases: something that is nearly conscious and appears to be but isn’t, or something that is conscious but only barely. We assume that anything we can engage as human is conscious, often leading to peculiar results like Sophia, the robot-yet-Saudi-citizen that isn’t even convincingly sophisticated.
I’d argue that we want to be more than a material chain reaction, to the point that we’re frightened of considering the bare minimum we would need to be in order to be convincingly ourselves.
We assume the people we talk to are conscious, but they could be Turing-test-passing bots, or a Chinese room, or a p-zombie. You’ve essentially argued that you cannot demonstrate to us that you are actually conscious, only that you seem so convincingly.
Right, except non-digital beings have different modes of interaction with the universe. More than that, a key difference is that conscious beings impose themselves on the universe in an effort to self-propagate. While we’re just interacting digitally I cannot ensure anything, but that by itself doesn’t mean anything.
I am not sure of anything, but I dislike reading the same ideas that don’t seem to add up as far as I am concerned shrug
Again, we don’t have any edge cases: something that is nearly conscious and appears to be but isn’t, or something that is conscious but only barely
But we do: people in comas, for example. They maintain personality and memory after persistent unconscious periods, which differentiates them from unconscious organisms and also argues against the reconstruction hypothesis.
I’d be interested in an elaboration on how you assert the person who wakes up every morning can be certain they’re the same person who went to sleep the night before.
The notion that we might not be comes up in multiple places, but is largely an extrapolation of the Transporter Paradox, in which continuity is the only known link we have between something in two states (as per the Ship of Theseus). AI programmers contemplate it when they have to reboot their test subjects (which are related to, but not the same as, LLMs or generative-AI projects; rather, they are efforts towards creating AGI). When an AI is rebooted, is it the same entity as it was beforehand? In the webcomic Freefall the robots consider this, and a large bloc of them are not keen on upgrades; Mark Stanley gets deep into the discussion within the comic.
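The reboot question has a crisp analogue in code: a restored snapshot can be qualitatively identical to the original while being a numerically distinct thing. The `Agent` class below is a made-up stand-in, not any real AI framework:

```python
import copy

class Agent:
    """Hypothetical rebootable agent; purely illustrative."""
    def __init__(self, memories):
        self.memories = list(memories)

    def __eq__(self, other):
        # Qualitative identity: same state, memory for memory.
        return isinstance(other, Agent) and self.memories == other.memories

before = Agent(["learned chess", "met the programmers"])
after_reboot = copy.deepcopy(before)  # "reboot": rebuild from a snapshot

same_state = before == after_reboot    # indistinguishable state
same_object = before is after_reboot   # but continuity is broken
```

`same_state` is True while `same_object` is False; whether the second fact matters is exactly what the robots in Freefall are arguing about.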
CGP Grey noted in his Transporter Paradox video that sleep might be the same as a transporter event, since the brain’s cerebellum shuts down to a state of unconsciousness in NREM (slow-wave) sleep; and in fact, as old people approach death they experience increasing amounts of NREM sleep until, if they are lucky, they just don’t wake up.
exurb1a’s video Sleep is just death being shy is a philosophical look at this phenomenon.
So yeah: without any kind of established spiritual phenomenon (for which there is absolutely zero evidence; we’ve checked at length), the only thing linking who you are when you wake up to who you were when you went to sleep is the consistency of the material world matching (more or less) the memories of the person waking. This gets weirder when unconsciousness extends longer than a night’s rest (such as going unconscious due to anesthetic, or a coma state).
Who we are is a very ephemeral state, a quasi-stable event. And we exist longer than a day of consciousness only because we define our narratives that way. Some creators, like Philip K. Dick, have notoriously challenged this by offering narratives in which continuity and identity are unreliable.
The notion that we might not be comes up in multiple places, but is largely an extrapolation of the Transporter Paradox, in which continuity is the only known link we have between something in two states (as per the Ship of Theseus). AI programmers contemplate it when they have to reboot their test subjects (which are related to, but not the same as, LLMs or generative-AI projects; rather, they are efforts towards creating AGI).
The first idea we need to separate out is conscious as in awake versus conscious as in maintaining an internal and external state that enables both self-reflection and reflection on the environment. Digital beings certainly manage the first type of consciousness, and AI development wants to move them to the second. The reason current LLMs are parrots is that they’re incapable of self-reflection to a persistent, long-term degree. Think of a human or a complex biological organism: these beings maintain an internal and external state through some kind of memory persistence, which enables all manner of reflection across different nodes of interaction. In fact, memory issues are often associated with personality changes or with degradation in interacting with external stimuli. The consciousness of non-digital beings is therefore dependent on the duration and persistence of their memory, and on their capability to learn and adapt to external and internal stimuli. Note: an individual memory is imperfect, but an emergent property of memory persistence is a cumulative state that averages out to some degree of consistency over time, for organisms without memory issues.
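The parrot-versus-persistent-state distinction can be sketched as two responders. This is a deliberately crude illustration, not a claim about how any real LLM works:

```python
def stateless_reply(message):
    """A parrot: nothing survives between calls."""
    return f"echo: {message}"

class PersistentAgent:
    """Carries state across interactions, enabling a crude
    form of reflection on its own history."""
    def __init__(self):
        self.memory = []

    def reply(self, message):
        self.memory.append(message)
        return f"echo: {message} [seen {len(self.memory)} messages]"

agent = PersistentAgent()
first = agent.reply("hello")
second = agent.reply("hello")  # same input, different reply: state changed
```

The stateless responder gives identical answers to identical inputs forever; the stateful one’s behaviour depends on its own past, which is the minimal ingredient being pointed at here.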
Why sleep doesn’t affect the state of the conscious being: while sleep does induce dreaming (memory reformatting), conscious beings can differentiate between dream and non-dream states upon waking, depending on the complexity of the organism. In fact, wakefulness has a unique ability whereby memory reformatted during sleep is reconsidered, or reflected upon, against a prior existing state. I think the first misguided idea is thinking of NREM sleep as a state of unconsciousness that impacts overall memory persistence during the awake state. We know from our own experience as people who go to sleep and wake up every day that we’re able to rewrite what we dreamt, to reconcile it and make sense of it against our persistent internal and external state. However, again, this depends on the cognitive capacity of the organism, and at the very least personality aspects are maintained across different types of organisms (like your dog or cat, etc.).
Digital beings also maintain persistent memories after sleep states, though a key difference is that reformatted memory is inaccessible to digital beings. Here is where AI development wants to make progress: enabling a machine state that can acquire long-term memory persistence despite reformatting. This type of long-term memory persistence and reconfiguration ability is key to the learning and adaptation processes of complex biological organisms.
How can we ensure either a human or a digital being is the same after it goes to sleep? If you plant false memories during sleep, how does that change the being’s interaction with its external environment? And does the being have the capacity to reconcile incongruent memories? We know human and complex-organism memory is susceptible to false memories during sleep: https://amp.theguardian.com/science/neurophilosophy/2015/mar/09/false-memories-implanted-into-the-brains-of-sleeping-mice
However, one key aspect is that we have some understanding of how to differentiate between true and false memories on a neurological level, even if the conscious being (for any number of complex reasons) may fool itself: https://www.biorxiv.org/content/10.1101/2020.10.21.349530v1.full
Being able to retain and maintain long-term memory persistence (which enables internal and external state maintenance), and using it to reconcile interactions with internal and external stimuli, ensures a conscious being maintains its state after sleep.
Okay, now: how do you know you’re not simply reconstructed by something? How do you know it’s the you that went to sleep, rather than a reconstruction? First, a key property of the universe we inhabit is its tendency towards an entropy-maximizing state. Second, stochastic processes exist in any system. So a perfect reconstruction system does not exist (no matter how tantalizing it is to think about).
Therefore, a transporter, or a transformer that continually reconstructs something, will inevitably introduce changes over time that differentiate the organism each time it is used. What is the timescale of this? Well, if you’re being reconstructed every time you sleep, including short naps or forced closures because of tiredness, then you’re introducing changes with each sleep, and these changes will add up quickly, to the point that you may develop a different state of memory persistence from day to day. More importantly, we would experience this at a population level, and to a degree perceptible enough to affect interactions with other organisms. Like: your dog wakes up to find you an enemy. And not just your dog, but every dog everywhere, waking up different each day, eventually becoming distinguishable from its past self to the point of alienation (i.e., not a simple personality change, as expected from aging or interactions with the environment).
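The claim that imperfect reconstruction compounds can be simulated directly; the state size and per-night error rate below are arbitrary illustrative numbers:

```python
import random

def drift_after(nights, state_size=1000, error_rate=1e-4, seed=0):
    """Fraction of a binary 'state' differing from the original after
    `nights` imperfect reconstructions (each element flips with
    probability `error_rate` per night)."""
    rng = random.Random(seed)
    state = [0] * state_size
    for _ in range(nights):
        for i in range(state_size):
            if rng.random() < error_rate:
                state[i] ^= 1  # reconstruction error flips this element
    return sum(state) / state_size
```

Each night disturbs only about 0.01% of the state, yet after a year a few percent of it differs from the original; the argument above is that we would notice drift of that kind at the population level, and we don’t.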
Who we are is a very ephemeral state, a quasi-stable event. And we exist longer than a day of consciousness only because we define our narratives that way.
I don’t know; feel free to act differently than you usually do, and see how you feel about it. It’s not just “narrative”. Internal state maintenance and interaction with external stimuli are a consequence of emergent biological processes that are more than our self-storytelling capabilities.
Feel free to write this off as uninformed rambling, I am not anyone special enough for anyone to listen to me anyways lol
Language comprehension is often mistaken for sentience, but everything communicates on some level. Developing a means for communication doesn’t necessarily imply sentience. By default, Mammals are conscious because of the way they propagate themselves through different means in complex environments. OTOH, bacteria are not conscious because their propagation is mostly driven by chemical responses. For example, indv. bacteria are not engaging in game theoretic interactions with other bacteria over resources for self-propagation.
Yes, we can on many levels. I am not sure who says these things, any links?
Never heard of the “universe started Thursday” theory?
Essentially, there is no proof that the universe didn’t start last Thursday. All of your memories, your experiences, your tangible progress, could be planted and you would never know.
So how do you know you are “you” as you think you are, or if you’re just a week old construct that believes you are “you”?
Also, I think that the whole “we can’t be certain we are the same person who wakes up every morning” is based on the ship of theseus concept they were building on.
You wouldn’t consider yourself the exact same person you were when you were 5 for obvious reasons. So it stands to reason that that change happened at some point. How would you know that you did not change over night? And if you did, are you the same person as yesterday? And if you answer yes, where’s the line? Are you the same person as last year? 5 years ago? Obviously not, so how can you know that that caliber of change hasn’t happened to you in a night, or that any amount of change makes you someone else?
Also, they could be referring to the broken consciousness theory, where consciousness is destroyed when you fall asleep, created when you wake, and dreams are an illusion.
In that scenario, if your stream of consciousness actually is broken, can you say you are the same person as yesterday? If the breaking of consciousness doesn’t matter in that question, would a perfect copy of you with all of your memories also be you? Or not, because you can’t experience their perspective?
I think the break here is whether or not you can define consciousness as “you”. For your supposition to be true, the answer would necessarily have to be no, as you said you can prove that you are yourself in many other ways.
But without a point of perspective experiencing the universe, what are we?
Ah rationalism, my mind blocks out unpleasant things all the time. There’s no proof the Universe wasn’t farted out by God. In this case, I guess I’d treat it as any other fantastical statement: extraordinary claims require extraordinary evidence.
I think people who liken consciousness to a collection of properties make the same mistake as people attributing language comprehension as a property of consciousness. You cannot put discrete elements together and call something a ship any more than you can put discrete elements together and call it conscious. A translation app can see and translate images from one language to another, but the app is not conscious.
Regarding the changes a conscious being experiences over time: you can change on a chemical level (as one does over time), you can change on a genetic level (this also happens for any living thing over time), and you can change over an organism level, but you remain the same person (even after you get up from sleeping) because you maintain some internally directed sense of self. Internal self-direction is a key property of consciousness.
Ah, so this conversation doesn’t matter. You made up your mind even before you even asked for explanation.
By design, philosophical concepts neither require nor can produce proof. If they could, they literally wouldn’t be philosophy. If your idea of arguing how “you” exists includes the line of reasoning that you need proof, then the truth to you is that “you” don’t exist, because you cannot prove your consciousness to someone else either. Just the same as I cannot empirically prove my consciousness to you. You are an amalgamation of chemicals and genetics, as you said.
So really, one taking your stance doesn’t have the conversational authority to even ask what proof is there. The hard evidence is just chemical reactions and genetics all the way down.
In any case, all three of the concepts I listed are not my ideas. They are debated topics, some for literally centuries, in the philosophical world. If you suppose yourself better than the likes of Plato or Socrates because you think you can label a fundamental aspect of the universe as a “mistake” people make when they think about it, then there’s really no honest way you can even approach theories like those without immediately discrediting them.
I guess have fun with that. But for me, there’s no point in contemplating with someone who supposes that proof precedes basic concepts of philosophy in a question inherently about philosophy.
Hmm, well-reasoned thought experiments apply their logic in a way that provides a structural basis for maintaining their arguments, which serves as the proof for the thought experiment.
As a conscious being I prove my existence by engaging with external stimuli (like other people) while maintaining an internally motivated and directed sense of self for my self-propagation. Absent any need for self-propagation and being only something that reacts or responds, l would be a humble bot. An amalgamation of chemicals and genetics can react and respond, but it’s not necessarily motivated for self propagation.
Viruses are interesting in that they behave like self propagating organisms, but at an indv. level they lack the capacity for adjusting their responses strategically to external stimuli. That is, the adaptive response for virii is left up to random genetics (same as for bacteria), which precludes consciousness.
I cannot speak to you being a conscious being in case you’re a bot. Bots can pass the Turing test, but passing the Turing test doesn’t necessarily guarantee consciousness.
All this to say, feel free to share your thoughts. I am not close minded, even if I am strongly biased towards some ideas vs others. Arguments should be judged based on their strength, I think.
Evidenced-based discussion is only tangentially related to philosophy. There’s no point in sharing my thoughts if the crux of your counterpoint essentially boils down to “prove it or go home”
In the meantime, if I can present three separate, historical philosophical ideas to you and you can shoot them all down with one phrase demanding proof and a supposition that everyone else is just mistaken, you may want to reexamine your idea of an open mind.
You have engaged a philosophical topic with evidence-based expectations. I recognize the futility of continuing this conversation, and so I won’t. Making a point and being countered with “maybe you’re just wrong” is literally a waste of my time.
I did more than enough to clarify the original person’s point. I don’t owe you a scientific explanation for that which you refuse to consider.
Later.
I don’t know what you think is happening here, sorry I am confused.
Anyway don’t worry about it! When I say proof, I mean something like this: https://milnepublishing.geneseo.edu/concise-introduction-to-logic/chapter/4-proofs/
Again, philosophy is only tangentially related to proof. You can’t examine a theory like the ship of theseus with any of those methods and come out with a conclusive answer. If you could, it wouldn’t be a philosophical topic.
You don’t understand that, and I’m not going to attempt the impossible to prove it to you. That’s why this conversation is meaningless and I don’t really wish to continue it.
Have a good night
Okey doke, do as you wish! FYI, though I wasn’t asking for a “proof of the Ship of Theseus”, more about how one derives that you’re not the same conscious entity before and after going to sleep. I think I’ll go do some reading, I am sure someone’s said something somewhere about this.
Realistically, I am just going to look at more meemees and go to sleep.
Edit: I disagree with this again based on previously stated reasons. Philosophy has never been without reason or logic :)
As a conscious being I prove my existence by engaging with external stimuli…
…Bots can pass the Turing test, but passing the Turing test doesn’t necessarily guarantee consciousness.
This is part of the problem. We don’t have a consistent definition for consciousness anymore than we have a definition for AGI. (AGI can, by reading the instructions, build flat-packed furniture, or make coffee, but would a bot that could do these things be AGI?)
We assume the people we talk to are conscious, but then they could be Turing complete bots, or a Chinese room, or a p-zombie. You’ve essentially argued that you cannot demonstrate to us that you are actually conscious, only that you seem so convincingly.
Similarly, if I were to argue that I’m not conscious, but an advanced iteration of an AI program practicing speaking from a private lab in Sacramento California, and in fact, have no life beyond going online and pretending to be a person, you’d have no way of establishing this as true or false.
So appealing to consciousness is useless on account that we can’t actually say what it is. Again, we don’t have any edge cases of anything that is nearly conscious and appears to be, but isn’t, or something that is conscious but only barely. We assume that anything we can engage as human is, often leading to peculiar results like Sophia, the Robot-yet-Saudi-citizen that isn’t even convincingly sophisticated.
I’d argue that we want to be more than a material chain reaction, to the point that we’re frightened of considering the bare minimums that we would need to be to be convincingly ourselves.
Right, except non digital beings have different modes of interactions with universe. In fact, more than that a key difference is that conscious beings impose themselves on the universe in an effort to self propagate. While we’re just interacting digitally, I cannot ensure anything, but that by itself doesn’t mean anything.
I am not sure of anything, but I dislike reading the same ideas that don’t seem to add up as far as I am concerned shrug
But we do, people in comas for example. They maintain personality and memory after persistent unconscious periods, which differentiates them from both unconscious organisms, and also precludes the reconstruction hypothesis.
I’d be interested in an elaboration on how you assert the person who wakes up every morning can be certain they’re the same person who went to sleep the night before.
The notion that we might not be comes up in multiple places, but is largely an extrapolation of the Transporter Paradox, in which continuity is the only known link we have between some things in two states (as per the Ship of Theseus). AI programmers contemplate it when they have to reboot their test subject (which are related to, but not the same as LLMs or Generative AI projects, rather are efforts towards creating AGI). When an AI is rebooted, is it the same entity as it was beforehand? In the webcomic Freefall this is considered by robots, and while a large bloc of robots are not keen on upgrades. Mark Stanley gets deep into the discussion within the comic
CGP Grey noted in his Transporter Paradox video that sleep might be the same as a transporter event since the brain’s cerebellum shuts down to a state of unconsciousness in NREM sleep (SWS sleep) and in fact, as old people approach death they experience increasing amounts of NREM sleep until, if they are lucky, they just don’t wake up.
exurb1a’s video Sleep is just death being shy is a philosophical look at this phenomenon.
So yeah, without any kind of established spiritual phenomenon (for which there is absolute zero evidence – we’ve checked at length) the only thing linking who you are when you wake up, and who you were when you went to sleep is the consistency of the material world matching (more or less) the memories of the person waking, which gets weirder when unconsciousness extends longer than a night’s rest (such as going unconscious due to anesthetic or a coma state).
Who we are is a very ephemeral state, a quasi-stable event. And we exist longer than a day of consciousness only because we define our narratives that way. And some creators like Phillip K. Dick have notoriously raised challenges to this by offering narratives in which continuity and identity are unreliable.
The first idea we need to separate is being conscious (awake) from being conscious (maintaining an internal and external state that enables both self-reflection and reflection on the environment). Digital beings certainly manage the first kind of consciousness, and AI development wants to move them to the second. The reason current LLMs are parrots is that they’re incapable of self-reflection in any persistent, long-term way. Think of a human or any complex biological organism: these beings maintain internal and external state through some kind of memory persistence, which enables all manner of reflection across different interactions. In fact, memory problems are often associated with personality changes or with degraded responses to external stimuli. The consciousness of non-digital beings therefore depends on the duration and persistence of their memory and on their capability to learn from and adapt to external and internal stimuli. Note: any individual memory is imperfect, but an emergent property of memory persistence is a cumulative state that averages out to some degree of consistency over time, at least for organisms without memory disorders.
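To make the parrot-versus-persistent distinction concrete, here is a minimal toy sketch (the class names are hypothetical, not any real LLM API): a stateless responder computes each reply from the prompt alone, while a stateful agent carries its interaction history forward and can notice repetition, the most basic form of reflecting on its own past.

```python
class StatelessParrot:
    """No internal state survives between turns; every reply is
    computed from the current prompt alone."""
    def respond(self, prompt):
        return f"echo: {prompt}"


class PersistentAgent:
    """Keeps an internal memory that each turn can read and update,
    the minimal ingredient for reflection across interactions."""
    def __init__(self):
        self.memory = []  # persistent internal state

    def respond(self, prompt):
        seen_before = prompt in self.memory  # consult past state
        self.memory.append(prompt)           # update internal state
        return f"echo: {prompt} (turn {len(self.memory)}, repeat={seen_before})"


parrot = StatelessParrot()
agent = PersistentAgent()
parrot_replies = [parrot.respond("hello"), parrot.respond("hello")]
agent_replies = [agent.respond("hello"), agent.respond("hello")]
```

The parrot gives identical replies to identical prompts; the agent’s second reply differs because its internal state changed in between, which is the property this comment argues consciousness depends on.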
Why sleep doesn’t break the state of a conscious being: while sleep does induce dreaming (memory reformatting), conscious beings can differentiate between dream and non-dream states upon waking, depending on the complexity of the organism. In fact, wakefulness has a unique ability: reformatted memory from sleep gets reconsidered and reflected upon against the prior existing state. I think the first misguided idea is treating NREM sleep as a state of unconsciousness that disrupts overall memory persistence in the waking state. We know from our own experience, as people who go to sleep and wake up every day, that we’re able to rewrite what we dreamt to reconcile it with, and make sense of, our persistent internal and external state. Again, this depends on the cognitive capacity of the organism, but at the very least personality is maintained across sleep for many kinds of organisms (like your dog or cat, etc.).
Digital beings also maintain persistent memories after sleep states; a key difference is that reformatted memory is inaccessible to the digital being itself. This is where AI development wants to make progress: enabling a machine state that keeps long-term memory persistence despite reformatting. That kind of long-term persistence and reconfiguration ability is key to the learning and adaptation processes of complex biological organisms.
How can we ensure either a human or a digital being is the same after it goes to sleep? First: if you plant false memories during sleep, how does that change the being’s interaction with its external environment? Second: does the being have the capacity to reconcile incongruent memories? We know that human and complex-organism memory is susceptible to false memories planted during sleep: https://amp.theguardian.com/science/neurophilosophy/2015/mar/09/false-memories-implanted-into-the-brains-of-sleeping-mice
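As a toy illustration of that second question, here is a hedged sketch; the names and the matching rule are invented for illustration and are a crude stand-in for the neurological true/false-memory signatures the linked studies discuss. A memory is flagged as incongruent when the external record never corroborates it.

```python
def flag_incongruent(memories, environment_log):
    """Return memories with no corroboration in the external record.

    Toy reconciliation rule: exact-match lookup against the 'material
    world'. Real reconciliation is vastly messier, of course."""
    corroborated = set(environment_log)
    return [m for m in memories if m not in corroborated]


memories = ["fed the dog", "won the lottery", "went to work"]
environment_log = ["fed the dog", "went to work"]  # the consistent material world

planted = flag_incongruent(memories, environment_log)
```

This is the same check described above for the person waking up: the material world (the log) either matches the memories or it doesn’t, and the mismatches are the candidates for having been planted.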
However, one key aspect is that we have some understanding of how to differentiate true from false memories at the neurological level, even if the conscious being (for any number of complex reasons) may fool itself: https://www.biorxiv.org/content/10.1101/2020.10.21.349530v1.full
Being able to retain long-term memory persistence (which enables internal and external state maintenance), and using it to reconcile interactions with internal and external stimuli, is what ensures a conscious being maintains its state after sleep.
Okay, now: how do you know you’re not simply reconstructed by something? How do you know it’s the you that went to sleep rather than a reconstruction? First, a key property of the universe we inhabit is its tendency toward an entropy-maximizing state. Second, stochastic processes exist in any system. So a perfect reconstruction system does not exist (no matter how tantalizing it is to think about).
Therefore, a transporter, or any machine that continually reconstructs something, will inevitably introduce changes over time that differentiate the organism with each use. What is the timescale of this? Well, if you’re being reconstructed every time you sleep, including short naps or forced shutdowns from tiredness, then you’re introducing changes with each sleep, and these changes add up quickly, to the point that your state of memory persistence could differ from day to day. More importantly, we would see this at the population level, at a degree perceptible enough to affect interactions between organisms. Imagine your dog waking up to find you an enemy; and not just your dog, but every dog, waking up different each day, diverging from its past self to the point of alienation (i.e., not the simple personality change expected from aging or interaction with the environment).
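The “changes add up quickly” claim can be sketched numerically. Below is a toy Monte Carlo model (all parameters invented for illustration): a “memory state” of 2,000 bits is imperfectly copied once per night, with each bit flipping with probability 0.001. The drift from the original compounds night after night, heading toward total decorrelation (a drift of about 0.5 for random bits).

```python
import random

random.seed(42)
ERROR_RATE = 0.001  # per-bit flip probability per nightly "reconstruction"


def reconstruct(state):
    # An imperfect copy: each memory bit may flip with small probability.
    return [1 - b if random.random() < ERROR_RATE else b for b in state]


def drift(a, b):
    # Fraction of the state that no longer matches the original.
    return sum(x != y for x, y in zip(a, b)) / len(a)


original = [random.randint(0, 1) for _ in range(2_000)]
state = original
checkpoints = {}
for night in range(1, 3651):  # ten years of nightly sleep
    state = reconstruct(state)
    if night in (1, 365, 3650):
        checkpoints[night] = drift(original, state)
```

After one night the drift is around 0.1%; after a year, roughly a quarter of the state has diverged; after a decade it is statistically nearly indistinguishable from an unrelated random state. That compounding is exactly why, if nightly reconstruction really happened, we would expect to see its effects at the population level.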
I don’t know, feel free to act differently than you usually do and see how you feel about it. It’s not just “narrative”. Internal state maintenance and interaction with external stimuli are consequences of emergent biological processes that amount to more than our self-storytelling capabilities.
Feel free to write this off as uninformed rambling, I am not anyone special enough for anyone to listen to me anyways lol