
Fake Frames

By suspendedreason
    2021-03-02 23:33:58.383Z

    @hazard and I had a chat on Twitter yesterday/today about fake frames, and I wanted to open it up, in part because I articulated my objections so poorly. I think getting into actual examples will help us work this out better and come to an understanding.

    Here's our Twitter transcript:

    Hazard

    was imagining a convo between past me and current me on the topic of fake-frameworks and generally being fluid with the type of knowledge you use

    i noticed a v interesting question substitution

    A: here's a dope framework for mind stuff
    B: yeah, but it doesn't have atoms
    A: we should judge models by their predictive power / utility, not ontological aesthetic
    B: well I'd way rather have everything awesome we've made with physics than what comes from your mental woo

    the last response is the substitution. "Is framework X better for understanding the mind than your materialist/scientism frame?" gets swapped with "If you had to pick a single frame that you had to use everywhere, which would it be?"

    And that's just... not the choice I face. I have the option to tap into the full power of the tradition of western STEM when writing code, and then a different, vastly more effective frame for thinking about how my mind works. @suspendedreason thoughts?

    Suspended

    Yeah I guess I'd say predictive power + simulated annealing are the two interesting arguments I've seen for fake frameworks. I think they're both reasonable in the abstract—if the fake frames > randomness, then the simulated annealing argument seems to break down to a fox > hedgehog stance, which again seems very reasonable. On the predictive front, the pragmatist in me wants to say prediction is all that matters, and the "ontological aesthetics" are just words words words + aesthetics. But I'm maybe most skeptical here that you can make real predictions in this space, predictions that are empirically grounded and avoid being plagued by cybernetic feedback from adopting the frame in the first place. (Hyperstition-genre stuff)

    Not sure the precise usage fits the circumstance, but the general belief --> reality pipeline is what I'm gesturing toward. Part of ideology, classically, is that it manages to explain away all contradictions as, in fact, more evidence of the ideology's truth

    Also I forget who talks about this, but people unconsciously wave away parts of ideology they can't defend as "not important/central parts," while picking and choosing the bits that resonate. You see this with Bible stuff all the time. This is what's important / this isn't

    Hazard

    the thing on hyperstition made me think; would you be less wary of a "fake framework" that you or i made up "on our own" as opposed to picking one off the shelf?

    i think i'm trying to get at where you think the delusional pressure comes from

    like, if i went into the woods, did a bunch of phenomenological investigation, and came out with a framework that didn't look v materialist, would you trust it more than if I was like "yo, i've been getting really into Buddhism"?

    Suspended

    Maybe that's an interesting suggestion, but I don't even have a problem with Buddhism, it's lindy

    My problem is probably more like naivete w/r/t how much phenomenological frames "confirm themselves"

    The way frames like trauma and adhd have "made themselves real"

    How much did the construction of these constructs lead to people over-patternmatching into these existing frames, vs. unveiling/allowing people to recognize? I think that's an open question.

    Hazard

    i bet surveys are going to be a big problem here. like, Buddhism has built lots of language to talk about mental events, and instructs people to do stuff, have new experiences, and then you can team up to try and point at the same things.

    but lots of things nowadays are probably left at the level of giving someone a survey/questionnaire that's lackluster even if you don't know Surveys Are Fake.

    i totally agree that i don't see nearly as much... effort? intent to precisely communicate? as i want when it comes to these things


    Link to hyperstition reading: http://xenopraxis.net/readings/carstens_hyperstition.pdf

    Some links on simulated annealing:
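    (The links aside: here's a minimal sketch of the algorithm itself, for anyone who hasn't met it. Python, purely illustrative; the bumpy objective, neighbor function, and cooling schedule are arbitrary choices of mine, not anything canonical.)

```python
import math
import random

def simulated_annealing(objective, x0, neighbor, temp0=1.0, cooling=0.995, steps=10_000):
    """Minimize `objective`, occasionally accepting *worse* moves.

    Early on (high temperature) almost any move is accepted; that's the
    injected randomness that shakes the search out of local optima. As
    the temperature cools, it settles into ordinary greedy hill-climbing.
    """
    x, fx = x0, objective(x0)
    temp = temp0
    for _ in range(steps):
        x_new = neighbor(x)
        fx_new = objective(x_new)
        # Always accept improvements; accept regressions with a
        # probability that shrinks as the temperature drops.
        if fx_new < fx or random.random() < math.exp((fx - fx_new) / temp):
            x, fx = x_new, fx_new
        temp *= cooling
    return x, fx

# Toy usage: a bumpy 1-D function with lots of local minima.
bumpy = lambda x: x**2 + 10 * math.sin(3 * x)
best, best_val = simulated_annealing(
    objective=bumpy,
    x0=random.uniform(-10, 10),
    neighbor=lambda x: x + random.gauss(0, 0.5),
)
print(best, best_val)
```

    The relevance here: the algorithm only works because it sometimes accepts moves a greedy evaluator would reject. That's the shape of the "frames as injected randomness" argument.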


    1. suspendedreason
        2021-03-02 23:34:53.826Z

        @snav and I had actually had an earlier conversation, which led to his (really great IMO) blog post "On Introspection". (This is where we got into simulated annealing, linked above)

        [4:29 PM] snav: @suspended reason different angle to sell you on astrology: you're aware of cognitive biases, yeah? (lol). let's say you're looking to scramble your preconceptions about something. generally the only way to avoid cognitive biases when doing this is through injecting randomness
        [4:30 PM] snav: so you can view astrology as a divination tool that injects actual randomness which actually permits you to avoid the biases inherent in "lol im so random watch me type" speech, which is actually not random at all
        [4:31 PM] snav: put in psychology-specific terms: astrology can help you avoid "self-blindness", by giving you a certain combination of traits that you recognize in yourself, but might have avoided using common language because "im not like that"
        [4:31 PM] snav: e.g. i can tell my roommate something about themselves using astrology that they would be very upset if i told them using straightforward psychological language
        [4:31 PM] snav: ya follow?
        [4:35 PM] suspended reason: I'm perfectly hip to simulated annealing, but "as good as random" is hardly high praise for a framework
        [4:59 PM] snav: random is hard with psychological phenomena especially
        [4:59 PM] snav: and social
        [4:59 PM] snav: that's why most astrology is either psychological or event-based
        [5:00 PM] snav: randomness is hard: a lesson from the blockchain and OS world
        [5:00 PM] snav: true randomness, at least
        [5:00 PM] snav: that's why astrology takes truly random seeds
        [5:01 PM] snav: computers actually need specialized hardware to produce true randomness
        [5:01 PM] snav: https://en.wikipedia.org/wiki/Random_number_generation#Generation_methods
        [5:02 PM] snav: legitimately hard problem
        [11:34 AM] suspended reason: Hmmm curious what other people think
        [11:35 AM] suspended reason: Still, you're literally the only person I know who thinks of astrology as "injected randomness"
        [11:35 AM] suspended reason: My encounters are consistently with [REDACTED] types, who are either bad-faith or insane
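        (Snav's point about true randomness being hard is real, for what it's worth. A quick Python illustration, my own: the random module is a seeded, fully deterministic pseudo-random generator, while os.urandom pulls from the OS entropy pool, which is fed by hardware and event noise.)

```python
import os
import random

# A pseudo-random generator is deterministic: reseed it and you replay
# exactly the same "random" stream.
random.seed(42)
run1 = [random.random() for _ in range(3)]
random.seed(42)
run2 = [random.random() for _ in range(3)]
assert run1 == run2  # same seed, same stream -- not random at all

# os.urandom draws from the OS entropy pool, which is fed by hardware
# and event noise; there is no seed to replay.
print(os.urandom(8).hex())
```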

        1. hazard
            2021-03-03 15:51:01.214Z

            There are others besides snav. Vaniver on LW has written about this here

          • In reply to suspendedreason:
            suspendedreason
              2021-03-02 23:43:21.509Z

              Last bit and I'll let others take over: Hazard had sent me a Michael Ashcroft thread on the Alexander Technique, to get us digging into object-level examples.

              I liked it as an overview of the Alexander Technique (AT); it seems potentially valuable as a cognitive tool, and I might even give it a spin. But this is the pattern of thought I see from those kids that I'm skeptical of:

              Michael Ashcroft (@m_ashcroft)
              11/ Two. – It's your early teens and the idea of 'cool' enters your mind. You want to look cool, perhaps a cool walk? How does a cool person walk?
              Your conscious mind intervenes in your walking. 'You' start 'doing' walking, where before it was natural.
              A new habit is formed.

              Ashcroft has a story about himself, which he extrapolates to others—that in his early teens he changed his gait to look cool, so now he walks the wrong way and he has to fix it. Here's another story (NYTMag):

              By around age 7, a child has already developed the specific style of walking she will have into adulthood. In fact, gait is such a distinctive biometric that the United States Department of Defense has researched technologies that identify individuals at a distance by just their walk. Changing the way your body ambulates won’t be easy. “Be very vigilant,” Rose says. “Don’t go back to your old pattern.”

              My recollection from an espionage phase is that the CIA will only try to change their agents' gait and voices through physical prostheses—because it's so damn difficult to alter, and the agents inevitably slip up into their natural gait/voice during high-stress moments, risking giving themselves away.

              Similarly, his story about posture being purely performative, and about how attention isn't inherently (just socially) linked to body tension, is plausible—I can certainly imagine that kind of training, can imagine that opticratic substitution. But at the same time, the opposite story also sounds plausible, and behaviorism has been more or less abandoned as a theory of human behavior, so who knows.

              In Ashcroft's frame, this is "unnatural" social trauma that needs to be corrected. Maybe! He says "Your system knows how to walk, sit, stand, talk, and breathe without 'you' getting involved." Maybe! His plea for "letting the body do the right thing on its own," without micromanagement, is probably a reasonable nugget of wisdom—from Sarah Perry I learned the mantra "body is driving," which I think more or less serves the same purpose, reminding yourself "you're overthinking it" and loosening your grip. I'm all for that! But the more serious claims, I guess you called them "ontological" in your fake-frames thread in the OP, about the body and what's natural/unnatural and what's conditioned vs chosen vs "just known" by the body... well.

              Kids take years to learn how to walk. Why would the body "just know" this? What does that even mean? Even if we cede that all mammals "just know" how to walk (still pure speculation), humans have their own set of complex cognitive tradeoffs to fit through the birth canal, so they're basically still fetuses until like age 3. Who knows in that developmental process how much is "natural" or "unnatural," or how much conditioning sneaks in, whether conditioning actually helps or hurts us. Hell, humans aren't even finished evolving into upright walkers, it's only been a few million years and our backs are all fucked up because they're not meant to go vertical.

              So this nice hippy ontology about how "the body just knows" and "do it the natural way"—it just seems shoddy to me. It seems like as a larger worldview, it might lead people astray. If you wanna say the fake frame is valuable because it contains nice morsels like "body is driving," I might cede that—but it's not clear why all the fake speculative stuff about human bodies and brains is necessary to get to "body is driving." (This is a different circumstance than, say, a Christian worldview helping you get to Christian morality—there, you might reasonably argue the edifice is necessary for the lesson to be taken seriously.)

              1. In reply to suspendedreason:
                hazard
                  2021-03-04 02:35:06.396Z

                  I'm going to use this comment to start looking at "does attention necessitate tension?" on its own. Seems like a fairly concrete thing and we can probs make some headway on it, then see if there's anything useful that happened to propagate back into the main discussion.

                  How would we figure this out? What are we even trying to figure out? I'll piece apart as many unique interesting claims as I can find.

                  Distinct Claim #1: "It's possible to pay attention without tensing your body, despite the fact that people normally do." That's an easier claim to test. There's also lots of ways it could be true. This claim is agnostic to the mechanism, and just says it can be changed.

                  This claim is interesting to me in its own right. If it's possible to pay attention without straining, and if it's possible to do that without... meta-straining(?), then 1) I want to be able to do that, and 2) it gives me serious questions about how people came to strain in the first place.

                  Distinct Claim #2: "The process of not straining while paying attention is a subtractive process. There's something that you are doing and you need to stop doing it. If it wasn't for this extra thing, you would have no tension."

                  Distinct Claim #3 (which sort of combines with the last one): "The extra/unnecessary thing that you do which causes tension is not something that you have always done. At some point in your life you didn't do it, you paid attention without straining or tensing, and at some point you learned to do this extra thing."

                  Distinct Claim #4 (building on the last two): "You learned to do this extra thing because of strategic representation concerns, or because schools are awful and teachers didn't believe you were working hard unless you looked like you were in pain, or something like that."

                  Also, it should be said that a sanity check hypothesis beneath all of these is claim #5: "it is in fact the case that many/most/all people hold tension when paying attention." We've been taking that as a given, but we've been wrong about wilder things before :)

                  Investigating #1 is just investigating whether there are people who can pay attention without straining or holding tension. Seems straightforward. I'd trust self-reports from people who have convinced me they have some baseline body awareness, and I'd also trust the evidence of "someone observed people and looked for signs of tension in diff conditions" (I expect tension and straining should be visible to observers). Also, doing it ourselves, seeing if we can have this experience.

                  Investigating #2 would seem to involve deciding if we trust the self-reports of people who we think are able to not hold tension, or doing it ourselves and seeing if it feels accurate.

                  Investigating #3 starts to get harder. The most reliable source would probably be people who actually spend time with young kids of various ages. If we see that youngins weren't holding tension, and then eventually they do hold tension, that seems like a dead giveaway.

                  Investigating #4, hmmmm. Exercise for the reader, I'm getting sleepy.

                  So yeah, those are some diving points, I'll follow up later with what I find looking into all of them.

                  1. In reply to suspendedreason:
                    suspendedreason
                      2021-03-04 18:25:45.487Z

                      @hazard Correct me if I'm wrong, but I take Ashcroft to be making a couple additional, albeit somewhat implicit, claims: that the strategic self-representation motivations for body tension condition you into having bad, tense posture for the rest of your life. However, your body already "just knows" how to have correct posture, and if you lose your bad conditioning, you can escape back pain and body tension and expand your awareness. (This is sorta like #3 with some extra bits.) I agree #1 is probably true, #2 is possible but problematic (for the kind of authenticity reasons about natural "knowing" vs unnatural "conditioning" previously brought up), but a lot of the big-picture claims are being abridged here.

                      Maybe you disagree but this doesn't seem like a super productive line of inquiry, like it won't actually crux us or get us to some understanding relevant to the larger subject, fake frames. To my eyes, the core problems to get into, based on our original Twitter conversation, are 1) whether some of the self-narratives that people in this Twitter sphere build can be called in some meaningful way "fake" and 2) if they are fake, whether they are still doing important work, that is, the fake stuff itself is load-bearing, and not just decoration/accompaniment.

                      I think I'm prepared to cede that a lot of people, on their own, or working through existing material, will come to self-understandings and self-narratives that, while detached "ontologically" from reality (e.g. the concepts and factual claims they rest on are falsifiable), are functionally and predictively "true"—they legitimately solve many of their hosts' problems, and it would be silly to discard them. This is the pragmatist approach to truth that I'd like to take generally, except I have reservations...

                      My reservations are that this seems like a way to add crap to the pile. A fake frame might be load-bearing for one purpose, but when taken as literal truth, a descriptive claim (e.g. we distort our "natural" gait in order to impress people) travels into other contexts and leads to wrong predictions because it's, well, "ontologically" wrong. I'm being really crude and imprecise here, and I'm struggling to talk about this coherently, but I think this means the burden is very high when arguing for a fake frame—one must show that not only is it functionally (predictively) right in one context, but also demonstrate that it's not problematic when applied to other contexts, or that the context at hand is so important to human flourishing that it's worth taking a hit to your epistemology. Because, at the end of the day, epistemology builds off itself—we want our worldview to be consistent, so whatever we hold as true and constant will, downstream, shape our assessments, engender new beliefs and claims, etc. No single belief is isolated, and a set of wrong beliefs that add up to an ideological bent can fundamentally destabilize your entire worldview. If you take it as a foundational prior that the Earth is 5,000 years old, you end up doing a lot of epistemological gymnastics to justify all the other contradictory evidence.

                      So, this is a point maybe for a theory of knowledge that's not just pragmatic/functional/predictive. I'm pretty muddled and I know @crispy is a big fan of the pragmatic frame, so hopefully he can come in and clear some of the confusion up.

                      1. In reply to suspendedreason:
                        suspendedreason
                          2021-03-07 20:06:03.129Z

                          @hazard A relevant post from Romeo Stevens here, I think:

                          To paraphrase Culadasa: awakening is a set of special insights that lead to drastically reduced suffering. This seems straightforward enough, and might lead one to question, if this is the case, why the vast landscape of teachers and practitioners making what seem to be some fairly wild claims about reality? Even if it is the case that these claims are some combination of mistaken, pedagogical in intention, reframes of more mundane points using unfortunate language etc, it would still raise the concern that these practices are, de facto, making their practitioners less connected with reality and decent epistemic standards in their mental models and communication with others. What gives?

                          Depending on where a person starts (existing linkages between beliefs and values) they may be led to come up with a variety of ideas about the 'true nature of reality' along the way as these linkages change.

                          Everything gets easier if you understand this to be an investigation of the map and not the territory. Making claims about reality based on the fact that your cartographic tools have changed is silly. In polishing the lens of our perception we see that it has a lot more scratches than we thought. And notice that we introduce new scratches on a regular basis, including in our efforts to polish it.

                          1. In reply to suspendedreason:
                            hazard
                              2021-03-07 23:09:05.458Z

                              (thought I sent this two days ago, turns out it was still a draft)

                              Aight, I was intending to go the examples route, but it seems like one of your big cruxes is at a higher level of abstraction.

                              1. whether some of the self-narratives that people in this Twitter sphere build can be called in some meaningful way "fake"

                              This is actually something I wanted to mention. I think if we clarified "fake", we'd gain a lot of ground. One way to ground "fake" is to say "that woo stuff that we've both seen". But then "fake" isn't doing any work, we could use any other label.

                              I think there's two things we could be pointing at by saying "fake" frameworks. The first is "something that doesn't have a materialist ontology, and that doesn't have equations and math".

                              The other thing we could be pointing at with "fake"; we've both heard yogis say some crazy shit. We've both heard a hippie say something that seems blatantly obviously false. Maybe they're onto something, but then they say bullshit that just can't be true.

                              I currently guess you're derefing "fake" to both of these things. Do you think so?

                              Also:

                              I think I'm prepared to cede that a lot of people, on their own, or working through existing material, will come to self-understandings and self-narratives that, while detached "ontologically" from reality (e.g. the concepts and factual claims they rest on are falsifiable)

                              BIG thoughts! Postulating at a dynamic: the "detached ontologically from reality" has a lot to do with the privileged interpretation of words in a materialist/scientism (m/s from now on) frame. Say you pick up a "fake framework", apply it, and empirically it does everything it says on the label. I think that unless you have Grade-A-Depleted-Uranium "words-are-fake" chops, you can be easily baited into interpreting your own framework through an m/s frame, and making ridiculous claims.

                              I think this happens when an m/s person Alice asks something like "okay, but like, do you really think there's this thing called prana that's flowing through your body and acts as a source of energy that you tap into through your breath?" The word "really" is doing a lot of work. Bob, who's been rigorously experimenting with breath work, takes the bait, and gets pushed into defending a weird materialist version of prana, even though he never cared about this in the first place.

                              Maybe my point is something like: a sharp user of a "fake framework" would/should/could never get baited into endorsing the m/s version of those claims. They'd stay in conversation with you until y'all were on the same page about precisely what they were claiming was true, what predictions they were making, and how you'd check the results yourself.

                              Related:

                              one must show that not only is it functionally (predictively) right in one context, but also demonstrate that it's not problematic when applied to other contexts, or that the context at hand is so important to human flourishing that it's worth taking a hit to your epistemology.

                              I want to ponder how this does and doesn't apply to "not fake frameworks". What does it look like when people apply chemistry in the wrong context? Do I take an epistemological hit when I treat electrons as real? I really don't mean this in a "hah, your stuff sucks too! science is a hypocrite!" way. I genuinely think that we'd both learn something trying to answer this.

                              Here's my proposal for fruitful directions this convo could go. First, I think it's essential that we get back to looking at concrete examples and checking the truth value. Like, if Alexander Technique/Buddhism/Coherence Therapy/Gnosticism says "when you do X, you'll see Y" we gotta test it. Not doing so feels like arguing about whether this newfangled "chemistry" thing is legit, without ever bothering to read others' results, or experiment ourselves.

                              Second, I think we need to explore the motive forces for delusion. Where does the "epistemic hit" come from? I don't think the delusion is a given. You're suspicious of how often it seems to happen. The only way to move through that seems to be to get a more gearsy model of how delusion happens.

                              1. In reply to suspendedreason:
                                suspendedreason
                                  2021-03-23 21:58:27.445Z

                                  Okay, trying to better factor what it means for a framework to be fake, I wanna approach this from a compression/indexicality angle.

                                  If it’s true that there’s a meaningful distinction between materially/ontologically robust and "real"ish frameworks, and materially/ontologically "fake" frameworks, then my bet is on the difference coming down to the first type of model being "fit" to a larger set of problems and interrogations than the latter kind. The "material and ontological" grounding is just a way of making sure that all the component parts testably relate to reality.

                                  Annie Dillard tells of an American Indian myth in which the moon goddess each month grabs a new moon disc. Every night, she shaves a little off the disc and then throws it across the sky, until nothing’s left, and then she goes off to grab a new moon disc for next month. The leftover shavings fall down to earth as silvery locust flowers.

                                  This myth (myth may be a better term than "fake frames," and we might also say that all narratives have some mythology in them) is good at bundling some "real" (in the sense of reliable, robust, testable) patterns in it—the cycles of the moon most obviously. "What do you mean it's fake? It taught me the lunar cycle." But if you used this frame to determine that there would be more of these locust flowers when the moon was waxing, and even made real consequential decisions based on it (e.g. not foraging because there'd be abundant locust flowers soon enough, or assuming that there were the same amount of locust flowers in winter as in summer b/c the moon shavings remained constant), you would err deeply.

                                  Similarly, writers talk about how good metaphors are "robust"—the similarities explicitly nodded to in a text are meaningful, but so are other implicit ones that you might think up. In this way, the metaphor really does help reveal the thing it's compared to.

                                  1. hazard
                                      2021-03-23 22:57:10.511Z

                                      If that's our criterion for a framework being fake, it's important to note that we haven't actually dealt with the issue of "are any of the things we've been calling fake frameworks actually fake?"

                                      Or, maybe more importantly, we have not established anything like "Here's this chunk of the world we understand (something related to human mental stuff probably), here are several frameworks for interacting with it, these ones are fake because [demonstration of poor fit] and these ones aren't fake because [demonstration of fit]."

                                      Unless, maybe you think we've done that but I don't?

                                      That aside, I agree that pragmatic "fit" is the criterion for judging "should I use X to understand Y?"

                                    • In reply to suspendedreason:
                                      hazard
                                        2021-03-24 23:49:34.583Z

                                        Some good thoughts today, kicked off by this dope twitter thread by @the_lagrangian

                                        In this comment @crispy describes an IFuckingLoveScience coworker who has too much faith in statistical methods.

                                        Crispy: <makes some objection to the way probability/statistics/specific->general reasoning is being used>
                                        I-Fucking-Love-Science (IFLS): ...but fundamentally that's all just statistics. We've proven the theorems we need to prove to use the tools this way.
                                        Crispy: Okay, but don't you think that our framing of the problem is ultimately what gives it applicability to the real world? Like, if we don't taxonomize two different phenomena that might be causing each other over time, but just take a high-level view of cause-and-effect that doesn't allow for such temporal processes, we might end up misunderstanding the entire system? These theorems don't provide for these possibilities, they expect to be applied in a place where certain assumptions hold that we can never really verify. [...]

                                        IFLS seems to be doing this thing where the confidence he has in the deductive aspect of stats gets smuggled into his confidence that he's made the right assumptions to begin with. If you actually have no good reason to think that your irl situation fits the assumptions of your formalism, and you still want to apply the formalism, it should be from a place of "yes, i'm actually just throwing shit at the wall, who knows what'll happen."
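                                        (A toy demonstration of exactly this failure mode; my example, not crispy's, and it assumes numpy and scipy are available. Run a bog-standard correlation test on pairs of independent random walks, i.e. data that violates the test's i.i.d. assumption, and it confidently hands back "significant" relationships far more than 5% of the time:)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hits = 0
trials = 1000
for _ in range(trials):
    # Two independent random walks: by construction, no relationship.
    x = np.cumsum(rng.standard_normal(200))
    y = np.cumsum(rng.standard_normal(200))
    # Pearson's test assumes i.i.d. observations; random walks aren't.
    _, p = stats.pearsonr(x, y)
    if p < 0.05:
        hits += 1
print(f"{hits / trials:.0%} 'significant' at p < 0.05")  # far above 5%
```

                                        The theorem behind the p-value is perfectly sound; it just isn't about the situation this data came from.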

                                        Talking about "priors" and "updating" is interesting, because the answer to the question "does your irl situation meet the requirements of the formalism?" is always "NO!", because the formalism requires doing uncomputable tasks. There are lots of optimality theorems for Bayesian stuff in general, but there aren't theorems about the optimality of approximations of Bayesian updating. I'd love to see someone work on some though! Often Bayesian stuff is justified with Dutch book theorems (which I'm not actually familiar with), with the point being "If you're not Bayes, a smart bettor can milk money out of you". Dope, so given that we will never be Bayes, what do you have to say about various finite versions of Bayes?
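                                        (For contrast, here's what the literal math looks like when it is computable: a toy discrete hypothesis space with exact likelihoods, luxuries the informal "update your priors" move never actually has. My own toy example, plain Python:)

```python
# Three exhaustive hypotheses with exact likelihoods -- luxuries that
# informal "updating" never actually has.
biases = {"fair": 0.5, "heads-heavy": 0.8, "tails-heavy": 0.2}
prior = {h: 1 / 3 for h in biases}

def update(belief, flip):
    """One literal Bayes update on a single coin flip ('H' or 'T')."""
    posterior = {}
    for h, p in belief.items():
        likelihood = biases[h] if flip == "H" else 1 - biases[h]
        posterior[h] = p * likelihood  # prior times likelihood
    total = sum(posterior.values())   # normalize
    return {h: p / total for h, p in posterior.items()}

belief = prior
for flip in "HHTHHHHT":
    belief = update(belief, flip)
print(belief)  # mass concentrates on "heads-heavy"
```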

                                        Also, to reaffirm what Lagrangian points at, there's really good useful stuff that comes from "trying to do Bayesian updating":

                                        • that plausibility is not binary (just true or false)
                                        • the plausibility of a hypothesis is not just a function of the data you have in front of you, but also other information you might have
                                        • that competing hypotheses need to be compared to each other, not evaluated in a vacuum

                                        That's good shit!

                                        A point here is something like this: given that what you are doing when you "update your priors" is not literal Bayesian math, theorems of optimality of bayesian math should not make you confident in the thing you are doing. You should only be confident in "updating your priors" via the process of doing it and seeing that it helps.

                                        To rile @suspendedreason enough to write more about this, I'll say that Bayesianism (applied to yours or others' minds) is a "fake framework" in the same way the Alexander Technique is a "fake framework". (I'm ignoring your fit definition for a sec.) They are both fake in the sense that neither of them is actually a good card-carrying materialist ontology. "Bayesianism" seems like it is, because you use the name of a nice mathy formalism, but if you had to actually specify what you meant by "doing an update", you'd end up describing a process made of mental building blocks (not materialist!). This is okay and good and fine. Because the look doesn't matter. The ontology doesn't matter*. The aesthetic doesn't matter. At least not in terms of whether or not it's true.