pton_xd 7 hours ago

I checked your post history. Posts from 2017 to 2020 have between 0 and 3 em-dashes per post, with an average of 2.

This post has 52.

Interesting!
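
(For anyone who wants to reproduce this kind of tally on their own post history, a few lines of Python are enough; the titles and text below are made-up placeholders, and it assumes the posts have already been fetched as plain-text strings.)

    # Rough per-post em-dash (U+2014) tally; posts are assumed to be
    # already fetched and held in memory as plain-text strings.
    posts = {
        "2018-example-post": "A short post with no em-dashes.",
        "2025-this-post": "A post \u2014 with \u2014 several \u2014 em-dashes.",
    }

    counts = {title: text.count("\u2014") for title, text in posts.items()}
    for title, n in counts.items():
        print(f"{title}: {n}")
    print("average:", sum(counts.values()) / len(counts))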

  • 0x6c75636964 7 hours ago

    Sharp eye, pton_xd! I’ve definitely developed a (probably excessive) fondness for em-dashes and lost track of how liberally I sprinkled them throughout the post… Hard to have fresh eyes after staring at my own words for too long.

    • jazzyjackson 5 hours ago

      How is it you end up sprinkling them? Have you keybound en dashes to type em dashes instead? Do you have an editor that auto-replaces two hyphens with an em dash?

      • 0x6c75636964 4 hours ago

        option + shift + '-' and muscle memory :)

  • jwilber 6 hours ago

    A stat no doubt brought to us by genai-automated scraping.

    FWIW, this post seems longer than most of OP's usual posts.

    I’ll also add: as a longtime user of em-dashes, the constant low-effort dismissal of any writing using an em-dash as “must be genai!” is super annoying. So much so that I’ve made an effort to stop using them in my writing.

    There’s some poetic irony in using genai to dismiss someone else’s work for perceived use of genai.

0x6c75636964 4 days ago

I wrote this after noticing how generative AI tools seem to reflect more than just our queries—they often reveal something deeper about how we think, lead, and define ourselves. It’s not meant to promote a product or service, just to provoke reflection. Curious if others have felt a similar eerie alignment—or misalignment—with these tools.

  • krackers 9 hours ago

    >It’s not meant to promote a product or service

    Not yet, anyway. But they're a wonderful tool for exploring "idea space".

satisfice 6 hours ago

I am mystified by the apparent credulity of the author of this post. When I was young I found the experience of sex so overwhelming that I was certain some great wisdom lay within it. Spoiler alert, after many years of experience: there really isn’t. It’s a fun time for a little while. That’s about it.

Now we see people relating to their GPTs as if something profound is happening, but I suspect nothing is. This activity leads nowhere.

I work with and test these things. I find them creepy and I refuse to engage with them as if they were thinking beings. They are utterly unreliable narrators of their own “thoughts.”

  • 0x6c75636964 5 hours ago

    I think there might be a misunderstanding of my post. I don’t believe any magical profundity is arising from dialogues with LLMs that extend beyond their inherent technical capabilities/limitations. What is interesting to me, however, is that these exchanges can generate new insights about myself, especially given the recursive nature of my own thinking. Useful to me (like any tool), but certainly not an oracle.

alganet 7 hours ago

The issue of self-doubt is quite interesting.

Are you trying to copy The Matrix? With some "know thyself" thing?

You know that it's a trick, right?

I can just not use AI. I don't have an inferiority complex about it. If it's better than me, it's better than me. I'm not measuring it though. Are you?

I don't spend time on philosophy to look at a mirror. I spend it to look inwards. It's quite different. AI can't do that.

Be cool, Mr. 0x6c7.

  • 0x6c75636964 7 hours ago

    Hey alganet, I appreciate your perspective. I agree that the difference between “looking in a mirror” and “gazing inward” is stark. My experiment is premised on the idea that AI can serve as a new kind of mirror—one that doesn’t replace introspection (which I do continuously, perhaps too often!) but catalyzes it by making implicit patterns—especially those hidden from my own introspective analysis—explicit through dialogic exchange. I wouldn’t claim it substitutes for direct phenomenological self-examination, but rather acts as a complementary tool—especially for those of us who find solo introspection limited by blind spots and cognitive loops.

    Regarding measuring: I’m not interested in “measuring” myself against AI as an adversary or competitor. Instead, I’m curious to see what emerges when AI functions as a partner in self-inquiry; one capable of sustaining recursive dialogue beyond what I could maintain alone.

    • metalman 6 hours ago

      "partner in self~inquiry" eh? That is impossible. The self is a solo ride.Any inner voice speaks, unbiden. Introspection by definition, rejects all externalialitys. That said, there is another practice that may be a better fit for what you are describing, and in certain cultures the ultimate expression of this is for one person to put there head on anothers shoulder, as a litteral expression of the idea of I see what you see, which is what friends do for each other, sometimes after great effort, to not just understand something together, but to understand it in the same way. Or you go the hard route, and ride the beast alone, and know, what you know. And then there is the test by fire, but even then and forever, to see a truth is one thing, to hold it is another, but to wake up some other day and have it gone and not know, is still possible, so in a way, it is best to know nothing :)

      • 0x6c75636964 4 hours ago

        That was really beautifully said. Thank you.

        I don't think I dispute anything you say. I deeply recognize the existential isolation you expressed so well. I approached this experiment from the perspective that these models were interesting and possibly useful tools in this (possibly foolish, but most definitely Sisyphean) endeavor, not as shepherds guiding me on the road to self-understanding.

    • alganet 7 hours ago

      If Sarah Connor doesn't know who the doppelganger is, would you be hurt by being shot in the foot (where you stand)?

      I don't stand on AI. That's easy for me.

      • 0x6c75636964 7 hours ago

        Striking metaphor, alganet. You’re spot on—the uncertainty of who the “doppelganger” is remains ever-present in these dialogues. How much can we (or I) trust the mirrors we hold up to ourselves, especially when those mirrors might blur or reshape the boundaries between human and machine?

        As for being “shot in the foot,” I see that as a possible cost of inquiry. Sometimes discomfort or missteps are necessary steps toward new insight. Don't get me wrong, though, I’m not spending all day waxing philosophical with language models to “find myself.” This was simply something interesting that emerged along the way.

        I’m curious, though—how do you see this dynamic unfolding?

        • krackers 6 hours ago

          I can't tell if this is part of the bit, but is it intentional that your comment itself follows the classic chatgpt-ese structure of

          <praise>

          <elaboration>

          <follow-up>

          Assuming that the comment is truly written by a human, have you spent enough time with chatgpt that its cadence has been backpropagated into your mind?

        • alganet 6 hours ago

          I think you actually stand in a "moving enemy" narrative.

          Sometimes it's a celebrity, sometimes it's a group, sometimes a concept. Spies, commies, AI, feminism. You like to feel like you're the one dealing the cards, that you are important. If you fail at that, you try to retcon it.

          I also think you're human, and you're out of "invisible enemies" to wear. I could list all of them. The fact that you're nitpicking small things is not a sign that you are close; rather, it's a sign that you are out of ideas.

          Did I profile you correctly? (rhetorical)

nice_byte 6 hours ago

watching people talk into these text boxes as if they were talking to a real person, and getting back these trite little bullet lists, invokes feelings of sadness, second-hand embarrassment, and mild disgust.

  • alganet 5 hours ago

    Calm down, Hephaestus. You are hammering the punctuation anvil too hard.