You also can’t voice memo about H&M or AT&T
Tried this too, it’s a fun thing to do to someone that doesn’t know any better…🤭…#Searchenginenation
Came here to say A&W won't send either. I was telling a friend about the ep and she immediately thought of it.
I was wondering the same thing (also an Android user) but thinking of Barnes & Noble or Crate & Barrel. I'm glad to see you tested it.
Android user here — I was just going to ask, is this also the case with other ampersanded companies? Thanks for the confirmation!
Tangerine. I’ll do anything you say. Please do some Yes Yes No. Highly informative
Loved this one! It was just a perfectly mysterious & odd lil’ thing to investigate. Brought me back to Reply All days
Tag-team this with The Roman Mars Mazda Virus - E140, from April 2019. 🎙️😉🎧
I thought about this the whole time, and REALLY wanted PJ to creatively test hypotheses the way they did in that 99% Invisible episode! That was what made that magic. Not saying you had to do that level of production, but some playful testing would have been great.
Haha yes, they are similar! That was actually my introduction to Roman Mars
PJ! Did you really forget? Think back to the Reply All episode about the podcast <99% Invisible>. It was the % sign there, so it had to be the & sign here!
https://xkcd.com/327/
This great xkcd is about the same kind of issue 😊
My first thought that wasn’t addressed in the episode was: this should then be reproducible for _anything_ else that has an ‘&’ in it. Glad to see others in the comments have tried this to verify ✅ 😎
This episode follows the formula for the “classic” Search Engine we know and love! It’s got an interesting, whimsical, head-scratcher premise that affects a large group of people. PJ consulted and interviewed multiple people, gathering research from multiple (and credible!) sources. Then, at the end, we get not only the answer to our question, but we learn more about a hefty, relevant topic and gain awareness of important information and concepts that affect all of us in 2025. Unfortunately, some of the more recent episodes have fallen flat, providing unsatisfying answers to overly broad questions (“does anyone like their job”), relying too much on external podcasts (“what’s it like to fly overweight”), or just giving us a long-winded interview with a single source (“ayahuasca”). This is a refreshing return to the magical recipe that makes Search Engine great, and I hope we see more episodes like it!
this was so good. i want to know about the guy's wife's famous friend though.....
I was sure it was going to be closer to this: https://www.techdirt.com/2024/12/03/the-curious-case-of-chatgpts-banned-names-hard-coding-blocks-to-avoid-nuisance-threats/
Just here to say I tried Ben & Jerry’s. It also won’t send.
nobody else commented on how the Search Engine title for this episode omits the ampersand?
RIP A&W, Barnes & Noble, H&M, and all the other amperbrands out there nobody is talking about
I doubt this is really related to AI. I mean yes, an AI does generate the transcript, but I doubt the problem is inside the AI. It’s most likely that when one process sends information to another packaged up in JSON or XML, the & gets interpreted as a command in that metadata while the information is being passed along.
In other words, I speculate the AI perfectly transcribed everything, including the & and everything before and after it. Then, when the transcript is passed to another process, the passed data gets broken in half and causes an exception.
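Here’s a minimal sketch of what I mean, assuming the transcript gets wrapped in XML on its way to the next process. The payload shape, the memo text, and the process boundary are all made up, just to show how a bare & can blow up the hand-off:

```python
# Hypothetical illustration: the speech recognizer gets the words right,
# but the hand-off breaks if the text is pasted into XML without escaping.
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

transcript = "Remind me to buy Johnson & Johnson shampoo"

# Naive hand-off: the raw text is dropped straight into an XML payload.
raw_payload = f"<memo><text>{transcript}</text></memo>"
try:
    ET.fromstring(raw_payload)
except ET.ParseError as err:
    # The bare '&' is read as the start of an entity reference, so parsing fails.
    print("unescaped payload rejected:", err)

# Escaping first ('&' becomes '&amp;') lets the same text pass through intact.
safe_payload = f"<memo><text>{escape(transcript)}</text></memo>"
memo = ET.fromstring(safe_payload)
print("escaped payload parsed:", memo.find("text").text)
```

If some step in the pipeline does that hand-off without escaping, an exception like this would explain why a memo mentioning Johnson & Johnson never sends while “Johnson and Johnson” goes through fine.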
I think I’ll paint my front door a bright shade, maybe Tangerine!
If you like this episode then listen to this https://99percentinvisible.org/episode/the-roman-mars-mazda-virus/
Tangerine