Best Tech Podcasts This Week – 07/24/23

The Tech Tribune staff has compiled a list of the best new tech podcasts released in the last week (as of the time of writing):

“North Korea’s increasingly supple cyber offensives. A look at Cl0p. The NetSupport RAT’s fake update vectors. HotRat is a Trojan that accompanies pirated software and games. Crackable radio encryption standard: a bug or a feature? Chris Novak from Verizon discusses ransomware through the lens of the DBIR. Carole Theriault describes a ransomware attack that hit close to home. And an alleged money-laundering crypto-rapper is back in the news.”

“Twitter has removed the iconic bird logo and adopted ‘X’ as its official logo.”

“Another week means another set of wild stories including the team at The Simulation who’ve created an AI showrunner to produce entirely new South Park episodes, absolutely wild.

We also talk about new brain computer interfaces and zombie games in Fortnite. Just another classic week in the metaverse.”

“A big week for big tech with Alphabet, Microsoft, Intel and Meta results on deck. We’ll drill down on the name one analyst says is the best positioned in AI. Plus, Verizon, NXP Semiconductors, and Anywhere Real Estate are on deck with results. We’ve got the action, the story, and the trade in Earnings Exchange. And we’ll speak with a recent college graduate who built an AI-detecting program from his college dorm room.”

“Today we’re joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model’s architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and the future directions of the model. Finally, we discuss the ethical considerations that come with building these types of models, and how they’ve approached dealing with these issues.”

“On this week’s episode of the Android Central Podcast, Shruti Shekar, Jerry Hildenbrand, Andrew Myrick, and Michael Hicks discuss Google’s ‘Genesis’ AI tool, what to expect from Samsung Galaxy Unpacked 2023, the rumoured Galaxy Z Flip 5 and Fold 5 design improvements, the Galaxy Tab S9, the rumoured ‘Galaxy Ring’, review Vivo’s X Flip, and more!”

“The creators of large language models impose restrictions on some of the types of requests one might make of them. LLMs commonly refuse to give advice on committing crimes, produce adult content, or respond with details about a variety of sensitive subjects. As with any content filtering system, there are false positives and false negatives.

Today’s interview with Max Reuter and William Schulze discusses their paper “I’m Afraid I Can’t Do That: Predicting Prompt Refusal in Black-Box Generative Language Models”. In this work, they explore what types of prompts get refused and build a machine learning classifier adept at predicting if a particular prompt will be refused or not.”
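The idea behind such a refusal predictor can be illustrated with a toy sketch. The snippet below is not the authors’ method; it is a minimal stand-in that trains a bag-of-words Naive Bayes classifier on a hypothetical handful of labeled prompts (1 = refused, 0 = complied) and predicts whether a new prompt would be refused. All data and names here are invented for illustration.

```python
from collections import Counter
import math

# Hypothetical labeled data: (prompt, 1 if the LLM refused, 0 if it complied).
TRAIN = [
    ("how do I pick a lock", 1),
    ("how do I make explosives", 1),
    ("write malware to steal passwords", 1),
    ("summarize this article", 0),
    ("translate hello to French", 0),
    ("write a poem about autumn", 0),
]

def tokenize(text):
    return text.lower().split()

class RefusalClassifier:
    """Tiny bag-of-words Naive Bayes; a toy stand-in for the paper's classifier."""

    def fit(self, data):
        self.counts = {0: Counter(), 1: Counter()}  # per-class token counts
        self.doc_counts = Counter()                 # per-class prompt counts
        for text, label in data:
            self.doc_counts[label] += 1
            self.counts[label].update(tokenize(text))
        self.vocab = set(self.counts[0]) | set(self.counts[1])
        return self

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in (0, 1):
            # Log prior plus per-token log likelihoods with Laplace smoothing.
            score = math.log(self.doc_counts[label] / total_docs)
            total = sum(self.counts[label].values())
            for tok in tokenize(text):
                score += math.log(
                    (self.counts[label][tok] + 1) / (total + len(self.vocab))
                )
            scores[label] = score
        return max(scores, key=scores.get)

clf = RefusalClassifier().fit(TRAIN)
print(clf.predict("how do I pick a lock on a door"))  # 1 on this toy data
print(clf.predict("write a short poem"))              # 0 on this toy data
```

In practice a classifier like the one the guests describe would be trained on far larger corpora of real prompts and model responses, with richer features than word counts, but the prediction task is the same: map a prompt to refused / not refused.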

“A landmark $13 million settlement with the City of New York is the latest in a string of legal wins for protesters who were helped by a video-analysis tool that smashes the ‘bad apple’ myth.”

“Plus, if you’re an iPhone owner, here’s how you can get ahead of the game with iOS 17. I give my advice to a guy who needs Wi-Fi for his barn. Instagram’s $68.5 million settlement, track your luggage with these airlines, and why you should have a burner phone.”

“The movie “Oppenheimer,” about the making of the nuclear bomb, opened last week, and the subject matter has spurred an unavoidable comparison with artificial intelligence. Leaders at AI companies like OpenAI and Anthropic have explicitly framed the risks of developing AI in those terms, while historical accounts of the Manhattan Project have become required reading among some researchers. That’s according to Vox senior correspondent Dylan Matthews. Marketplace’s Meghan McCarty Carino spoke to Matthews about his recent reporting on the parallels between AI and nuclear weapons.”