Everything posted by Ceacer

  1. WhatsApp now has more than 3 billion people using it every month, Meta CEO Mark Zuckerberg noted during the company’s Q1 results conference call on Wednesday. Founded in 2009 and acquired by Facebook for $19 billion in 2014, WhatsApp remains free to use and doesn’t serve any ads. The app reached the 2 billion monthly active user mark back in 2020, but with the latest milestone, it’s now one of the few apps to cross the 3 billion user mark, besides Facebook. That humongous user base makes WhatsApp a key business for Meta, especially now as the company has bet the farm on its AI strategy. The company has previously said that the app is one of its biggest distribution platforms for AI services. “We see people engage with Meta AI from several different entry points. WhatsApp continues to see the strongest Meta AI usage across our family of apps,” Meta’s CFO, Susan Li, said during the conference call. She also noted that most WhatsApp users engage with Meta AI in one-on-one chats. Zuckerberg said that while WhatsApp provides easy access to AI features, Meta has had to take a different approach to spur adoption of its AI products in markets like the U.S., where the majority of people still prefer to use their phones’ stock messaging apps to text each other. That’s where the company’s newly released Meta AI app comes in. “We hope to become the leader over time [in the U.S. messaging market], but we’re in a different position there than we are in most of the rest of the world on WhatsApp. So I think that the Meta AI app as a stand-alone is going to be particularly important in the United States to establish leadership in — as the main personal AI that people use. But we’re going to keep on advancing the experiences across the board in all of these different areas,” he said. The company said the chat app’s business platform, WhatsApp Business, is growing, and accounted for a large portion of the $510 million in revenue brought in by its family of apps. 
Techcrunch event: Exhibit at TechCrunch Sessions: AI. Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5.
Meta has been testing AI tools for WhatsApp Business, and Li said on Wednesday that the company is building a new AI agent management interface and dashboard that would let businesses train Meta’s AI on their information. That information could include a business’ website, WhatsApp profile, or their Instagram and Facebook page. It’s also testing letting businesses activate Meta’s AI chatbot in chats with customers.
  2. MoviePass, the startup that made its mark with its movie theater subscription service, has always been known for shaking things up, and its latest venture is no exception. The company announced on Thursday the beta launch of Mogul, a new daily fantasy entertainment platform designed specifically for the Hollywood industry. To understand what Mogul is, it’s important to first grasp the concept of daily fantasy sports. This subcategory of fantasy sports allows players to compete over short-term periods, rather than an entire season. Players assume the role of team managers, creating their own dream teams made up of real-world athletes and earning points based on how those athletes perform in actual games. Mogul adapts this idea to the film industry, letting users act as studio heads. Players are provided with a budget and “studio credits” (in-game currency) to spend on selecting actors for their leagues. Users can update their lineup of movie actors each day. They then participate in fantasy-style tournaments that last about a week, plus one-on-one competitions and solo challenges. Participants make calls on various outcomes, such as box office results, audience turnout, critic ratings, and potential award winners. As users level up, they earn digital collectibles — think signed posters and memorabilia — that help them climb the leaderboard. Mogul is built on Sui, a layer 1 blockchain and smart contract platform developed by Mysten Labs. Beta testers will receive a digital wallet to securely store their in-game virtual currency, rewards, and collectibles.
MoviePass is taking a bold leap with the introduction of Mogul, a concept that has never really been tried before. But CEO Stacy Spikes believes it’s a huge market waiting to be tapped. He said, “People can name more actors than they can probably name sports athletes. So I think there’s a really big market opportunity there.” When we first learned about Mogul, we didn’t anticipate that it would take off, at least not in the early stages. We wondered if there were many movie fans willing to compete with others over box office revenue or ratings. However, we may have underestimated its appeal. The company claims that more than 400,000 people have already signed up for the early-access waitlist. It remains to be seen whether it can maintain this level of interest leading up to the official launch, but it could become popular among niche film industry followers. During our initial conversation with Spikes, he positioned Mogul as a predictive market platform. Later, we were told that a more fitting description is a daily fantasy sports platform, though Mogul may add prediction-market functionality in the future. For now, Mogul operates exclusively with virtual currency. This distinction is important, especially considering the regulated nature of daily fantasy sports, as opposed to prediction market platforms, which currently exist in a legal gray area. Kalshi, for instance, has been in ongoing legal battles with state gambling regulators. “It’s murky what needs to be approved. There are different types of clearances, depending on the markets you want in the U.S. You have to go state by state. 
It literally is like a Chinese puzzle with stuff all over the place,” Spikes said. Mogul represents the initial phase of MoviePass’s long-term web3 strategy. The company has previously revealed its intention to provide on-chain rewards for attending movies. It’s also backed by Animoca Brands, a venture capital firm specializing in blockchain technology. Last year, MoviePass partnered with Sui to allow subscribers to make payments using USD Coin (USDC).
  3. Epic Games’ mega-popular Fortnite is promising a return to the U.S. iOS App Store next week after a surprising ruling in a years-long legal battle with Apple. The dispute between Epic and Apple began in 2020, when Apple removed Fortnite from the iOS App Store. Because Apple takes 30% of all in-app purchases, Epic had introduced support for direct payments in Fortnite to bypass Apple’s fee. The following year, Judge Yvonne Gonzalez Rogers ruled that Apple could not prevent developers like Epic from adding links for customers to buy digital goods outside of the iOS ecosystem to avoid forking over the 30% fee. But nearly four years later, on Wednesday evening, the same judge said in a ruling that Apple was in “willful violation” of the injunction that allowed developers to refer customers to payment methods that aren’t subject to Apple’s fees. “That it thought this Court would tolerate such insubordination was a gross miscalculation. As always, the coverup made it worse. For this Court, there is no second bite at the apple,” Rogers said. Even at the time of the 2021 ruling, Fortnite did not return to the iOS App Store. At the time, Epic CEO Tim Sweeney said the app would return when it could offer “in-app payment in fair competition with Apple in-app payment, passing along the savings to consumers.” But after Rogers’ unexpected ruling this week, Sweeney said that Fortnite will finally be available again for iOS users in the U.S. “Apple’s 15-30% junk fees are now just as dead here in the United States of America as they are in Europe under the Digital Markets Act. 
Unlawful here, unlawful there,” Sweeney wrote, noting that it had taken four years, four months, and seventeen days to get to this point. However, Sweeney is pushing Apple to extend the U.S. ruling worldwide before bringing Fortnite back to the App Store globally. That’s something that Apple seems unlikely to do, given it’s planning to appeal this ruling.
  4. Google’s Gemini chatbot app now lets you modify both AI-generated images and images uploaded from your phone or computer, the company said in a blog post on Wednesday. Native image editing in Gemini will start rolling out gradually today. The service will be expanded to people in most countries and get support for more than 45 languages in the coming weeks. The launch follows an AI image-editing model Google piloted in its AI Studio platform in March, which went viral for its controversial ability to remove watermarks from any image. Similar to ChatGPT’s recently upgraded image-editing tool, Gemini’s newfangled native image editor can, in theory, achieve better results than stand-alone AI image generators. Gemini now offers a “multi-step” editing flow that delivers what the company describes as “richer, more contextual” responses to each prompt with text and images integrated. You can change the background in images, replace objects, add elements, and more within Gemini. “For example, you can upload a personal photo and prompt Gemini to generate an image of what you’d look like with different hair colors,” Google explains. “[Or] you could ask Gemini to create a first draft of a bedtime story about dragons and provide images to go along with the story.” If this sounds like a deepfake risk, well, that’s reasonable. To allay fears, images created or edited with Gemini’s native image generation will include an invisible watermark, according to Google. The company is also “experimenting” with visible watermarks on all Gemini-generated images.
  5. An executive cautioned during Microsoft’s earnings call on Wednesday that customers might face AI service disruptions as demand outstrips the company’s ability to bring data centers online. Microsoft’s EVP and CFO Amy Hood said during the company’s fiscal 2025 third-quarter earnings call that the company may face AI capacity constraints as early as June. “We had hoped to be in balance by the end of Q4 but we did see some increased demand, as you saw through the quarter,” Hood said. “So we are going to be a little short, a little tight as we exit the year.” The timing of Hood’s statement is interesting because Microsoft has reportedly canceled multiple data center leases this year. In February, investment bank TD Cowen published a memo reporting that Microsoft had canceled data center leases equating to a “couple hundred megawatts,” or the equivalent of two data centers. In the two months since, there have been multiple reports of additional data center lease cancellations. Microsoft says these two instances are not necessarily related. The company reiterated today that it is still committed to investing $80 billion into data centers this year, as it originally earmarked. Half of that figure is for U.S.-based data centers. Hood also added that demand today and demand tomorrow are not the same thing. “Just a reminder, these are very long lead time decisions; from land to build out, it can be, you know, lead times of five to seven years, two to three years,” Hood said. 
“So we’re constantly in a balancing position as we watch demand curves.” Microsoft CEO Satya Nadella said at the top of the earnings call that the company opened data centers across 10 new countries and four new continents during this past quarter.
  6. A new paper from AI lab Cohere, Stanford, MIT, and Ai2 accuses LM Arena, the organization behind the popular crowdsourced AI benchmark Chatbot Arena, of helping a select group of AI companies achieve better leaderboard scores at the expense of rivals. According to the authors, LM Arena allowed some industry-leading AI companies like Meta, OpenAI, Google, and Amazon to privately test several variants of AI models, then not publish the scores of the lowest performers. This made it easier for these companies to achieve a top spot on the platform’s leaderboard, though the opportunity was not afforded to every firm, the authors say. “Only a handful of [companies] were told that this private testing was available, and the amount of private testing that some [companies] received is just so much more than others,” said Cohere’s VP of AI research and co-author of the study, Sara Hooker, in an interview with TechCrunch. “This is gamification.” Created in 2023 as an academic research project out of UC Berkeley, Chatbot Arena has become a go-to benchmark for AI companies. It works by putting answers from two different AI models side-by-side in a “battle,” and asking users to choose the best one. It’s not uncommon to see unreleased models competing in the arena under a pseudonym. Votes over time contribute to a model’s score — and, consequently, its placement on the Chatbot Arena leaderboard. While many commercial actors participate in Chatbot Arena, LM Arena has long maintained that its benchmark is an impartial and fair one. However, that’s not what the paper’s authors say they uncovered. One AI company, Meta, was able to privately test 27 model variants on Chatbot Arena between January and March leading up to the tech giant’s Llama 4 release, the authors allege. At launch, Meta only publicly revealed the score of a single model — a model that happened to rank near the top of the Chatbot Arena leaderboard. 
A chart pulled from the study (credit: Singh et al.). In an email to TechCrunch, LM Arena Co-Founder and UC Berkeley Professor Ion Stoica said that the study was full of “inaccuracies” and “questionable analysis.” “We are committed to fair, community-driven evaluations, and invite all model providers to submit more models for testing and to improve their performance on human preference,” said LM Arena in a statement provided to TechCrunch. “If a model provider chooses to submit more tests than another model provider, this does not mean the second model provider is treated unfairly.”
Supposedly favored labs
The paper’s authors started conducting their research in November 2024 after learning that some AI companies were possibly being given preferential access to Chatbot Arena. In total, they measured more than 2.8 million Chatbot Arena battles over a five-month stretch. The authors say they found evidence that LM Arena allowed certain AI companies, including Meta, OpenAI, and Google, to collect more data from Chatbot Arena by having their models appear in a higher number of model “battles.” This increased sampling rate gave these companies an unfair advantage, the authors allege. Using additional data from LM Arena could improve a model’s performance on Arena Hard, another benchmark LM Arena maintains, by 112%. However, LM Arena said in a post on X that Arena Hard performance does not directly correlate to Chatbot Arena performance. 
Hooker said it’s unclear how certain AI companies might’ve received priority access, but that it’s incumbent on LM Arena to increase its transparency regardless. In a post on X, LM Arena said that several of the claims in the paper don’t reflect reality. The organization pointed to a blog post it published earlier this week indicating that models from non-major labs appear in more Chatbot Arena battles than the study suggests. One important limitation of the study is that it relied on “self-identification” to determine which AI models were in private testing on Chatbot Arena. The authors prompted AI models several times about their company of origin, and relied on the models’ answers to classify them — a method that isn’t foolproof. However, Hooker said that when the authors reached out to LM Arena to share their preliminary findings, the organization didn’t dispute them. TechCrunch reached out to Meta, Google, OpenAI, and Amazon — all of which were mentioned in the study — for comment. None immediately responded.
LM Arena in hot water
In the paper, the authors call on LM Arena to implement a number of changes aimed at making Chatbot Arena more “fair.” For example, the authors say, LM Arena could set a clear and transparent limit on the number of private tests AI labs can conduct, and publicly disclose scores from these tests. In a post on X, LM Arena rejected these suggestions, claiming it has published information on pre-release testing since March 2024. The benchmarking organization also said it “makes no sense to show scores for pre-release models which are not publicly available,” because the AI community cannot test the models for themselves. The researchers also say LM Arena could adjust Chatbot Arena’s sampling rate to ensure that all models in the arena appear in the same number of battles. LM Arena has been receptive to this recommendation publicly, and indicated that it’ll create a new sampling algorithm. 
The paper comes weeks after Meta was caught gaming benchmarks in Chatbot Arena around the launch of its above-mentioned Llama 4 models. Meta optimized one of the Llama 4 models for “conversationality,” which helped it achieve an impressive score on Chatbot Arena’s leaderboard. But the company never released the optimized model — and the vanilla version ended up performing much worse on Chatbot Arena. At the time, LM Arena said Meta should have been more transparent in its approach to benchmarking. Earlier this month, LM Arena announced it was launching a company, with plans to raise capital from investors. The study increases scrutiny on private benchmark organizations — and whether they can be trusted to assess AI models without corporate influence clouding the process. Update on 4/30/25 at 9:35pm PT: A previous version of this story included comment from a Google DeepMind engineer who said part of Cohere’s study was inaccurate. The researcher did not dispute that Google sent 10 models to LM Arena for pre-release testing from January to March, as Cohere alleges, but simply noted the company’s open source team, which works on Gemma, only sent one.
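Leaderboards built on pairwise “battles” like Chatbot Arena’s are typically scored with an Elo-style rating system (LM Arena has used Elo and related Bradley-Terry models). The following is a minimal illustrative sketch of that general update rule; the function names and K-factor are assumptions for illustration, not LM Arena’s actual code:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under an Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_ratings(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Apply one crowdsourced 'battle' result to both models' ratings."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    # Winner gains, loser loses, scaled by how surprising the result was.
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two models enter at 1000; model A wins one user vote.
a, b = update_ratings(1000.0, 1000.0, a_won=True)
```

Because each battle shifts ratings by a bounded amount, appearing in more battles gives a model more chances to converge to a favorable score with lower variance, which is one reason the paper treats unequal sampling rates as a fairness issue.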
  7. Amazon on Wednesday released what the company claims is the most capable AI model in its Nova family, Nova Premier. Nova Premier, which can process text, images, and videos (but not audio), is available in Amazon Bedrock, the company’s AI model development platform. Amazon says that Premier excels at “complex tasks” that “require deep understanding of context, multi-step planning, and precise execution across multiple tools and data sources.” Amazon announced its Nova lineup of models in December at its annual AWS re:Invent conference. Over the last few months, the company has expanded the collection with image- and video-generating models as well as with audio understanding and agentic, task-performing releases. Nova Premier, which has a context length of 1 million tokens, meaning it can analyze around 750,000 words in one go, is weaker on certain benchmarks than flagship models from rival AI companies such as Google. On SWE-Bench Verified, a coding test, Premier is behind Google’s Gemini 2.5 Pro, and it also performs poorly on benchmarks measuring math and science knowledge, GPQA Diamond and AIME 2025. However, in bright spots for Premier, the model does well on tests for knowledge retrieval and visual understanding, SimpleQA and MMMU, according to Amazon’s internal benchmarking. In Bedrock, Premier is priced at $2.50 per 1 million tokens fed into the model and $12.50 per 1 million tokens generated by the model. That’s around the same price as Gemini 2.5 Pro, which costs $2.50 per million input tokens and $15 per million output tokens. Importantly, Premier isn’t a “reasoning” model. Unlike models such as OpenAI’s o4-mini and DeepSeek’s R1, it can’t take additional time and computing to carefully consider and fact-check its answers to questions. 
Amazon is pitching Premier as best for “teaching” smaller models via distillation — in other words, transferring its capabilities for a specific use case into a faster, more efficient package. Amazon sees AI as increasingly core to its overall growth strategy. CEO Andy Jassy recently said the company is building more than 1,000 generative AI applications and that Amazon’s AI revenue is growing at “triple-digit” year-over-year percentages and represents a “multi-billion-dollar annual revenue run rate.”
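Per-token pricing like the Bedrock rates above translates to per-request cost with simple arithmetic. A quick sketch using the quoted Nova Premier prices; the token counts in the example are invented for illustration:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# Nova Premier in Bedrock: $2.50 per 1M input tokens, $12.50 per 1M output tokens.
cost = request_cost(input_tokens=50_000, output_tokens=2_000,
                    in_price_per_m=2.50, out_price_per_m=12.50)
# $0.125 for input plus $0.025 for output, i.e. $0.15 for the request.
```

The same function reproduces the article’s Gemini 2.5 Pro comparison by swapping in $2.50 and $15 per million tokens.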
  8. Meta predicted last year that its generative AI products would rake in $2 billion to $3 billion in revenue in 2025, and between $460 billion and $1.4 trillion by 2035, according to court documents unsealed Wednesday. The documents, submitted by attorneys for book authors suing Meta for what they claim is unauthorized training of the company’s AI on their works, don’t indicate what exactly Meta considers to be a “generative AI product.” But it’s public knowledge that the tech giant makes money — and stands to make more money — from generative AI in a number of flavors. Meta has revenue-sharing agreements with certain companies that host its open Llama collection of models. The company recently launched an API for customizing and evaluating Llama models. And Meta AI, the company’s AI assistant, may eventually show ads and offer a subscription option with additional features, CEO Mark Zuckerberg said during the company’s Q1 earnings call Wednesday. The court documents also reveal Meta is spending an enormous amount on its AI product groups. In 2024, the company’s “GenAI” budget was over $900 million, and this year, it could exceed $1 billion, according to the documents. That’s not including the infrastructure needed to run and train AI models. Meta previously said it plans to spend $60 billion to $80 billion on capital expenditures in 2025, primarily on expansive new data centers. Those budgets might have been higher had they included deals to license books from the authors suing Meta. For instance, Meta discussed in 2023 spending upwards of $200 million to acquire training data for Llama, around $100 million of which would have gone toward books alone, per the documents. But the company allegedly decided to pursue other options: pirating ebooks on a massive scale. 
A Meta spokesperson sent TechCrunch the following statement: “Meta has developed transformational [open] AI models that are powering incredible innovation, productivity, and creativity for individuals and companies. Fair use of copyrighted materials is vital to this. We disagree with [the authors’] assertions, and the full record tells a different story. We will continue to vigorously defend ourselves and to protect the development of generative AI for the benefit of all.”
  9. World, the biometric ID company best known for its eyeball-scanning Orb devices, on Wednesday announced several partnerships aimed at driving sign-ups and demonstrating the applications of its tech. World is partnering with Match Group, the dating app conglomerate, to verify the identities of Tinder users in Japan using World’s identity verification system. Additionally, World has established separate collaborations with the prediction market startup Kalshi and the decentralized lending platform Morpho; these partnerships enable customers to sign in to these services using their IDs already registered with World. And World plans to team up with Visa to launch The World Card, a card that lets users spend digital assets anywhere Visa is accepted. Since its founding in 2019, World, developed by San Francisco- and Berlin-based Tools for Humanity, has raised hundreds of millions of dollars in venture capital and created digital IDs for millions of users. But it has yet to breach the mainstream, in part because of its cumbersome approach to verifying IDs. With these new partnerships, World is going after a broader audience — one that previously might not have considered having their eyeballs scanned to verify their “humanness.” The World Card is perhaps the most interesting of the new projects. Expected to become available in the U.S. later this year, it’ll connect to World’s World App and allow users to transact with cryptocurrencies. The card will automatically exchange crypto for fiat when needed, and potentially offer certain rewards for specific “AI subscriptions and services.” World had one surprise in store at its Wednesday event: a collaboration with Stripe to allow users to pay with World on Stripe-enabled websites and apps. The company didn’t say when this might go live.
  10. Microsoft on Wednesday launched several new “open” AI models, the most capable of which is competitive with OpenAI’s o3-mini on at least one benchmark. As it says on the tin, all of the new permissively licensed models — Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus — are “reasoning” models, meaning they can spend more time fact-checking solutions to complex problems. They expand Microsoft’s Phi “small model” family, which the company launched a year ago to offer a foundation for AI developers building apps at the edge. Phi 4 mini reasoning was trained on roughly 1 million synthetic math problems generated by Chinese AI startup DeepSeek’s R1 reasoning model. Around 3.8 billion parameters in size, Phi 4 mini reasoning is designed for educational applications, Microsoft says, like “embedded tutoring” on lightweight devices. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters. Phi 4 reasoning, a 14-billion-parameter model, was trained using “high-quality” web data as well as “curated demonstrations” from OpenAI’s aforementioned o3-mini. It’s best for math, science, and coding applications, according to Microsoft. As for Phi 4 reasoning plus, it’s Microsoft’s previously released Phi 4 model adapted into a reasoning model to achieve better accuracy for particular tasks. Microsoft claims Phi 4 reasoning plus approaches the performance levels of DeepSeek R1, which has significantly more parameters (671 billion). The company’s internal benchmarking also has Phi 4 reasoning plus matching o3-mini on OmniMath, a math skills test. Phi 4 mini reasoning, Phi 4 reasoning, Phi 4 reasoning plus, and their detailed technical reports, are available on the AI dev platform Hugging Face. 
“Using distillation, reinforcement learning, and high-quality data, these [new] models balance size and performance,” wrote Microsoft in a blog post. “They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.”
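The distillation Microsoft mentions generally means training a small “student” model to match a larger “teacher’s” softened output distribution. A minimal, dependency-free sketch of the classic soft-label KL loss follows; the temperature default and toy logits are illustrative assumptions, not Microsoft’s actual training recipe:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn logits into probabilities, optionally softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In a real training loop, a term like this is typically mixed with the ordinary cross-entropy loss on ground-truth labels; when the student exactly matches the teacher, the KL term goes to zero.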
  11. Google’s AdSense advertising network started supporting ads inside users’ chats with some third-party AI chatbots earlier this year, Bloomberg reported. The company is rolling out the feature following tests with AI search startups iAsk and Liner, the report said, citing anonymous sources familiar with the matter. “AdSense for Search is available for websites that want to show relevant ads in their conversational AI experiences,” Bloomberg cited a Google spokesperson as saying. The search and advertising giant is ostensibly seeking to capitalize on — and offset the potential threat of — the burgeoning trend of users turning to AI chatbots like OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity to search the web and answer common queries. Google has invested heavily in AI tools and products, with a slate of large language models and frequent updates to its Gemini AI apps and models. The company late last year started showing ads in AI Overviews, the AI-generated summaries it supplies for certain Search queries. Google did not immediately respond to a request for comment.
  12. Democratic Sen. Ron Wyden has put a hold on the Trump administration’s nomination of Sean Plankey to head the federal government’s top cybersecurity agency, citing a “multi-year cover up” of security flaws at U.S. telecommunications companies. Wyden said in remarks, seen by TechCrunch and confirmed by the senator’s spokesperson, that he will block the nomination of Plankey to serve as director of the Cybersecurity and Infrastructure Security Agency (CISA) until the agency agrees to release an unclassified report, dated 2022, that it commissioned detailing security weaknesses across the U.S. telecom network. Senate rules allow any serving senator to unilaterally and indefinitely hold up a federal nomination. As noted by Reuters, which was first to report Wyden’s hold on Plankey’s nomination, lawmakers often use nomination holds — or the threat of a hold — to demand concessions from the executive branch. Scott McConnell, a spokesperson for CISA, referred comment to the White House, which did not return TechCrunch’s request for comment. In remarks slated for Wednesday, Wyden — who serves on the Senate Intelligence Committee — said his staff members were previously permitted to read the unclassified report, but that efforts to publicly release its findings were refused. Wyden said he appealed to then-CISA Director Jen Easterly as well as then-President Joe Biden to release the report prior to the change in government. The report is a “technical document containing factual information about U.S. telecom security … as such, this report contains important factual information that the public has a right to see,” Wyden said. “CISA’s multi-year cover up of the phone companies’ negligent cybersecurity has real consequences,” said Wyden, referring to the widespread hacking of U.S. phone companies by Chinese spies known as Salt Typhoon, revealed last year.
Wyden said the hacks, which allowed the hackers to snoop on calls and text messages of senior American officials, were “the direct result of U.S. phone carriers’ failure to follow cybersecurity best practices … and federal agencies failing to hold these companies accountable.” Soon after the Salt Typhoon hacks, Wyden introduced legislation aimed at requiring phone companies to implement specific cybersecurity requirements, perform annual testing, and more. “The federal government still does not require U.S. phone companies to meet minimum cybersecurity standards,” Wyden said in his remarks Wednesday.
  13. NSO Group’s notorious spyware Pegasus was used to target 1,223 WhatsApp users in 51 different countries during a 2019 hacking campaign, according to a new court document. The document was published on Friday as part of the lawsuit that Meta-owned WhatsApp filed against NSO Group in 2019, accusing the surveillance tech maker of exploiting a vulnerability in the chat app to target hundreds of users, including more than 100 human rights activists, journalists, and “other members of civil society.” At the time, WhatsApp said around 1,400 users had been targeted. Now, an exhibit published in the court document shows exactly which countries the 1,223 identified victims were located in when they were targeted with NSO Group’s Pegasus spyware. The country breakdown is a rare insight into which NSO Group customers may be more active, and where their victims and targets are located. The countries with the most victims of this campaign are Mexico, with 456 individuals; India, with 100; Bahrain, with 82; Morocco, with 69; Pakistan, with 58; Indonesia, with 54; and Israel, with 51, according to a chart titled “Victim Country Count” that WhatsApp submitted as part of the case. There are also victims in Western countries like Spain (21 victims), the Netherlands (11), Hungary (8), France (7), and the United Kingdom (2), as well as one victim in the United States. The court document with the list of victims by country was first reported by Israeli news site CTech.
“Numerous news articles have been written over the years documenting use of Pegasus to target victims around the world,” said Runa Sandvik, a cybersecurity expert who’s been tracking victims of government spyware for years. “What’s often missing from these articles is the true scale of the targeting — the number of victims who were not notified; who did not get their devices checked; who opted not to share their story publicly. The list we see here — with 456 cases in Mexico alone, a country with documented, well-known civil society victims — speaks volumes about the true scale of the spyware problem,” Sandvik told TechCrunch. Another piece of data that shows the scale of the government spyware problem is that the hacking campaign targeting WhatsApp users occurred over a period of only two months, “between in and around April 2019 and May 2019,” as WhatsApp wrote in its original complaint. In other words, in just two months, NSO Group’s government customers targeted more than a thousand WhatsApp users. It’s important to note that the presence of a victim in a certain country does not necessarily mean that country’s government was the customer using NSO Group’s spyware against them. It’s possible that a government customer could be using Pegasus to target someone outside of its own country. As CTech noted, Syria appears on the victim list, but NSO Group cannot export its technology to Syria, a country that’s sanctioned by countries all over the world. The number of victims also gives an insight into who may be NSO Group’s highest-paying customers.
Companies like NSO Group, and predecessors like Hacking Team and FinFisher, price their surveillance products in part by the number of targets that can be concurrently infected with the spyware. Mexico, for example, was reported to have spent more than $60 million on NSO Group’s spyware, according to a 2023 New York Times article that cited Mexican officials, which could explain why there are so many Mexican targets in this list. Last year, WhatsApp scored a historic victory when the judge presiding over the lawsuit ruled that NSO Group had breached U.S. hacking laws by targeting WhatsApp users. The next step in the lawsuit is an upcoming hearing that will determine the damages that the spyware maker will have to pay to WhatsApp. Apart from this list of victims, the court case brought by WhatsApp has led to other revelations, including the fact that NSO Group disconnected 10 government customers after reports that they abused the spyware, and that the WhatsApp hacking tool produced by NSO Group cost up to $6.8 million for a one-year license, which in total netted the company “at least $31 million in revenue in 2019.” WhatsApp spokesperson Zade Alsawah declined to comment. NSO Group did not respond to a request for comment.
  14. Small and medium businesses are the latest targets for cybersecurity attacks, with one in three small businesses experiencing a data breach last year. SMBs are becoming more proactive in detecting and stopping these threats, and today a startup called Cynomi is announcing $37 million in funding to meet that demand. Insight Partners and Entrée Capital are co-leading this Series B, with previous backers Canaan, Flint Capital, and S16VC also participating. Sources close to the deal told TechCrunch that the company was valued at more than $140 million post-money. Cynomi previously raised around $23 million, including a seed round we covered in 2022. London and Tel Aviv-based Cynomi was founded by CEO David Primor, who holds a PhD and was previously CTO and head of R&D at the Israel Defense Forces, and COO Roy Azoulay, who founded and led the first startup incubator at Oxford University. Cynomi leans, at a basic level, into the trend of using AI-based agents to do complicated and high-volume work, but it’s also pushing the boundaries of what we might expect those AIs to do. Primor describes his product not as an AI agent but as a “virtual CISO” — an automated, AI-based decision-maker that can help smaller organizations understand how to run their security operations. The company is also building out the set of actions this “virtual CISO” is capable of carrying out. It can assess a network, plan a set of security policies, make remediation plans, track progress, run analytics to find vulnerabilities in a network, recommend optimizations for systems, and produce reports on the network’s status and health.
All of this is not sold directly by Cynomi to SMBs, but via third parties that SMBs typically use for network connectivity and other managed services. The gap in the market that Cynomi is trying to exploit is a very large one. Malicious hackers used to focus exclusively on more valuable, larger businesses, but these days, they have started to focus on the long tail in the market. SMBs are numerous, accounting for some 90% of all businesses globally, so tapping into them can make for lucrative pickings. SMBs face some particular challenges, however, when it comes to budget and manpower, which is where a product like Cynomi’s comes in. “A virtual CISO service can start at $10,000 to $12,000 a year,” notes Azoulay. “A human CISO would be about at least 10 to 15 times that. It’s about having the knowledge and to be a sophisticated buyer in the sense of finding that CISO. It’s also about having a CISO [be online] the full week, 52 weeks a year.” That formula, so far, has worked for the startup. Cynomi has seen its annual recurring revenue triple in the last year, Primor said, with more than 100 service providers and consultancies — including big telcos like Deutsche Telekom — reselling Cynomi’s services to thousands of SMBs. Some 80% of its customers are in the U.S., and the company will now be widening its focus to Europe and other markets. The funding will be used for R&D and business development because the startup believes there is an even bigger opportunity ahead than just virtual CISOs. “The cybersecurity consulting space is a $163 billion business, but we believe it doesn’t really have an operating system,” said Azoulay. “We believe Cynomi can be that operating system.” There are dozens of cybersecurity companies out there targeting SMBs, and a sizeable group has identified service providers as their primary sales channel.
These include the likes of Vanta, Cohere, Qualys, Coro, Bastion, Guardz, CyberSmart, Cowbell, and DataGuard. Philine Huizing, managing director at Insight Partners, said that it’s the “vCISO” hook that reeled Insight in as an investor. “We believe Cynomi is defining a new category with its vCISO platform,” she said. Meanwhile, the startup’s focus on working with managed service providers to deliver the product means it can be tailored or augmented with whatever the service providers are building or selling. That could help differentiate the service and keep it from becoming another commoditized offering. “MSPs can assess each client’s unique risks, customize strategies by industry, and efficiently manage day-to-day interactions, making them more impactful,” Huizing added.
  15. AI-generated code is no doubt changing how software is built, but it’s also introducing new security challenges. More than 50% of organizations encounter security issues with AI-produced code sometimes or frequently, according to a late 2023 survey by developer security platform Snyk. For Endor Labs, that gap proved alluring enough that it chose to change course somewhat. Endor started off helping companies secure their open source package dependencies — in fact, it raised a $70 million Series A round just two years ago to grow its developer pipeline governance service. But the startup’s co-founders, Varun Badhwar and Dimitri Stiliadis, saw growing demand elsewhere: spotting and combating vulnerabilities in the growing masses of code that engineers use AI to generate and fine-tune. Today, Endor runs a platform that, it claims, can not only review code and identify risks, but also recommend “precise” fixes and apply them automatically. The company offers a plug-in for AI-powered programming tools like Cursor and GitHub Copilot that scans code as it’s written and flags issues. The pivot could prove to be a wise choice. On Wednesday, Endor announced that it closed a $93 million Series B round led by DFJ Growth, with participation from Salesforce Ventures, Lightspeed Venture Partners, Coatue, Dell Technologies Capital, Section 32, and Citi Ventures. Badhwar, Endor’s CEO, said that the round values the company at “orders of magnitude higher” than its Series A valuation, and that the proceeds will be used to expand Endor’s platform. The Series B brings the startup’s total capital raised to $163 million. “This new round positions us to continue delivering, even in a tougher macro environment than similar companies faced five to 10 years ago,” Badhwar told TechCrunch.
“We raised now because we’re seeing strong momentum — 30x annual recurring revenue growth since our Series A in 2023 — and this lets us double down on delivering outcomes for our customers.” Several months ago, Endor launched a tool designed to help organizations spot where AI models and services integrate with their codebase, and evaluate those integrations for security flaws. The idea is to provide better oversight as AI programming tools proliferate, said Badhwar. Endor says it now protects more than 5 million applications and runs over a million scans each week for customers including OpenAI, Rubrik, Peloton, Snowflake, Egnyte, and Dropbox. “We came out of stealth in October 2022 — right as interest rates spiked — and we’ve seen strong traction ever since,” Badhwar said. Ramin Sayar, venture partner at DFJ Growth, said his firm invested because Endor found itself in the right place at the right time. “As generative AI transforms coding practices, developers are generating vast amounts of code without thorough visibility and control,” Sayar told TechCrunch. “Endor Labs is not only setting a new standard in application security — the team is creating a movement by launching their expanded platform.” Endor currently has 133 employees, concentrated in its offices in Palo Alto and Bangalore.
  16. Health insurance giant Blue Shield of California is notifying millions of people of a data breach. The company confirmed on Wednesday that it had been sharing patients’ private health information with tech and advertising giant Google since 2021. The insurer said that the data sharing stopped in January 2024, but it only learned this February that the years-long collection contained patients’ personal and sensitive health information. Blue Shield said it used Google Analytics to track how its customers used its websites, but a misconfiguration had allowed for personal and health information to be collected as well, such as the search terms that patients used on its website to find healthcare providers. The insurance giant said Google “may have used this data to conduct focused ad campaigns back to those individual members.” Blue Shield said the collected data also included insurance plan names, types, and group numbers, along with personal information such as patients’ city, zip code, gender, and family size. Details of Blue Shield-assigned member account numbers, claim service dates and service providers, patient names, and patients’ financial responsibility were also shared. Per a legally required disclosure with the U.S. government’s health department, Blue Shield of California said it is notifying 4.7 million individuals affected by the breach. The breach is thought to affect the majority of its customers; Blue Shield had 4.5 million members as of 2022. It’s not immediately clear if Blue Shield asked Google to delete the data, or if Google has complied. Mark Seelig, a spokesperson for Blue Shield, did not comment beyond the company’s statement. When reached for comment, Google spokesperson Jacel Booth told TechCrunch that “businesses, not Google, manage the data they collect and must inform users about its collection and use,” but the tech giant would not say if it would delete the collected data. 
Blue Shield is the latest healthcare company to be caught out by its use of online tracking technologies. Online trackers are small snippets of code, often provided by tech giants, designed to collect information about a customer’s browsing activity by being embedded in mobile apps and websites. Tech and social media companies are usually the sources of these trackers, as they rely on the data for advertising, which drives the majority of their revenues. Last year, U.S. health insurance giant Kaiser notified more than 13 million people that it had been sharing patients’ data with advertisers, including Google, Microsoft, and X, after embedding tracking code on its website. Several other emerging healthcare companies, including mental health startup Cerebral and alcohol recovery startups Monument and Tempest, have disclosed past breaches involving the sharing of patients’ personal and health information with advertising firms. The breach at Blue Shield of California currently stands as the largest healthcare-related data breach of 2025 so far, per the U.S. health department’s Office for Civil Rights. Updated with remarks from Google and Blue Shield.
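To make concrete how an embedded tracking snippet can leak search terms, here is a minimal, hypothetical sketch of the data flow. Real trackers run as JavaScript in the browser; this Python model only illustrates the mechanics. The URLs, the `dl` parameter name, and the `analytics_hit` function are all illustrative assumptions, not Google Analytics’ actual API.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical member search on an insurer's provider-finder page.
page_url = "https://insurer.example/find-a-doctor?q=oncologist&zip=94110"

def analytics_hit(current_url: str,
                  collector: str = "https://tracker.example/collect") -> str:
    # Tracking snippets commonly report the full URL of the current page
    # back to the analytics provider as a query parameter.
    return collector + "?" + urlencode({"dl": current_url})

hit = analytics_hit(page_url)

# The patient's search term now travels to the third party inside the hit URL.
leaked_url = parse_qs(urlparse(hit).query)["dl"][0]
```

Because the page URL carries the search query, the third party receives the sensitive term (`oncologist`) along with the rest of the address, which is exactly the kind of misconfiguration described above.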
  17. A data breach at Connecticut’s largest healthcare system Yale New Haven Health affects more than 5.5 million people, according to a legally required notice with the U.S. government’s health department. Yale New Haven said the March cyberattack allowed malicious hackers to obtain copies of patients’ personally identifiable information and some healthcare-related data. Per a notice on the healthcare system’s website, the stolen data varies by person, but can include patient names, dates of birth, postal and email addresses, phone numbers, race and ethnicity data, and Social Security numbers. The stolen data also includes information about types of patients and medical record numbers. Local media quoted the healthcare system’s spokesperson as saying that the number of affected individuals “may change.” When asked about the nature of the cyberattack by TechCrunch, Yale New Haven spokesperson Dana Marnane did not dispute that the incident was related to ransomware. “The sophistication of the attack leads us to believe that it was executed by an individual or group who has a pattern of these types of incidents,” said Marnane, declining to comment further to TechCrunch, citing an ongoing law enforcement investigation. The healthcare provider declined to say if it had any communication with the hackers, or if the hackers made a demand for payment. As of press time, no major ransomware group has publicly taken credit for the hack. It’s not uncommon for ransomware and data extortion gangs to publish a victim’s stolen files when negotiations to pay the ransom demand fail. This is the second major healthcare data breach confirmed this week, after Blue Shield of California revealed it shared health data of 4.7 million patients with Google over several years. Updated with comment and additional details related to ransomware.
  18. The cybersecurity world is full of jargon and lingo. At TechCrunch, we have been writing about cybersecurity for years, and we frequently use technical terms and expressions to describe the nature of what is happening in the world. That’s why we have created this glossary, which includes some of the most common — and not so common — words and expressions that we use in our articles, and explanations of how, and why, we use them. This is a developing compendium, and we will update it regularly. If you have any feedback or suggestions for this glossary, get in touch. Advanced persistent threat (APT) An advanced persistent threat (APT) is often categorized as a hacker, or group of hackers, which gains and maintains unauthorized access to a targeted system. The main aim of an APT intruder is to remain undetected for long periods of time, often to conduct espionage and surveillance, to steal data, or to sabotage critical systems. APTs are traditionally well-resourced hackers, with the funding to pay for their malicious campaigns and access to hacking tools typically reserved for governments. As such, many of the long-running APT groups are associated with nation states, like China, Iran, North Korea, and Russia. In recent years, we’ve seen examples of non-nation-state cybercriminal groups that are financially motivated (through theft and money laundering, for example) carrying out cyberattacks similar in persistence and capability to some traditional government-backed APT groups. (See: Hacker) Adversary-in-the-middle attack An adversary-in-the-middle (AitM) attack, traditionally known as a “man-in-the-middle” (MitM) attack, is where someone intercepts network traffic at a particular point on the network in an attempt to eavesdrop on or modify the data as it travels across the internet. This is why encrypting data makes it more difficult for malicious actors to read or understand a person’s network traffic, which could contain personal information or secrets, like passwords.
Adversary-in-the-middle attacks can be used legitimately by security researchers to help understand what data goes in and out of an app or web service, a process that can help identify security bugs and data exposures. Arbitrary code execution The ability to run commands or malicious code on an affected system, often because of a security vulnerability in the system’s software. Arbitrary code execution can be achieved either remotely or with physical access to an affected system (such as someone’s device). In cases where arbitrary code execution can be achieved over the internet, security researchers typically call this remote code execution. Often, code execution is used as a way to plant a back door for maintaining long-term and persistent access to that system, or for running malware that can be used to access deeper parts of the system or other devices on the same network. (See also: Remote code execution) Attribution Attribution is the process of finding out and identifying who is behind a cyberattack. There is an often-repeated mantra, “attribution is hard,” which serves to warn cybersecurity professionals and the wider public that definitively establishing who was behind a cyberattack is no simple task. Attribution is not impossible, but the answer also depends on the level of confidence in the assessment. Threat intelligence companies such as CrowdStrike, Kaspersky, and Mandiant, among others, have for years attributed cyberattacks and data breaches to groups or “clusters” of hackers, often referencing groups by a specific codename, based on patterns of tactics, techniques, and procedures seen in previous attacks. Some threat intelligence firms go as far as publicly linking certain groups of hackers to specific governments or their intelligence agencies when the evidence points to it.
Government agencies, however, have for years publicly accused other governments and countries of being behind cyberattacks, and have gone as far as identifying — and sometimes criminally charging — specific people working for those agencies. Backdoor A backdoor is a subjective term, but broadly refers to creating the means to gain future access to a system, device, or physical area. Backdoors can be found in software or hardware, such as a mechanism to gain access to a system (or space) in case of accidental lock-out, or for remotely providing technical support over the internet. Backdoors can have legitimate and helpful use cases, but they can also be undocumented, maliciously planted, or otherwise unknown to the user or owner, which can weaken the security of the product and make it more susceptible to hacking or compromise. TechCrunch has a deeper dive on encryption backdoors. Black/white hat Hackers historically have been categorized as either “black hat” or “white hat,” usually depending on the motivations of the hacking activity carried out. A “black hat” hacker may be someone who breaks the law and hacks for money or personal gain, such as a cybercriminal. “White hat” hackers generally hack within legal bounds, such as part of a penetration test sanctioned by the target company, or to collect bug bounties by finding flaws in various software and disclosing them to the affected vendor. Those who hack with less clear-cut motivations may be regarded as “gray hats.” Famously, the hacking group the L0pht used the term gray hat in an interview with The New York Times Magazine in 1999. While still commonly used in modern security parlance, many have moved away from the “hat” terminology. (Also see: Hacker, Hacktivist) Botnet Botnets are networks of hijacked internet-connected devices, such as webcams and home routers, that have been compromised by malware (or sometimes weak or default passwords) for the purposes of being used in cyberattacks.
Botnets can be made up of hundreds or thousands of devices and are typically controlled by a command-and-control server that sends out commands to ensnared devices. Botnets can be used for a range of malicious reasons, like using the distributed network of devices to mask and shield the internet traffic of cybercriminals, deliver malware, or harness their collective bandwidth to maliciously crash websites and online services with huge amounts of junk internet traffic. (See also: Command-and-control server; Distributed denial-of-service) Brute force A brute-force attack is a common and rudimentary method of hacking into accounts or systems by automatically trying different combinations and permutations of letters and words to guess passwords. A less sophisticated brute-force attack is one that uses a “dictionary,” meaning a list of known and common passwords, for example. A well-designed system should prevent these types of attacks by limiting the number of login attempts allowed inside a specific timeframe, a protection called rate-limiting. Bug A bug is essentially the cause of a software glitch, such as an error or a problem that causes the software to crash or behave in an unexpected way. In some cases, a bug can also be a security vulnerability. The term “bug” in computing dates back to 1947, at a time when early computers were the size of rooms and made up of heavy mechanical and moving equipment. The first known incident of a bug found in a computer was when a moth disrupted the electronics of one of these room-sized computers. (See also: Vulnerability) Command-and-control (C2) server Command-and-control servers (also known as C2 servers) are used by cybercriminals to remotely manage and control their fleets of compromised devices and launch cyberattacks, such as delivering malware over the internet and launching distributed denial-of-service attacks. (See also: Botnet; Distributed denial-of-service) Crypto This is a word that can have two meanings depending on the context.
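The rate-limiting defense against brute-force attacks can be sketched in a few lines of Python. This is a toy, in-memory example under assumed thresholds (the `RateLimiter` class, its limits, and the sliding-window approach are illustrative, not any particular product’s implementation):

```python
import time

class RateLimiter:
    """Toy in-memory rate limiter: allow at most `max_attempts`
    login attempts per `window` seconds, per username."""

    def __init__(self, max_attempts: int = 3, window: float = 60.0):
        self.max_attempts = max_attempts
        self.window = window
        self.attempts = {}  # username -> list of attempt timestamps

    def allow(self, username: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Keep only attempts that fall inside the sliding window,
        # then record this one.
        recent = [t for t in self.attempts.get(username, [])
                  if now - t < self.window]
        recent.append(now)
        self.attempts[username] = recent
        return len(recent) <= self.max_attempts

limiter = RateLimiter(max_attempts=3, window=60.0)
# Five rapid attempts in the same window: the first three pass,
# the rest are blocked until the window expires.
results = [limiter.allow("alice", now=float(i)) for i in range(5)]
```

A brute-force or dictionary attack fires attempts far faster than a human would, so even a simple window like this stops the automated guessing cold; production systems typically add back-off delays or account lockouts on top.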
Traditionally, in the context of computer science and cybersecurity, crypto is short for “cryptography,” the mathematical field of coding and decoding messages and data using encryption. Crypto has more recently also become short for cryptocurrency, such as Bitcoin, Ethereum, and the myriad blockchain-based decentralized digital currencies that have sprung up in the last fifteen years. As cryptocurrencies have grown from a niche community to a whole industry, crypto is now also used to refer to that whole industry and community. For years, the cryptography and cybersecurity communities have wrestled with the adoption of this new meaning, going as far as turning the phrases “crypto is not cryptocurrency” and “crypto means cryptography” into slogans that feature on a dedicated website and even T-shirts. Languages change over time depending on how people use words. As such, TechCrunch accepts the reality that crypto has different meanings depending on context, and where the context isn’t clear, we spell out cryptography or cryptocurrency. Cryptojacking Cryptojacking is when a device’s computational power is used, with or without the owner’s permission, to generate cryptocurrency. Developers sometimes bundle code in apps and on websites that uses the device’s processors to complete the complex mathematical calculations needed to create new cryptocurrency. The generated cryptocurrency is then deposited in virtual wallets owned by the developer. Some malicious hackers use malware to deliberately compromise large numbers of unwitting computers to generate cryptocurrency on a large and distributed scale. Dark and deep web The world wide web is the public content that flows across the pipes of the internet; much of what is online today is accessible to anyone at any time. The “deep web,” however, is the content that is kept behind paywalls and member-only spaces, or any part of the web that is not readily accessible or browsable with a search engine.
Then there is the “dark web,” which is the part of the internet that allows users to remain anonymous, but requires certain software (such as the Tor Browser) to access, depending on the part of the dark web you’re trying to reach. Anonymity benefits those who live and work in highly censored or surveilled countries, but it can also benefit criminals. There is nothing inherently criminal or nefarious about accessing the dark web; many popular websites also offer dark web versions so that users around the world can access their content. TechCrunch has a more detailed explainer on what the dark web is. Data breach When we talk about data breaches, we ultimately mean the improper removal of data from where it should have been. But the circumstances matter and can alter the terminology we use to describe a particular incident. A data breach is when protected data is confirmed to have improperly left the system where it was originally stored, usually confirmed when someone discovers the compromised data. More often than not, we’re referring to the exfiltration of data by a malicious cyberattacker, or data otherwise detected as a result of an inadvertent exposure. Depending on what is known about the incident, we may describe it in more specific terms where details are known. (See also: Data exposure; Data leak) Data exposure A data exposure (a type of data breach) is when protected data is stored on a system that has no access controls, such as because of human error or a misconfiguration. This might include cases where a system or database is connected to the internet but without a password. Just because data was exposed doesn’t mean the data was actively discovered, but it could nevertheless still be considered a data breach. Data leak A data leak (a type of data breach) is where protected data is stored on a system in a way that allowed it to escape, such as due to a previously unknown vulnerability in the system or by way of insider access (such as an employee).
A data leak can mean that data could have been exfiltrated or otherwise collected, but there may not always be the technical means, such as logs, to know for sure.

Deepfake

Deepfakes are AI-generated videos, audio clips, or images designed to look real, often with the goal of fooling people into thinking they are genuine. Deepfakes are developed with a specific type of machine learning known as deep learning, hence the name. Examples of deepfakes range from the relatively harmless, like a video of a celebrity saying something funny or outrageous, to more harmful efforts. In recent years, there have been documented cases of deepfaked political content designed to discredit politicians and influence voters, while other malicious deepfakes have relied on recordings of executives designed to trick company employees into giving up sensitive information or sending money to scammers. Deepfakes are also contributing to the proliferation of nonconsensual sexual images.

Def Con (aka DEFCON)

Def Con is one of the most important hacking conferences in the world, held annually in Las Vegas, usually during August. Launched in 1993 as a party for some hacker friends, it has become an annual gathering of almost 30,000 hackers and cybersecurity professionals, with dozens of talks, capture-the-flag hacking competitions, and themed “villages,” where attendees can learn how to hack internet-connected devices, voting systems, and even aircraft. Unlike conferences such as RSA or Black Hat, Def Con is decidedly not a business conference; the focus is much more on hacker culture. There is a vendor area, but it usually includes nonprofits like the Electronic Frontier Foundation, The Calyx Institute, and the Tor Project, as well as relatively small cybersecurity companies.
Distributed denial-of-service (DDoS)

A distributed denial-of-service, or DDoS, is a kind of cyberattack that involves flooding targets on the internet with junk web traffic in order to overload and crash the servers and cause the service, such as a website, online store, or gaming platform, to go down. DDoS attacks are launched by botnets, networks of hacked internet-connected devices (such as home routers and webcams) that can be remotely controlled by a malicious operator, usually from a command-and-control server. Botnets can be made up of hundreds or thousands of hijacked devices. While a DDoS is a form of cyberattack, these data-flooding attacks are not “hacks” in themselves, as they don’t involve the breach and exfiltration of data from their targets; they instead cause a “denial of service” event for the affected service.

(See also: Botnet; Command-and-control server)

Encryption

Encryption is the means by which information, such as files, documents, and private messages, is scrambled to make the data unreadable to anyone other than its intended owner or recipient. Encrypted data is typically scrambled using an encryption algorithm — essentially a set of mathematical formulas that determines how the data should be encrypted — along with a private key, such as a password, which can be used to unscramble (or “decrypt”) the protected data. Nearly all modern encryption algorithms in use today are open source, allowing anyone (including security professionals and cryptographers) to review and check the algorithm to make sure it’s free of faults or flaws. Some encryption algorithms are stronger than others, meaning data protected by weaker algorithms can be decrypted by harnessing large amounts of computational power. Encryption is different from encoding, which simply converts data into a different, standardized format, usually for the benefit of allowing computers to read the data.
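The difference between keyless encoding and keyed encryption can be sketched with a short Python toy (not from this glossary): the repeating-key XOR “cipher” below is deliberately insecure and stands in for a vetted algorithm like AES purely to show the role of the key, while base64 shows that encoding is reversible by anyone without a secret.

```python
import base64
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte against a repeating key.
    # Illustrative only; real systems use vetted algorithms such as AES.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"meet at noon"

# Encoding: a standardized, keyless transformation that anyone can reverse.
encoded = base64.b64encode(message)
assert base64.b64decode(encoded) == message  # no secret needed

# Encryption: unreadable without the key; applying the same key reverses it.
key = b"not-a-real-key"  # hypothetical key for illustration
ciphertext = xor_cipher(message, key)
assert ciphertext != message
assert xor_cipher(ciphertext, key) == message  # XOR twice restores the data
```

Anyone can undo the base64 step, but only a holder of the key can undo the cipher step, which is the distinction the entry above draws between encoding and encryption.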
(See also: End-to-end encryption)

End-to-end encryption (E2EE)

End-to-end encryption (or E2EE) is a security feature built into many messaging and file-sharing apps, and is widely considered one of the strongest ways of securing digital communications as they traverse the internet. E2EE scrambles the file or message on the sender’s device before it’s sent, in a way that allows only the intended recipient to decrypt its contents, making it near-impossible for anyone — including a malicious hacker, or even the app maker — to snoop on someone’s private communications. In recent years, E2EE has become the default security standard for many messaging apps, including Apple’s iMessage, Facebook Messenger, Signal, and WhatsApp. E2EE has also become a source of governmental frustration, as encryption makes it impossible for tech companies or app providers to hand over information that they themselves do not have access to.

(See also: Encryption)

Escalation of privileges

Most modern systems are protected with multiple layers of security, including the ability to set up user accounts with more restricted access to the underlying system’s configurations and settings. This prevents these users — or anyone with improper access to one of these user accounts — from tampering with the core underlying system. However, an “escalation of privileges” event involves exploiting a bug or tricking the system into granting a user more access rights than they should have. Malware can also exploit privilege-escalation bugs or flaws to gain deeper access to a device or a connected network, potentially allowing it to spread.

Espionage

When we talk about espionage, we’re generally referring to threat groups or hacking campaigns that are dedicated to spying, and are typically characterized by their stealth.
Espionage-related hacks are usually aimed at gaining and maintaining stealthy, persistent access to a target’s network to carry out passive surveillance, reconnaissance for future cyberattacks, or the long-term collection and exfiltration of data. Espionage operations are often carried out by governments and intelligence agencies, though not exclusively.

Exploit

An exploit is the way and means by which a vulnerability is abused or taken advantage of, usually in order to break into a system.

(See also: Bug; Vulnerability)

Extortion

In general terms, extortion is the act of obtaining something, usually money, through force or intimidation. Cyber extortion is no different: it typically refers to a category of cybercrime whereby attackers demand payment from victims by threatening to damage, disrupt, or expose their sensitive information. Extortion is often used in ransomware attacks, where hackers typically exfiltrate company data before demanding a ransom payment from the hacked victim. But extortion has quickly become its own category of cybercrime, with many (often younger) financially motivated hackers opting to carry out extortion-only attacks, which snub the use of encryption in favor of simple data theft.

(Also see: Ransomware)

Forensics

Forensic investigations involve analyzing the data and information contained in a computer, server, or mobile device, looking for evidence of a hack, crime, or some sort of malfeasance. Sometimes, in order to access the data, corporate or law enforcement investigators rely on specialized devices and tools, like those made by Cellebrite and Grayshift, which are designed to unlock and break the security of computers and cellphones to access the data within.

Hacker

There is no single definition of “hacker.” The term has its own rich history, culture, and meaning within the security community. Some incorrectly conflate hackers, or hacking, with wrongdoing.
By our definition and use, we broadly refer to a “hacker” as someone who is a “breaker of things,” usually by altering how something works to make it perform differently in order to meet their objectives. In practice, that can be something as simple as repairing a machine with non-official parts to make it function differently than intended, or even work better. In the cybersecurity sense, a hacker is typically someone who breaks a system or breaks the security of a system. That could be anything from an internet-connected computer system to a simple door lock. But a person’s intentions and motivations (if known) matter in our reporting, and guide how we accurately describe the person, or their activity.

There are ethical and legal differences between a hacker who works as a security researcher, who is professionally tasked with breaking into a company’s systems, with permission, to identify security weaknesses that can be fixed before a malicious individual has a chance to exploit them, and a malicious hacker who gains unauthorized access to a system and steals data without anyone’s permission. Because the term “hacker” is inherently neutral, we generally apply descriptors in our reporting to provide context about who we’re talking about. If we know that an individual works for a government and is contracted to maliciously steal data from a rival government, we’re likely to describe them as a nation-state or government hacker (or, if appropriate, an advanced persistent threat), for example. If a gang is known to use malware to steal funds from individuals’ bank accounts, we may describe them as financially motivated hackers; or, if there is evidence of criminality or illegality (such as an indictment), we may describe them simply as cybercriminals. And if we don’t know their motivations or intentions, or if a person describes themselves as such, we may simply refer to a subject neutrally as a “hacker,” where appropriate.
(Also see: Advanced persistent threat; Hacktivist; Unauthorized)

Hack-and-leak operation

Sometimes, hacking and stealing data is only the first step. In some cases, hackers then leak the stolen data to journalists, or post it online directly for anyone to see. The goal can be to embarrass the hacking victim, or to expose alleged malfeasance. The origins of modern hack-and-leak operations date back to the early and mid-2000s, when groups like el8, pHC (“Phrack High Council”), and zf0 were targeting people in the cybersecurity industry who, according to these groups, had forsaken the hacker ethos and sold out. Later examples include hackers associated with Anonymous leaking data from U.S. government contractor HBGary, and North Korean hackers leaking emails stolen from Sony as retribution for the Hollywood comedy The Interview. Some of the most recent and famous examples are the hack of the now-defunct government spyware pioneer Hacking Team in 2015, and the infamous Russian government-led hack-and-leak of Democratic National Committee emails ahead of the 2016 U.S. presidential elections. Iranian government hackers tried to emulate the 2016 playbook during the 2024 elections.

Hacktivist

A hacktivist is a particular kind of hacker who hacks for what they — and perhaps the public — perceive as a good cause, hence the portmanteau of “hacker” and “activist.” Hacktivism has been around for more than two decades, starting perhaps with groups like the Cult of the Dead Cow in the late 1990s. Since then, there have been several high-profile examples of hacktivists and hacktivist groups, such as Anonymous, LulzSec, and Phineas Fisher.

(Also see: Hacker)

Infosec

Short for “information security,” infosec is an alternative term for defensive cybersecurity focused on the protection of data and information. “Infosec” may be the preferred term for industry veterans, while the term “cybersecurity” has become more widely accepted.
In modern times, the two terms have become largely interchangeable.

Infostealers

Infostealers are malware capable of stealing information from a person’s computer or device. Infostealers, such as Redline, are often bundled into pirated software; once installed, they primarily seek out passwords and other credentials stored in the person’s browser or password manager, then surreptitiously upload them to the attacker’s systems. This lets the attacker sign in using those stolen passwords. Some infostealers are also capable of stealing session tokens from a user’s browser, which allow the attacker to sign in to a person’s online account as if they were that user, but without needing their password or multi-factor authentication code.

(See also: Malware)

Jailbreak

Jailbreaking is used in several contexts to mean the use of exploits and other hacking techniques to circumvent the security of a device, or to remove the restrictions a manufacturer puts on hardware or software. In the context of iPhones, for example, a jailbreak is a technique to remove Apple’s restrictions on installing apps outside of its “walled garden,” or to gain the ability to conduct security research on Apple devices, which is normally highly restricted. In the context of AI, jailbreaking means figuring out a way to get a chatbot to give out information that it’s not supposed to.

Kernel

The kernel, as its name suggests, is the core part of an operating system that connects and controls virtually all hardware and software. As such, the kernel has the highest level of privileges, meaning it has access to virtually any data on the device. That’s why, for example, apps such as antivirus and anti-cheat software run at the kernel level, as they require broad access to the device. Having kernel access allows these apps to monitor for malicious code.

Malware

Malware is a broad umbrella term that describes malicious software.
Malware can come in many forms and be used to exploit systems in different ways. As such, malware used for a specific purpose is often referred to by its own subcategory. For example, the type of malware used to conduct surveillance on people’s devices is called “spyware,” while malware that encrypts files and demands money from its victims is called “ransomware.”

(See also: Infostealers; Ransomware; Spyware)

Metadata

Metadata is information about something digital, rather than its contents. That can include details about the size of a file or document, who created it and when, or, in the case of digital photos, where the image was taken and information about the device that took the photo. Metadata may not identify the contents of a file, but it can be useful in determining where a document came from or who authored it. Metadata can also refer to information about an exchange, such as who made a call or sent a text message, but not the contents of the call or the message.

Multi-factor authentication

Multi-factor authentication (MFA) is the common umbrella term for when a person must provide a second piece of information, aside from a username and password, to log in to a system. MFA (also known as two-factor authentication, or 2FA) can prevent malicious hackers from reusing a person’s stolen credentials by requiring a time-sensitive code sent to, or generated from, a registered device owned by the account holder, or the use of a physical token or key.

Operational security (OPSEC)

Operational security, or OPSEC for short, is the practice of keeping information secret in various situations. Practicing OPSEC means thinking about what information you are trying to protect, from whom, and how you’re going to protect it. OPSEC is less about what tools you are using, and more about how you are using them and for what purpose.
For example, government officials discussing plans to bomb foreign countries on Signal are practicing bad OPSEC, because the app is not designed for that use case, and it runs on devices that are more vulnerable to hackers than the highly restricted systems specifically designed for military communications. On the other hand, journalists using Signal to talk to sensitive sources is generally good OPSEC, because it makes it harder for those communications to be intercepted by eavesdroppers.

(See also: Threat model)

Penetration testing

Also known as “pen testing,” this is the process by which security researchers “stress-test” the security of a product, network, or system, usually by attempting to modify the way the product typically operates. Software makers may ask for a pen test of a product, or of their internal network, to ensure that they are free from serious or critical security vulnerabilities, though a pen test does not guarantee that a product will be completely bug-free.

Phishing

Phishing is a type of cyberattack where hackers trick their targets into clicking or tapping a malicious link, or opening a malicious attachment. The term derives from “fishing,” because hackers often use “lures” to convincingly trick their targets in these types of attacks. A phishing lure could be an attachment coming from an email address that appears legitimate, or even an email spoofing the address of a person the target really knows. Sometimes, the lure is something that might appear important to the target, like a forged document sent to a journalist that appears to show corruption, or a fake conference invite sent to human rights defenders.
There is an often-cited adage by the well-known cybersecurity influencer The Grugq that encapsulates the value of phishing: “Give a man an 0day and he’ll have access for a day, teach a man to phish and he’ll have access for life.”

(Also see: Social engineering)

Ransomware

Ransomware is a type of malicious software (or malware) that prevents device owners from accessing their data, typically by encrypting the person’s files. Ransomware is usually deployed by cybercriminal gangs who demand a ransom payment — usually cryptocurrency — in return for providing the private key to decrypt the victim’s data. In some cases, ransomware gangs will steal the victim’s data before encrypting it, allowing the criminals to extort the victim further by threatening to publish the files online. Paying a ransomware gang is no guarantee that the victim will get their stolen data back, or that the gang will delete the stolen data. One of the first-ever ransomware attacks was documented in 1989, when malware was distributed via floppy disk (an early form of removable storage) to attendees of the World Health Organization’s AIDS conference. Since then, ransomware has evolved into a multibillion-dollar criminal industry as attackers refine their tactics and home in on big-name corporate victims.

(See also: Malware; Sanctions)

Remote code execution

Remote code execution refers to the ability to run commands or malicious code (such as malware) on a system over a network, often the internet, without requiring any human interaction from the target. Remote code execution attacks range in complexity, but can be highly damaging when vulnerabilities are exploited.

(See also: Arbitrary code execution)

Sanctions

Cybersecurity-related sanctions work similarly to traditional sanctions in that they make it illegal for businesses or individuals to transact with a sanctioned entity.
In the case of cyber sanctions, these entities are suspected of carrying out malicious cyber-enabled activities, such as ransomware attacks or the laundering of ransom payments made to hackers. The U.S. Treasury’s Office of Foreign Assets Control (OFAC) administers sanctions. The Treasury’s Cyber-Related Sanctions Program was established in 2015 as part of the Obama administration’s response to cyberattacks targeting U.S. government agencies and private sector U.S. entities. While a relatively new addition to the U.S. government’s bureaucratic armory against ransomware groups, sanctions are increasingly used to hamper and deter malicious state actors from conducting cyberattacks. Sanctions are often used against hackers who are out of reach of U.S. indictments or arrest warrants, such as ransomware crews based in Russia.

Sandbox

A sandbox is a part of a system that is isolated from the rest. The goal is to create a protected environment where a hacker can compromise the sandbox, but without allowing further access to the rest of the system. For example, mobile applications usually run in their own sandboxes. If hackers compromise a browser, for example, they cannot immediately compromise the operating system or another app on the same device. Security researchers also use sandboxes, in both physical and virtual environments (such as a virtual machine), to analyze malicious code without risking compromising their own computers or networks.

SIM swap

SIM swapping is a type of attack where hackers hijack and take control of a person’s phone number, often with the goal of then using the phone number to log in to the target’s sensitive accounts, such as their email address, bank account, or cryptocurrency wallet. This attack exploits the way that online accounts sometimes rely on a phone number as a fallback in the event of a lost password.
SIM swaps often rely on hackers using social engineering techniques to trick phone carrier employees (or bribing them) into handing over control of a person’s account, or on hacking into the carrier’s systems.

Social engineering

Social engineering is the art of human deception, and encompasses several techniques a hacker can use to deceive their target into doing something they normally would not do. Phishing, for example, can be classified as a type of social engineering attack, because hackers trick targets into clicking a malicious link or opening a malicious attachment, or call someone on the phone while pretending to be their employer’s IT department. Social engineering can also be used in the real world, for example, to convince building security employees to let in someone who shouldn’t be allowed to enter the building. Some call it “human hacking,” because social engineering attacks don’t necessarily have to involve technology.

(Also see: Phishing)

Spyware (commercial, government)

Spyware is a broad term, like malware, that covers a range of surveillance-monitoring software. Spyware is typically used to refer to malware made by private companies, such as NSO Group’s Pegasus, Intellexa’s Predator, and Hacking Team’s Remote Control System, among others, which the companies sell to government agencies. In more generic terms, these types of malware are like remote access tools, which allow their operators — usually government agents — to spy on and monitor their targets, giving them the ability to access a device’s camera and microphone or exfiltrate data. Spyware is also referred to as commercial or government spyware, or mercenary spyware.

(See also: Stalkerware)

Stalkerware

Stalkerware is a kind of surveillance malware (and a form of spyware) that is usually sold to ordinary consumers under the guise of child- or employee-monitoring software, but is often used for the purposes of spying on the phones of unwitting individuals, oftentimes spouses and domestic partners.
The spyware grants access to the target’s messages, location, and more. Stalkerware typically requires physical access to a target’s device, which gives the attacker the ability to install it directly on the device, often because the attacker knows the target’s passcode.

(See also: Spyware)

Threat model

What are you trying to protect? Who are you worried about that could go after you or your data? How could these attackers get to the data? The answers to these kinds of questions are what will lead you to create a threat model. In other words, threat modeling is a process that an organization or an individual goes through to design secure software, and to devise techniques for securing it. A threat model can be focused and specific, depending on the situation. A human rights activist in an authoritarian country has a different set of adversaries, and different data, to protect than a large corporation in a democratic country worried about ransomware, for example.

(See also: Operational security)

Unauthorized

When we describe “unauthorized” access, we’re referring to the accessing of a computer system by breaking any of its security features, such as a login prompt or a password, which would be considered illegal under the U.S. Computer Fraud and Abuse Act, or the CFAA. The Supreme Court clarified the CFAA in 2021, finding that accessing a system lacking any means of authorization — for example, a database with no password — is not illegal, as you cannot break a security feature that isn’t there. It’s worth noting that “unauthorized” is a broadly and often subjectively used term: companies have used it to describe everything from malicious hackers stealing someone’s password to break in, to incidents of insider access or abuse by employees.
Virtual private network (VPN)

A virtual private network, or VPN, is a networking technology that allows someone to “virtually” access a private network, such as their workplace or home, from anywhere else in the world. Many use a VPN provider to browse the web, thinking that this can help to avoid online surveillance. TechCrunch has a skeptics’ guide to VPNs that can help you decide if a VPN makes sense for you. If it does, we’ll show you how to set up your own private and encrypted VPN server that only you control. And if it doesn’t, we explore some of the privacy tools and other measures you can take to meaningfully improve your privacy online.

Vulnerability

A vulnerability (also referred to as a security flaw) is a type of bug that causes software to crash or behave in an unexpected way that affects the security of the system or its data. Sometimes, two or more vulnerabilities can be used in conjunction with each other — known as “vulnerability chaining” — to gain deeper access to a targeted system.

(See also: Bug; Exploit)

Zero-click (and one-click) attacks

Malicious attacks can sometimes be categorized and described by the amount of user interaction that malware, or a malicious hacker, needs in order to achieve successful compromise. One-click attacks refer to the target having to interact only once with the incoming lure, such as clicking on a malicious link or opening an attachment, to grant the intruder access. But zero-click attacks differ in that they can achieve compromise without the target having to click or tap anything. Zero-clicks are near-invisible to the target and are far more difficult to identify. As such, zero-click attacks are almost always delivered over the internet, and are often reserved for high-value targets for their stealthy capabilities, such as deploying spyware.
(Also see: Spyware)

Zero-day

A zero-day is a specific type of security vulnerability that has been publicly disclosed or exploited before the vendor that makes the affected hardware or software has been given time (or “zero days”) to fix the problem. As such, there may be no immediate fix or mitigation to prevent an affected system from being compromised. This can be particularly problematic for internet-connected devices.

(See also: Vulnerability)

First published on September 20, 2024.
Perhaps no one in the world has made such catastrophic tech flubs this year as U.S. Secretary of Defense Pete Hegseth. The saga started when the editor-in-chief of The Atlantic, Jeffrey Goldberg, reported that he had been mistakenly added to an unauthorized Signal group chat by U.S. National Security Advisor Michael Waltz, where numerous high-ranking government officials discussed detailed plans for attacking the Houthis in Yemen, including the times and places where such attacks would occur. To be fair, we’ve all made some embarrassing tech mistakes. But for most people, that means accidentally liking an ex’s Instagram post from five years ago — not sharing top-secret government military plans on a commercial messaging app with unauthorized recipients. This mishandling of massively sensitive information was already troublesome enough, but this week, The New York Times reported that Hegseth shared information about the attacks on Yemen in another Signal chat, which included his lawyer, his wife, and his brother, none of whom had any reason to receive such sensitive information; Hegseth’s wife doesn’t even work for the Pentagon. These security failures are particularly egregious — how do you manage to accidentally loop in a journalist on your military plans? But this is far from the first time that contemporary technology has landed global governments in tricky situations — and we’re not just talking Watergate.

Stationed in the military? Don’t use Strava

The fitness tracking/social media app Strava can be a privacy nightmare, even for your average athlete. The app allows people to share their exercise logs — often runs, hikes, or bike rides — on a public account with their friends, who can like and comment on their morning jogs in the park. But Strava accounts are public by default, meaning that if you aren’t savvy enough to check your privacy settings, you will inadvertently broadcast to the world exactly where you work out.
Strava defaults to hiding the first and last 200 meters of a run as a means of obscuring where someone lives, since people are likely to begin and end runs near their home. For anyone on the internet, it’s still risky to broadcast a 200-mile radius of where you live, but it’s even more dangerous if you’re a member of the military at a secret base, for instance. In 2018, Strava unveiled a global heat map, showing where in the world public users have logged activities. This doesn’t really matter if you’re looking at a map of New York City, but in places like Afghanistan and Iraq, few people use Strava aside from foreigners, so one can assume that hot spots of activity may occur at or around military bases.

“Okay here is where things get problematic: Via Strava, using pre-set segments we can scrape location specific user data from basically public profiles (and yes those exist w/in bases and lead us straight so social media profile of service members). https://t.co/VDNBGcKvIY” — Tobias Schneider (@tobiaschneider), January 29, 2018

To make matters worse, users could look at certain running routes on Strava to see the public profiles of the users who logged activities there. So, it would be possible for a bad actor to find a list of U.S. soldiers stationed at a certain base in Iraq, for example.

Joe Biden’s not-so-secret Venmo

Venmo is a peer-to-peer payments app, yet for some reason, it defaults to publicly sharing your transactions.
So, by simply opening my Venmo app — which synced my Facebook friends to my account at some point, probably over 10 years ago — I can see that two girls I went to high school with got dinner together last night. Good for them. The information we share on Venmo can be pretty boring and benign, but dedicated fans of reality shows like “Love Is Blind” will search for contestants’ accounts to predict who from the show is still dating (if the couple sends each other rent money, then yes, they probably live together). So, if you can find reality stars on Venmo, why not search for the president? In 2021, some BuzzFeed News reporters decided to search for Joe Biden’s Venmo. Within 10 minutes, they found his account. From Biden’s account, the reporters could easily find other members of the Biden family and his administration and map out their broader social circles. Even if a user makes their account on Venmo private, their friends list will remain public. When BuzzFeed News contacted the White House, Biden’s profile was wiped clean, but the White House didn’t provide a comment. So, yes, reporters did indeed locate the Venmo accounts of Pete Hegseth, Mike Waltz, and other government officials, too. Some things never change.

Encrypted messaging can’t protect you from cameras

You can take all of the precautions you want to protect your messages, but nothing can save you from the looming possibility of human error. Carles Puigdemont, the former president of Catalonia, led a movement in 2017 to attain independence from Spain and become its own country. But the Spanish government blocked this attempt and ousted Puigdemont from leadership. When the Spanish government issued a warrant for the arrest of Puigdemont and his allies, they fled to Belgium.
A few months later, the Spanish media attended an event in Belgium where Puigdemont was expected to speak — he sent in a video of a speech instead, but as the clip was playing, a Spanish broadcaster noticed that a former Catalan health minister, Toni Comín, was texting with his screen fully visible. The camera operator zoomed in on Comín’s phone, exposing texts from Puigdemont, where he had resigned himself to defeat in his attempts to bring about Catalan independence. Puigdemont later tweeted that he was expressing himself in a moment of doubt but that he didn’t intend to back down. No matter what steps you take to encrypt your private messages, you might want to look over your shoulder before reading sensitive information in public — especially when you’re texting with a self-exiled former president.
  20. 4chan is partly back online after a hack took the infamous image-sharing site down for nearly two weeks. The site first went down on April 14, with the person responsible for the hack apparently leaking data, including a list of moderators and “janitors” (one janitor told TechCrunch they were “confident” that the leaked data was real). 4chan’s extended disappearance led to at least one premature obituary, with journalist Ryan Broderick writing for Wired that “what began as a hub for internet culture and an anonymous way station for the internet’s anarchic true believers devolved over the years into a fan club for mass shooters, the central node of Gamergate, and the beating heart of far-right fascism around the world.” But the 4chan team responded defiantly in a post on X: “Wired says ‘4chan is dead.’ Is that so?” And on Friday, the site came back online. Shortly afterward, a post on the official 4chan blog said “a hacker using a UK IP address” was able to gain access to one of 4chan’s servers using a “bogus PDF upload,” subsequently “exfiltrating database tables and much of 4chan’s source code,” then beginning to “vandalize 4chan at which point moderators became aware and 4chan’s servers were halted, preventing further access.” The damage, the post said, was “catastrophic.” “Ultimately this problem was caused by having insufficient skilled man-hours available to update our code and infrastructure, and being starved of money for years by advertisers, payment providers, and service providers who had succumbed to external pressure campaigns,” the post said, later adding, “Advertisers and payment providers willing to work with 4chan are rare, and are quickly pressured by activists into cancelling their services.”
The breached server was subsequently replaced, the post said, although the site has new limitations — PDF uploads are “temporarily” disabled, and a board for sharing Flash animations has been left offline as the team saw “no realistic way to prevent similar exploits using .swf files.” As of Sunday afternoon, the site’s status checker showed that the boards and front page were up, while posting, images, and thumbnails were not working. “4chan is back,” the post said. “No other website can replace it, or this community. No matter how hard it is, we are not giving up.”
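The blog post doesn’t explain how the “bogus PDF upload” worked, but the failure mode it describes — an uploaded file reaching a vulnerable code path — is the kind of thing upload validation is meant to catch. A minimal sketch of a first-line server-side check (illustrative only: `accept_upload` and its rules are not from 4chan’s code, and a crafted file can still pass a magic-byte check, which is why real deployments also sandbox file processing):

```python
# Sketch of server-side upload validation that doesn't trust the file
# extension or the client-supplied Content-Type header.

PDF_MAGIC = b"%PDF-"  # every well-formed PDF starts with this header

def looks_like_pdf(data: bytes) -> bool:
    """Check the file's magic bytes instead of trusting its name."""
    return data.startswith(PDF_MAGIC)

def accept_upload(filename: str, data: bytes) -> bool:
    """Accept only files that both claim to be and look like PDFs."""
    if not filename.lower().endswith(".pdf"):
        return False
    if not looks_like_pdf(data):
        return False
    # A hardened pipeline would go further: enforce size limits, re-render
    # the file in a sandbox, and never hand it to an outdated interpreter.
    return True
```

This blocks only the crudest mismatches; the deeper fix the 4chan post alludes to is keeping the code that parses uploads patched.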
  21. Unknown hackers last month targeted leaders of the exiled Uyghur community in a campaign involving Windows spyware, researchers revealed Monday. Citizen Lab, a digital rights research group based at the University of Toronto, detailed an espionage campaign against members of the World Uyghur Congress (WUC), an organization that represents the Muslim-minority group, which has for years faced repression, discrimination, surveillance, and hacking from China’s government. Google alerted some WUC members to the hacking campaign in mid-March, prompting the members to contact journalists and Citizen Lab’s researchers, the report said. Citizen Lab investigated and found a targeted phishing email sent to members of WUC, impersonating a trusted contact who sent a Google Drive link for a password-protected compressed file containing a malicious version of a Uyghur language text editor. The researchers said the campaign wasn’t particularly sophisticated and didn’t involve zero-day exploits or mercenary spyware, but noted that “the delivery of the malware showed a high level of social engineering, revealing the attackers’ deep understanding of the target community.”
  22. Hackers working for governments were responsible for the majority of attributed zero-day exploits used in real-world cyberattacks last year, per new research from Google. Google’s report said that the number of zero-day exploits — referring to security flaws that were unknown to the software makers at the time hackers abused them — had dropped from 98 exploits in 2023 to 75 exploits in 2024. But the report noted that of the proportion of zero-days that Google could attribute — meaning identifying the hackers who were responsible for exploiting them — at least 23 zero-day exploits were linked to government-backed hackers. Among those 23 exploits, 10 zero-days were attributed to hackers working directly for governments, including five exploits linked to China and another five to North Korea. Another eight exploits were identified as having been developed by spyware makers and surveillance enablers, such as NSO Group, which typically claim to only sell to governments. Among those eight exploits made by spyware companies, Google is also counting bugs that were recently exploited by Serbian authorities using Cellebrite phone-unlocking devices.

[Chart: the zero-day exploits that were attributed in 2024. Image credits: Google]

Even though there were eight recorded cases of zero-days developed by spyware makers, Clément Lecigne, a security engineer at Google Threat Intelligence Group (GTIG), told TechCrunch that those companies “are investing more resources in operational security to prevent their capabilities being exposed and to not end up in the news.” Google added that surveillance vendors continue to proliferate. “In instances where law enforcement action or public disclosure has pushed vendors out of business, we’ve seen new vendors arise to provide similar services,” James Sadowski, a principal analyst at GTIG, told TechCrunch.
“As long as government customers continue to request and pay for these services, the industry will continue to grow.” The remaining 11 attributed zero-days were likely exploited by cybercriminals, such as ransomware operators targeting enterprise devices, including VPNs and routers. The report also found that the majority of the total 75 zero-days exploited during 2024 were targeting consumer platforms and products, like phones and browsers, while the rest exploited devices typically found on corporate networks. The good news, according to Google’s report, is that software makers defending against zero-day attacks are increasingly making it more difficult for exploit makers to find bugs. “We are seeing notable decreases in zero-day exploitation of some historically popular targets such as browsers and mobile operating systems,” per the report. Sadowski specifically pointed to Lockdown Mode, a special feature for iOS and macOS that disables certain functionality with the goal of hardening cell phones and computers, which has a proven track record of stopping government hackers, as well as Memory Tagging Extension (MTE), a security feature of modern Google Pixel chipsets that helps detect certain types of bugs and improve device security.
Reports like Google’s are valuable because they give the industry, and observers, data points that contribute to our understanding of how government hackers operate — even if an inherent challenge with counting zero-days is that, by nature, some of them go undetected, and of those that are detected, some still go without attribution.
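The report’s headline figures can be sanity-checked with a little arithmetic. A sketch tallying only the numbers cited above (variable names are mine, and the report breaks the categories down further than this — the 23 government-linked exploits include subcategories beyond the 10 direct-government and 8 spyware-vendor cases):

```python
# Tally of 2024 zero-day attributions, using only the figures cited above
# from Google's GTIG report.
total_exploited = 75     # zero-days exploited in the wild in 2024 (down from 98 in 2023)
government_backed = 23   # attributed to government-backed hackers overall
direct_government = 10   # hackers working directly for governments (5 China, 5 North Korea)
spyware_vendors = 8      # developed by commercial spyware makers and surveillance enablers
cybercriminals = 11      # likely cybercriminals, e.g. ransomware operators

# The direct-government and spyware counts are subsets of the 23, not additions to it.
assert direct_government + spyware_vendors <= government_backed

attributed = government_backed + cybercriminals
unattributed = total_exploited - attributed
print(f"attributed: {attributed} ({attributed / total_exploited:.0%})")   # attributed: 34 (45%)
print(f"unattributed: {unattributed}")                                    # unattributed: 41
```

Which makes the caveat above concrete: less than half of 2024’s in-the-wild zero-days were pinned on anyone at all.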
  23. U.K. retail conglomerate The Co-operative Group said it has shut down some of its IT systems, citing an attempted cyberattack. Co-op spokesperson Mark Carrington said the company “recently experienced attempts” by hackers to break into some of its systems and took “proactive steps” to keep those systems safe. The spokesperson said the company’s back office and call center functions are facing some disruption as a result. It’s not clear if the attempted intrusions were successful. The Co-op — one of the largest food retailers in the U.K. with more than 5 million members — said its stores were operating normally and that it was not asking customers “to do anything differently” at this time. When asked by TechCrunch, the Co-op would not describe the specific nature of the incident, such as whether ransomware was involved or whether the cause is yet known, nor would it say if it has disclosed the incident to the U.K.’s data protection regulator, the Information Commissioner’s Office, as is required in the event of a suspected data breach. The company confirmed it is working with the National Cyber Security Centre. The Co-op’s spokesperson also would not say if the company had any communication with the threat actors, such as a ransomware gang. News of the disruption at the Co-op comes days after U.K. retailer Marks & Spencer confirmed a cyberattack that left customers unable to pick up their orders. The retailer said it notified the U.K. data regulator of the incident, indicating a possible data breach. The ongoing disruption at Marks & Spencer has since entered its second week.
  24. I will design an AdSense-approved niche website View File Hey, welcome! If you’re looking for Google AdSense approval under the new AdSense policy, you’re in the right place to get your domain or blog approved. I have 5 years of experience getting AdSense approval on tons of websites. I prefer delivering quality service to my clients, and client satisfaction is the most important thing to me. SERVICES: fixing “Under Construction” and “Low Value Content” rejections; getting your site ready for AdSense approval; full support until your AdSense is approved. Note: 100% guaranteed approval on your website. Google AdSense always takes time to approve, and sometimes we apply multiple times for Google AdSense approval. Submitter ceacer Submitted 05/03/2025 Category Serve
  25. Version Google Adsense

    0 downloads

    $50