It’s clear that AI is here to stay, with many of us (myself included) using AI to help us increase the speed and thoroughness of content creation. Leaving reviews on the products and services we use is one area where AI could be applied for this purpose.
Part of my Product R&D team here at G2 leads review moderation, verifying every single user review to ensure the trust and authenticity of our marketplace. So while we recognize that AI can make significant, positive contributions to the world of reviews, inauthentic reviews of any kind are not accepted on G2.
Prioritizing authentic feedback: review moderation at G2
At the heart of it all, we know that users trust the experiences of others, which is why we prioritize the quality of reviews over quantity. Buyers have come to trust G2 for researching and evaluating software because we do everything we can to guarantee authenticity, such as:
- Verifying the identity of every user on G2
- Leveraging over 43 data points to assess authenticity
- Vetting every review against consistent, strict guidelines
G2 takes an unbiased approach to moderating reviews. Our aim is to ensure that every review is the honest opinion of a real user about their own experience with a product or service. We also actively monitor for patterns that indicate fraudulent behavior. When such instances occur, we perform a thorough audit, take down the reviews in question, and adjust our product or operating procedures to prevent recurrence.
G2’s stance on generative AI and software reviews
As we continue to embrace AI to drive more innovation, while ensuring authenticity and quality, G2 now permits AI-assisted and generated reviews—provided they meet verification standards for reviews collected via Vendor Verified, LinkedIn Verified, or in-app submissions.
Ultimately, only reviewers who meet one of these three verification standards can use generative AI to produce reviews. This marks an update to how we think about AI in reviews, reflecting how reviewers naturally share feedback today—often using AI to refine and express their thoughts.
Determining how a user is using AI is incredibly difficult. Because of that, we are opting to focus on validating authentic users and real usage, and to invest in adopting AI tools directly so we know how they are used. Take, for example, G2’s recent acquisition of unSurvey to bring conversational AI to G2 reviews with voice review collection and AI transcript-generated reviews. In all of these instances, users can expect that G2 will remain committed to being a platform of real content.
Here’s a bit more context on how we think about AI-assisted versus AI-generated reviews:
AI-assisted: These are reviews in which a real user provides the firsthand draft of their review (their experience, insights, ratings, etc.) and then uses AI tools only to polish or clarify their own words. AI-assisted reviews are always allowed on G2 when they are authentic and follow our guidelines. Typical AI-assisted tasks might include:
- Grammar, spelling, or punctuation fixes
- Rephrasing sentences for clarity or flow
- Translating from another language
- Suggesting synonyms or tightening up the content structure
Key characteristics include:
- Human-first: The core thoughts, opinions, and examples come directly from the reviewer.
- AI as helper: The AI never invents new experiences or details; it merely refines what the user has already written.
AI-generated: These are reviews where AI is used to produce the bulk of the written content on behalf of the reviewer, effectively drafting full paragraphs or complete narratives based on prompts. These reviews are approved only when they authentically reflect the user’s voice and meet one of the three verification standards noted above: Vendor Verified, LinkedIn Verified, or in-app submission. In these instances, G2 knows the user is authentic and trusts them to use AI to highlight their unique experience.
The key characteristic of these reviews is that they are ‘AI-first,’ meaning the AI writes most or all of the review text.
How G2 review moderation detects AI-generated content
Our team has a rigorous vetting and auditing process for all new review submissions. Because of the massive number of reviews we collect on any given day, we must use a thoughtful blend of best-in-class technology and well-trained team members with a discerning eye.
G2 uses leading third-party tools and services to detect AI-generated content in review submissions. Given our position as an unbiased marketplace, we won’t tip our hand on the specific technologies leveraged.
The tools at G2’s disposal are enterprise-grade content detection solutions that scan and flag AI-generated content and behavior. After flagging this content, our in-house team of moderators reviews these instances along with other factors to validate.
While AI content detectors and our machine learning model help flag fake reviews at scale, we also layer in the specialized expertise of our moderators to audit reviews that require closer evaluation. Our team is skilled at identifying patterns commonly associated with fraudulent activity and creating effective fraud prevention practices.
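Abstractly, a flag-then-review workflow like the one described above can be sketched as a simple triage step: an automated detector scores each submission, and anything above a threshold (or otherwise lacking verification) is routed to a human moderator rather than auto-approved. The field names, threshold, and logic below are illustrative assumptions only, not G2’s actual implementation:

```python
from dataclasses import dataclass

# Hypothetical cutoff -- purely illustrative, not a real G2 parameter.
AI_SCORE_THRESHOLD = 0.8

@dataclass
class Review:
    review_id: str
    text: str
    ai_likelihood: float       # assumed score from an AI-content detector (0.0-1.0)
    verified_submission: bool  # e.g. Vendor Verified, LinkedIn Verified, or in-app

def triage(reviews):
    """Split reviews into auto-approved and flagged-for-human-review queues."""
    approved, needs_moderator = [], []
    for r in reviews:
        # High AI-likelihood scores or unverified submissions go to a human
        # moderator; everything else passes the automated stage.
        if r.ai_likelihood >= AI_SCORE_THRESHOLD or not r.verified_submission:
            needs_moderator.append(r)
        else:
            approved.append(r)
    return approved, needs_moderator
```

The key design point this sketch captures is that automation only narrows the funnel; the final accept/reject decision on flagged content stays with human moderators.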
Evolving our approach with industry trends and standards in mind
To be the most trusted data source in the age of AI for informing software buying decisions and go-to-market strategies, we are committed to evolving our review moderation approach. This means keeping up to date on market trends, communicating with communities that help us stay informed on how other organizations deter fake reviews, and more.
One major way the fight against fake reviews and testimonials has progressed is with a recent ruling by the Federal Trade Commission (FTC). On August 14, 2024, the Federal Trade Commission announced a final rule to combat fake reviews and testimonials by prohibiting their sale or purchase and allowing the agency to seek civil penalties against knowing violators.
Within this final rule is an announcement regarding fake or false customer reviews, customer testimonials, and celebrity testimonials. The rule states:
The final rule addresses reviews and testimonials that misrepresent that they are by someone who does not exist, such as AI-generated fake reviews, or who did not have actual experience with the business or its products or services, or that misrepresent the experience of the person giving it. It prohibits businesses from creating or selling such reviews or testimonials. It also prohibits them from buying such reviews, procuring them from company insiders, or disseminating such testimonials, when the business knew or should have known that the reviews or testimonials were fake or false.
Regarding this change, FTC Chair Lina M. Khan shared, “Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors. By strengthening the FTC’s toolkit to fight deceptive advertising, the final rule will protect Americans from getting cheated, put businesses that unlawfully game the system on notice, and promote markets that are fair, honest, and competitive.”
In addition to the update regarding fake reviews, the rule also announced updates regarding buying positive or negative reviews, company-controlled review sites, insider reviews and customer testimonials, review suppression, and the misuse of fake social media indicators.
Reviews you can continue to trust in the age of AI
At G2, we continue to embrace AI by finding ways to work smarter and bringing new innovations to the market for software buyers and sellers. Through this continued age of innovation and evolution, we remain staunchly committed to blocking fake or fraudulent reviews.
AI is here to help genuine users share their real software experiences more easily, not to bypass our verification standards. Because we believe in the power of authentic reviews, you can always count on G2 to champion the voice of the customer.
Check out G2’s community guidelines and review validity approach to learn how we keep fake reviews off our platform.