Juul survives a blow from the FDA – for now

Can you currently buy a Juul e-cigarette? That depends on what day of the week it is.

Earlier this week, the FDA denied marketing authorization to Juul, which first began selling its e-cigarettes in 2015 (though it has been operating under different company names since 2007). The reason for the refusal, Ars Technica reports, was that Juul “failed to provide sufficient toxicological data to show that the products were safe,” and as a result the agency was unable to complete its toxicology assessment. The FDA specifically pointed to “potentially harmful chemicals that leach from the company’s own e-liquid pods” as a concern.

However, Juul pushed back and secured a temporary win. In a filing with the U.S. Court of Appeals for the DC Circuit, Juul called the FDA ban “arbitrary and capricious” and suggested the agency had succumbed to pressure from Congress. The federal appeals court then decided to block the FDA’s order until it can hear more arguments on the matter.

The FDA’s denial and subsequent stay are just the latest in a years-long battle between regulators and Juul. In 2018, the FDA launched an investigation into the sale of Juul products to underage consumers, requested marketing materials from the company, and required it to file a plan to thwart sales to teens. The following year, the FDA sent Juul a warning letter over its claims that vaping was less harmful than traditional cigarettes. At one point, fruit-flavored e-cigarettes were banned in the US.

The latest ban, if it ever comes into effect, would apply to the Juul device itself, a sleek vape pen, and to four specific liquid cartridges, all of which have tobacco or menthol flavors — mimicking the flavors of traditional cigarettes. The FDA’s refusal came just days after the agency said it would also limit the amount of nicotine allowed in real cigarettes sold in the US.

Here’s some more news.

Instagram’s Age Crackdown

On Thursday, Instagram announced that it will introduce new tools to verify users’ ages on the platform. When users change their date of birth to make themselves older or younger than 18, Instagram will now require them to verify the change. This means either uploading an ID, getting mutual friends to vouch for them, or uploading a video selfie. The latter option is offered through a partnership with the digital identity company Yoti, which scans the video selfie with its facial recognition technology to estimate the person’s age.

Instagram says the goal is to tailor the app differently for teens and adults, making sure the two groups get distinct experiences. Despite those declared noble intentions, the move still makes privacy and AI experts nervous. After all, Instagram’s parent company, Meta, has a long history of data sharing and privacy lapses.

For now, Instagram is only testing the age verification requirement with users in the US.

Microsoft scraps controversial emotion-detecting AI

On Tuesday, The New York Times reported that Microsoft will remove features from its Azure cloud computing platform that use facial recognition software to track people’s physical features — and even emotions — in images. It has been a controversial feature, criticized for its potential to be both biased and inaccurate.

Microsoft is no stranger to questionable ethical situations. In 2018, it came under fire for using the Azure platform to partner with ICE, US Immigration and Customs Enforcement. But now Microsoft seems eager to stay ahead of the curve. The move to rein in Azure came as part of Microsoft’s recently released Responsible AI Standard, a document the company believes will guide how it uses AI in its products.
