Parents in France do not have to get their children vaccinated, but GPs and paediatricians will strongly recommend it — and, crucially, those children will not be allowed to go to school, or attend daycare during the holidays, or do extracurricular activities.
Which vaccinations are we talking about?
Article L.3111-2-I of the Public Health Code states that every child born on or after January 1st, 2018, must receive – unless there is a recognised medical reason that they cannot – vaccines against the following diseases:
diphtheria;
tetanus;
polio;
whooping cough;
the bacterium Haemophilus influenzae type b;
hepatitis B;
pneumococcus;
ACWY meningococci;
type B meningococci;
measles;
mumps;
and rubella.
France’s vaccination calendar. Graphic: ameli.fr
Before a child starts school in France, parents have to provide proof of vaccination, via their child’s GP-updated carnet de santé (personal child health record).
Any child who has not received the full course of those vaccinations by the time they start full-time education (now compulsory from the age of three) cannot be fully enrolled in school.
Instead, they will be provisionally enrolled and the parents will be given three months to get their children up to date with their inoculations.
Since April 2023, public health officials have also recommended that infants aged six weeks to six months be vaccinated against Rotavirus, and that children with underlying health conditions also receive an annual vaccination against influenza, if they are aged between two and 17.
Meanwhile, the human papillomavirus (HPV) vaccine is increasingly recommended — but not required — for children aged 11 to 14. The vaccine prevents up to 90 percent of HPV infections, which cause various cancers.
Furthermore, for adolescents aged 11 to 14, ACWY vaccination is now recommended, regardless of their prior vaccination status. It is also recommended for individuals aged 15 to 24 as part of catch-up vaccination programmes.
Vaccination against meningococcus B can be offered to people aged 15 to 24.
And schools routinely organise vaccination drives to ensure that all pupils are up to date with their jabs.
But what if I don’t want my child to be vaccinated at all?
Your GP will push to change your mind — they are legally obliged to inform parents of the benefits of vaccination and of the risks of refusal, on both health and social grounds.
If you really don’t want your child to have the required vaccinations, you have the right to refuse.
But you also have the responsibility of understanding that they will not be allowed to attend state or private schools, which means you will have to home school.
They will also not be allowed to attend holiday daycare centres, such as MJCs, which also demand confirmation of vaccinations; or join sports clubs and take part in extracurricular group activities.
Your refusal to have your child vaccinated will be recorded on their health record.
Can’t I get my doctor to sign a form confirming the vaccines are safe?
Some parents have tried to make vaccination conditional upon the doctor signing a form guaranteeing that the vaccines are not harmful.
Sample documents of this type are available to parents on anti-vax websites.
Such documents aren’t worth the pixels they appear on, let alone the paper and ink it takes to print them. Your doctor will, quite rightly, refuse to sign.
Homeschooling doesn’t sound so bad
Parents of an estimated 50,000 children in France — who home educate for a variety of reasons, including religious ones — would agree with you. But there are strict rules in place.
Before you start, you must apply to your local authority for permission to home educate — a process which, by all accounts, is not straightforward, and whose approval is far from a given.
The more usual grounds for homeschooling include specific health or disability needs, a family that moves too frequently for regular school attendance, or a child whose talent in sport or art requires intensive practice.
Once accepted, brace for regular inspections to ensure you are teaching a balanced and appropriate curriculum.
On the rugged edge of the Massif Central, Dominic Rippon uncovers a corner of France where vineyards cling to cliffs and tradition is being reborn…
France is blessed with a wealth of beautiful wine regions. Think of Alsace’s fairy-tale hillsides, Jura’s subalpine slopes, or the Roussillon, where vineyards stretch south into the Pyrenees. Yet Aveyron, little known outside its borders, might just outshine them all. Here in the northernmost reaches of Occitanie, vines are woven into the wild foothills of the Massif Central, a stunning landscape of dizzying terraces and timeless stone villages.
Part of Aveyron’s mystery lies in its small scale. The heart of production is Marcillac, a patchwork of only 200 hectares just north of Rodez. The star here is the Fer Servadou grape, known locally as Mansois: an ancient cousin of Cabernet, it thrives in this high, rocky terrain. Thick-skinned and resilient, it produces dark, spicy cassis-scented reds, with a freshness sharpened by the cool altitude and a ripeness coaxed by warm autumn breezes. Until the 1960s, however, these hills were better known for digging coal than for tending vines, as wine cellars churned out thin, rough piquette to slake the thirst of the miners.
When the pits closed, growers had to change course: vines were replanted, production scaled back, and ancient terraces were reshaped to allow for the passage of modern equipment. Out of that transformation, quality began to emerge and recognition followed. The vineyards of Marcillac gained appellation status in 1990, and in 2011 Estaing, Côtes de Millau, and Entraygues-Le Fel joined the fold. These smaller areas specialise in lively whites made from Chenin Blanc and Mauzac, while Fer Servadou again shapes the reds – either as a pure varietal or blended with Gamay. In Côtes de Millau, to the south, Syrah adds a distinctly Mediterranean accent to the wines.
Today, the region is gradually finding its voice again. What were once dismissed as humble ‘miners’ wines’ are now capturing the attention of sommeliers and more adventurous drinkers. Cooperative cellars like the Vignerons du Vallon have led the revival, inspiring young winemakers to reclaim the dramatic terraces and rediscover the beauty of working some of France’s most striking vineyard landscapes.
Paris (France) (AFP) – A French jihadist was sentenced to life in jail on Friday for involvement in Islamic State group atrocities against Iraq’s Yazidi minority, the first case in France to tackle the issue.
The Paris Assizes Court found Sabri Essid guilty in absentia of genocide, crimes against humanity and complicity in the crimes, committed between 2014 and 2016 when the jihadists occupied swathes of northern Syria and Iraq.
“Sabri Essid took part in the genocide perpetrated by Islamic State,” presiding judge Marc Sommerer told the court.
“Essid became part of the criminal network repeatedly buying and reselling a very large number of Yazidi victims,” he said, adding the court judged that the group had “specifically targeted” the Yazidi minority for its religious beliefs.
The Islamic State group regarded the Yazidis, who follow a pre-Islamic faith, as heretics.
Essid, a Frenchman born in 1984 and who joined IS in Syria in 2014, is presumed to have been killed in 2018. But without proof of his death, he was tried and convicted in absentia.
He is accused of buying several Yazidi women at markets and then repeatedly raping them, as well as depriving them of water and food.
IS seized large swathes of Syria and neighbouring Iraq in 2014, declaring a so-called caliphate there.
In August of that year, they murdered thousands of Yazidi men in Iraq’s Sinjar province and took into Syria thousands of women and girls to sell them in markets as sex slaves to be abused by jihadists from around the world.
United Nations investigators have qualified these actions as genocide.
On Thursday, a Yazidi woman who was sold by IS as a sex slave described in stark detail to the Paris court the horrors she endured under jihadist captivity in Syria.
She said she was raped almost daily by her first two owners – a married Saudi man and then Essid. She was resold to six other men before escaping with her daughter and walking through the night to reach a post manned by Kurdish forces.
Sommerer said on Thursday he had overseen several trials for crimes against humanity but had “never heard before” the atrocities endured by the woman, whose name AFP is withholding to protect her privacy.
Known in Syria as Abu Dojanah al-Faransi, Essid was thought to be close to Jean-Michel and Fabien Clain.
The Clain brothers, now believed to be dead, claimed responsibility on behalf of IS for France’s worst-ever jihadist attacks in Paris in 2015.
Lawyers had earlier stressed the significance of the Essid trial.
“Given that in the past Islamic State fighters believed to be dead have resurfaced, it is essential that this trial take place,” said Patrick Baudouin, a lawyer for France’s Human Rights League.
“It is essential that it shed light on the particularly grave abuses committed against civilian populations and in particular the genocidal policy implemented against the Yazidi population,” said Clemence Bectarte, a lawyer representing three Yazidi women survivors and their eight children.
After Essid went to Syria, his wife, their three children and her son from a previous relationship joined him.
In an IS propaganda video released in 2015, Essid is seen pushing his 12-year-old stepson to shoot a Palestinian hostage in the head.
His wife has been jailed since returning to France.
Similar trials have taken place elsewhere in Europe.
In 2021, a German court issued the first ruling worldwide to recognise crimes against the Yazidi community as genocide.
It sentenced an Iraqi man to life in jail on charges that he chained a five-year-old Yazidi girl he kept as a “house slave” outdoors in heat of up to 50C as punishment for wetting her mattress, leaving her to die of thirst.
Last month, a Swedish court convicted a 52-year-old woman of genocide for keeping Yazidi women and children as slaves in Syria in 2015.
US-backed forces eventually defeated the IS proto-state in 2019, though isolated cells still operate in the Syrian desert.
Hussein Qaidi, who heads the Kidnapped Yazidi Rescue Office, told AFP last year that IS had abducted 6,416 Yazidis, more than half of whom had since been rescued.
The war in Iran has been unfolding at breakneck pace ever since the US and Israel launched a series of surprise attacks across the country on February 28. An eye-watering 1,000 targets were hit in the first 24 hours of operation ‘Epic Fury’ alone, and within days, Supreme Leader Ali Khamenei and numerous other high-level Iranian officials were assassinated in targeted strikes.
In a video posted to X on March 11, Admiral Brad Cooper, head of US Central Command (CENTCOM), said that American forces had at the time hit more than 5,500 targets inside Iran. Cooper credited the success of at least part of those operations to advanced AI tools. “Humans will always make final decisions on what to shoot and what not to shoot and when to shoot. But advanced AI tools can turn processes that used to take hours and sometimes even days into seconds,” he said. The statement offered rare insight into how AI is used in modern warfare.
Since announcing a $200 million defence contract in July 2025, AI company Anthropic quickly became embedded in the military‘s workflow and its AI model, called ‘Claude’, was the first approved to operate on classified military networks.
Then came a public squabble.
Days before the attacks on Iran, Anthropic’s leadership refused the Pentagon’s demand for “unrestricted” access to Claude. Its co-founder Dario Amodei released a public statement, saying that Anthropic could not “in good conscience” accede to the Pentagon’s requests, and adding that “some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
Anthropic implied that the US Department of War was attempting to overturn two of its usage restrictions: the bans on using its AI models for mass domestic surveillance and for fully autonomous weapons. Just hours after the statement was released, another AI company – Sam Altman’s OpenAI – swooped in and took Anthropic’s place in the Department of War.
US Secretary of War Pete Hegseth in turn banned Anthropic, and called its decision to turn down the Pentagon’s defense contract a “master class in arrogance and betrayal”, adding that his department would designate Anthropic a Supply-Chain Risk to National Security (recent reports claim that Anthropic’s AI tools remain in use despite the blacklisting, and that Pentagon staffers are reluctant to use other models). Anthropic sued the Department of War and other federal agencies in response.
The feud brought to light a side of AI that could have more worrying consequences than slop-associated brainrot. Its role in defence is accelerating at a disquieting pace, and the US is not the only government with which it is enmeshed. Israel, China, Russia, France and the UK are among the growing list of nations incorporating large language models (LLMs) – modern AIs pre-trained on vast amounts of data – into their defence systems.
Information about how, where and why it is used is slowly beginning to trickle out.
A lack of accuracy and oversight
The US military has reportedly been using the Maven Smart System built by Palantir and Anthropic’s technology during their operations in Iran. Maven was also used by the Pentagon to capture Venezuelan President Nicolas Maduro, according to reports by the Wall Street Journal.
The precise details of its use are unlikely to be made public. Dr. Heidy Khlaaf, Chief AI Scientist at the AI Now Institute, says its primary role is to streamline what’s known as a ‘kill chain’ – a military concept which identifies the sequence of an attack.
“That would include surveillance, gathering intelligence, selection and then ultimately striking a target,” says Khlaaf. She says that AI could significantly reduce the time for each step, and the personnel required to do it. “For example Maven, Anthropic’s and Palantir’s system, claims that their technology allowed one unit of just 20 people to do the work of 2,000 staff. Speed is what’s being sold here.”
Most of these AI tools collate, analyse and synthesise data in what are known as ‘decision support systems’. Khlaaf says that, theoretically, decision support systems just make military recommendations and require oversight. However, she adds, that oversight may not be very effective.
“People have what we call an automation bias, which is a tendency for humans to favor suggestions made by automated systems like AI. So that oversight is really superficial in practice, especially in the military space where the automation bias is the worst. Human beings just become rubber stamps at that point. It veers into autonomous systems technologies,” Khlaaf argues.
Autonomous weapon systems have the power to select targets and perform their function without any oversight from a human being. It’s unclear if the Pentagon’s push for unrestricted use of AI signals the use of these systems, which have the power to launch strikes on their own.
The US isn’t the only government using AI to streamline the kill chain. Before Israeli jets fired the ballistic missiles that killed Iran’s Supreme Leader Ali Khamenei, Israel’s intelligence services had long used AI to monitor Tehran’s hacked traffic cameras and intercept communications. The main platform used by the IDF today is an AI system called Habsora (“The Gospel”), which has allegedly been used to generate large numbers of target recommendations – often residential homes linked to suspected Hamas members in Gaza.
Former intelligence officers describe it as enabling a “mass assassination factory,” with strikes reportedly killing entire families, including in homes with no confirmed militant presence.
This is even more disturbing when the inaccuracy rates of AI are taken into account. Khlaaf says generative AI and large language models (LLMs) often have accuracy rates as low as 50% or even lower. For targeting systems, such as those investigated in Israel’s Gaza operations like Gospel or Lavender, accuracy is as low as 25-30% in some cases.
Accuracy rates are especially low for large language models like the ones militaries are using now. “We have been using AI since the 1960s, but those were built by the military itself and worked with limited data sets that focus on specific tasks, so they were a lot more accurate,” Khlaaf says.
She says the sheer scale of new AI models makes things inherently opaque – “One model will have to analyse billions of data points. How would you know where the errors come from?”
Safety and security risks
Most AI models used in the military are ‘black boxes’ – their internal workings are a mystery to their users. Khlaaf cites a recent investigation that quoted the US military as saying it had “no way of knowing” whether it used artificial intelligence in conducting a specific airstrike in Iraq in February 2024 that killed 20-year-old student Abdul-Rahman al-Rawi.
“This is a huge problem because it completely obscures accountability. There’s no way of knowing if attacks are deliberate, if they are intelligence failures or if the AI is inaccurate. The black box nature of AI makes it particularly opaque.”
That opacity won’t just hurt the people being targeted – according to Khlaaf, it could also end up putting the national security of the countries using AI at risk.
“Large language models are seriously compromised: they have huge amounts of vulnerabilities because they are trained on the open internet. There is no control of the supply chain. False claims can come from reddit, or someone’s blog posts – there isn’t much discretion in picking what goes in.”
“So it’s easy to create backdoors and very hard to find out. We have seen targeted operations from Russia and China, where huge amounts of propaganda have been put out into the world to try and change the outcomes of large language models. Anthropic has even said that you need just 250 data points to change the behaviour of an AI model. We could be compromised right now, and we wouldn’t even know.”
Perhaps one of the most worrying aspects about AI use in military comes from a recent study led by Professor Kenneth Payne from the Department of Defence Studies at King’s College London. Three leading AI models, versions of GPT, Claude and Gemini, were placed in a tournament of 21 simulated nuclear crisis scenarios. The nuclear taboo, according to the paper, was weaker than expected. “Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats,” the paper said. “Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms.”
“Those results should give us pause,” Khlaaf says, citing the King’s College study as one sobering example. “There’s a lot of research that shows AI compromises restraint and increases the chances of escalation. This should be nowhere near human lives.”
AI is not a weapon of war like any other: unlike nuclear arms or missiles, the evidence of its efficacy, its accuracy and the full range of its uses remains unknown.
“There’s a narrative that this is like an AI arms race, that the military that conquers AI will win. But we have no idea if that’s true, because we lack the evidence to show that the tech will actually serve a purpose we want,” says Khlaaf. “The US is seen as setting a precedent for AI, but it should really be a cautionary tale.”
Three men, two of them activity leaders, have been arrested over sexual assaults committed against 12 children, aged 3 to 9, at three Paris schools, AFP learned on Friday from a source close to the case. The men were arrested and brought before the courts over the past two weeks, the same source said.
The cases concern a school in Paris’s 15th arrondissement, Vigée Lebrun, where nine victims aged 6 to 9 were assaulted by an activity leader; a nursery school in the 20th arrondissement, Grands Champs, where two children aged 3 and 4 were assaulted by a teacher’s husband; and a school in the 10th arrondissement, where a 5-year-old girl was assaulted by an activity leader.
At the same establishment in the 10th arrondissement, the École Aqueduc, a teacher was also arrested and taken into police custody over sexual assaults on six children aged 3 and 4, but has not been referred for prosecution, the source said.
The three investigations have been entrusted to the intra-family section of the Brigade de protection des mineurs (BPM), the child protection unit of the Paris judicial police.
In Paris, the municipal election battle has been caught up in the scandal of sexual violence in after-school care. The right accuses the outgoing majority of inaction. After announcing a plan to combat sexual violence in mid-November, the city insists that suspensions of implicated staff are now “immediate”.
A new report by investigative group Disclose reveals that police in France are using smartphones equipped with facial recognition technology to access sensitive records during routine identity checks, in what critics say is a breach of French law.
The investigation, published this week, claims officers are using police-issued mobile devices to search for people in a restricted database using only their photo.
The technology has been available to police since 2022 and gendarmes since 2020, according to Interior Ministry documents seen by Disclose.
The newsroom spoke to people in France’s three largest cities – Paris, Marseille and Lyon – who said they had been photographed and identified by police in the past four years, in some cases without their consent.
Facial matching software installed on the devices allows police to identify individuals by cross-referencing their image with the criminal records database (TAJ), which holds millions of photos. The database contains information not only on people accused of offences, but also about victims and missing persons.
According to official procedure, only authorised police officers are supposed to access the database, and only when investigating an offence. Consulting it unlawfully is punishable by a fine or even prison time.
Yet Disclose obtained documents from France’s police oversight body, the IGPN, that state officers “very frequently” open the database during identity checks – and raise concerns that access via smartphones will only increase the number of unjustified searches.
France lacks a comprehensive legal framework regulating the use of facial recognition, says digital rights group La Quadrature du Net, which has sounded the alarm over unchecked use of the technology before.
In response to the latest revelations, it accused the Interior Ministry of knowingly organising “abusive and illegal surveillance”.
A French gendarme next to a video surveillance camera in Sainte-Soline, central-western France, on 18 July 2024. © AFP – Philippe Lopez
Current French law only expressly authorises real-time facial recognition as part of investigations of serious crimes or during automated passport scans at border checkpoints.
While the country approved the use of artificial intelligence-powered video surveillance in the run-up to the Paris Olympics, legislators have remained more cautious about allowing widespread facial scans.
Limited experiments have taken place locally – notably in Nice, which was the first city to trial the technology when it scanned the faces of thousands of volunteers attending its carnival in 2019.
France’s data protection watchdog, CNIL, has repeatedly warned against privacy risks. Last year, a proposal to ban facial recognition was submitted to parliament, but has not yet been passed.
Tech multi-billionaire Elon Musk could be forced to pay billions of dollars in damages after a federal jury found that the world’s richest man had misled Twitter shareholders while purchasing the social media platform. Musk was found to have posted false statements to drive down the company’s share price. FRANCE 24 correspondent Wassim Cornet reports from Los Angeles.