Dataset columns:
halid: string (lengths 8–12)
lang: string (1 distinct value)
domain: sequence (lengths 1–8)
timestamp: string (938 distinct values)
year: string (55 distinct values)
url: string (lengths 43–389)
text: string (lengths 16–2.18M)
size: int64 (16–2.18M)
authorids: sequence (lengths 1–102)
affiliations: sequence (lengths 0–229)
01757864
en
[ "sdv" ]
2024/03/05 22:32:10
2018
https://amu.hal.science/hal-01757864/file/Desmarchelier%20et%20al%20final%20version.pdf
Charles Desmarchelier Véronique Rosilio email: [email protected] David Chapron Ali Makky Damien Preveraud Estelle Devillard Véronique Legrand-Defretin Patrick Borel Damien P Prévéraud Molecular interactions governing the incorporation of cholecalciferol and retinyl-palmitate in mixed taurocholate-lipid micelles Keywords: bioaccessibility, surface pressure, bile salt, compression isotherm, lipid monolayer, vitamin A, vitamin D, phospholipid ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Retinyl esters and cholecalciferol (D 3 ) (Figure 1) are the two main fat-soluble vitamins found in foods of animal origin. There is a renewed interest in deciphering their absorption mechanisms because vitamin A and D deficiency is a public health concern in numerous countries, and it is thus of relevance to identify factors limiting their absorption to tackle this global issue. The fate of these vitamins in the human upper gastrointestinal tract during digestion is assumed to follow that of dietary lipids [START_REF] Borel | Vitamin D bioavailability: state of the art[END_REF]. This includes emulsification, solubilization in mixed micelles, diffusion across the unstirred water layer and uptake by the enterocyte via passive diffusion or apical membrane proteins [START_REF] Reboul | Proteins involved in uptake, intracellular transport and basolateral secretion of fat-soluble vitamins and carotenoids by mammalian enterocytes[END_REF]. Briefly, following consumption of vitamin-rich food sources, the food matrix starts to undergo degradation in the acidic environment of the stomach, which contains several enzymes, leading to a partial release of these lipophilic molecules and to their transfer to the lipid phase of the meal. Upon reaching the duodenum, the food matrix is further degraded by pancreatic secretions, promoting additional release from the food matrix, and both vitamins then transfer from oil-in-water emulsions to mixed micelles (and possibly other structures, such as vesicles, although not demonstrated yet). As it is assumed that only free retinol can be taken up by enterocytes, retinyl esters are hydrolyzed by pancreatic enzymes, namely pancreatic lipase, pancreatic lipase-related protein 2 and cholesterol ester hydrolase [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. Bioaccessible vitamins are then taken up by enterocytes via simple passive diffusion or facilitated diffusion mediated by apical membrane proteins (Desmarchelier et al. 2017). The apical membrane protein(s) involved in retinol uptake by enterocytes is(are) yet to be identified but in the case of D 3 , three proteins have been shown to facilitate its uptake: NPC1L1 (NPC1 like intracellular cholesterol transporter 1), SR-BI (scavenger receptor class B member 1) and CD36 (Cluster of differentiation 36) [START_REF] Reboul | Proteins involved in uptake, intracellular transport and basolateral secretion of fat-soluble vitamins and carotenoids by mammalian enterocytes[END_REF]. Both vitamins then transfer across the enterocyte towards the basolateral side. The transfer of vitamin A is mediated, at least partly, by the cellular retinol-binding protein, type II (CRBPII), while that of vitamin D is carried out by unknown mechanisms. 
Additionally, a fraction of retinol is re-esterified by several enzymes (Borel & Desmarchelier 2017). Vitamin A and D are then incorporated in chylomicrons in the Golgi apparatus before secretion in the lymph. The solubilization of vitamins A and D in mixed micelles, also called micellarization or micellization, is considered as a key step for their bioavailability because it is assumed that the non-negligible fraction of fat-soluble vitamin that is not micellarized is not absorbed [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. Mixed micelles are mainly made of a mixture of bile salts, phospholipids and lysophospholipids, cholesterol, fatty acids and monoglycerides [START_REF] Hernell | Physical-chemical behavior of dietary and biliary lipids during intestinal digestion and absorption. 2. Phase analysis and aggregation states of luminal lipids during duodenal fat digestion in healthy adult human beings[END_REF]). These compounds may form various self-assembled structures, e.g., spherical, cylindrical or disk-shaped micelles [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF][START_REF] Leng | Kinetics of the micelle-to-vesicle transition ; aquous lecithin-bile salt mixtures[END_REF] or vesicles, depending on their concentration, the bile salt/phospholipid ratio [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF], the phospholipid concentration, but also the ionic strength, pH and temperature of the aqueous medium (Madency & Egelhaaf 2010;[START_REF] Salentinig | Self-assembled structures and pKa value of oleic acid in systems of biological relevance[END_REF][START_REF] Cheng | Mixtures of lecithin and bile salt can form highly viscous wormlike micellar solutions in water[END_REF]. Fat-soluble micronutrients display large variations with regards to their solubility in mixed micelles [START_REF] Sy | Effects of physicochemical properties of carotenoids on their bioaccessibility, intestinal cell uptake, and blood and tissue concentrations[END_REF][START_REF] Gleize | Form of phytosterols and food matrix in which they are incorporated modulate their incorporation into mixed micelles and impact cholesterol micellarization[END_REF] and several factors are assumed to account for these differences (Desmarchelier & Borel 2017, for review). The mixed micelle lipid composition has been shown to significantly affect vitamin absorption. For example, the substitution of lysophospholipids by phospholipids diminished the lymphatic absorption of vitamin E in rats [START_REF] Koo | Phosphatidylcholine inhibits and lysophosphatidylcholine enhances the lymphatic absorption of alpha-tocopherol in adult rats[END_REF]. In rat perfused intestine, the addition of fatty acids of varying chain length and saturation degree, i.e. butyric, octanoic, oleic and linoleic acid, resulted in a decrease in the rate of D 3 absorption [START_REF] Hollander | Vitamin D-3 intestinal absorption in vivo: influence of fatty acids, bile salts, and perfusate pH on absorption[END_REF]. The effect was more pronounced in the ileal part of the small intestine following the addition of oleic and linoleic acid. 
It was suggested that unlike short-and medium-chain fatty acids, which are not incorporated into micelles, long-chain fatty acids hinder vitamin D absorption by causing enlargement of micelle size, thereby slowing their diffusion towards the enterocyte. Moreover, the possibility that D 3 could form self-aggregates in water [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF], although not clearly demonstrated, has led to question the need of mixed micelles for its solubilization in the aqueous environment of the intestinal tract lumen [START_REF] Rautureau | Aqueous solubilisation of vitamin D3 in normal man[END_REF][START_REF] Maislos | Bile salt deficiency and the absorption of vitamin D metabolites. In vivo study in the rat[END_REF]. This study was designed to compare the relative solubility of D 3 and RP in the aqueous phase rich in mixed micelles that exists in the upper intestinal lumen during digestion, and to dissect, by surface tension and surface pressure measurements, the molecular interactions existing between these vitamins and the mixed micelle components that explain the different solubility of D 3 and RP in mixed micelles. Materials and methods Chemicals 2-oleoyl-1-palmitoyl-sn-glycero-3-phosphocholine (POPC) (phosphatidylcholine, ≥99%; Mw 760.08 g/mol), 1-palmitoyl-sn-glycero-3-phosphocholine (Lyso-PC) (lysophosphatidylcholine, ≥99%; Mw 495.63 g/mol), free cholesterol (≥99%; Mw 386.65 g/mol), oleic acid (reagent grade, ≥99%; Mw 282.46 g/mol), 1-monooleoyl-rac-glycerol (monoolein, C18:1,-cis-9, Mw 356.54 g/mol), taurocholic acid sodium salt hydrate (NaTC) (≥95%; Mw 537.68 g/mol) ), cholecalciferol (>98%; Mw 384.64 g/mol; melting point 84.5°C; solubility in water: 10 -4 -10 -5 mg/mL; logP 7.5) and retinyl palmitate (>93.5%; Mw 524.86 g/mol; melting point 28.5°C; logP 13.6) were purchased from Sigma-Aldrich (Saint-Quentin-Fallavier, France). Chloroform and methanol (99% pure) were analytical grade reagents from Merck (Germany). Ethanol (99.9%), n-hexane, chloroform, acetonitrile, dichloromethane and methanol were HPLC grade reagents from Carlo Erba Reagent (Peypin, France). Ultrapure water was produced by a Milli-Q ® Direct 8 Water Purification System (Millipore, Molsheim, France). Prior to all surface tension, and surface pressure experiments, all glassware was soaked for an hour in a freshly prepared hot TFD4 (Franklab, Guyancourt, France) detergent solution (15% v/v), and then thoroughly rinsed with ultrapure water. Physico-chemical properties of D 3 and RP were retrieved from PubChem (https://pubchem.ncbi.nlm.nih.gov/). Micelle formation The micellar mixture contained 0.3 mM monoolein, 0.5 mM oleic acid, 0.04 mM POPC, 0.1 mM cholesterol, 0.16 mM Lyso-PC, and 5 mM NaTC [START_REF] Reboul | Lutein transport by Caco-2 TC-7 cells occurs partly by a facilitated process involving the scavenger receptor class B type I (SR-BI)[END_REF]. Total component concentration was thus 6.1 mM, with NaTC amounting to 82 mol%. Two vitamins were studied: crystalline D 3 and RP. Mixed micelles were formed according to the protocol described by [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. 
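For readers who want to check the composition figures given in the micelle formation protocol above, here is a minimal sketch (plain Python; the concentrations are copied from the text, everything else is ours) that recomputes the total concentration, the LDP total and the NaTC molar fraction:

```python
# Concentrations (mM) of the mixed-micelle components quoted in the protocol above
components = {
    "monoolein": 0.3,
    "oleic acid": 0.5,
    "POPC": 0.04,
    "cholesterol": 0.1,
    "Lyso-PC": 0.16,
    "NaTC": 5.0,
}

total = sum(components.values())            # 6.1 mM in total
ldp_total = total - components["NaTC"]      # 1.1 mM of lipid digestion products (LDP)
natc_molar_pct = 100 * components["NaTC"] / total

print(f"total = {total:.1f} mM, LDP = {ldp_total:.1f} mM, NaTC = {natc_molar_pct:.0f} mol%")
```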
Lipid digestion products (LDP) (monoolein, oleic acid, POPC, cholesterol and Lyso-PC, total concentration 1.1 mM) dissolved in chloroform/methanol (2:1, v/v), and D 3 or RP dissolved in ethanol were transferred to a glass tube and the solvent mixture was carefully evaporated under nitrogen. The dried residue was dispersed in Tris buffer (Tris-HCl 1mM, CaCl 2 5mM, NaCl 100 mM, pH 6.0) containing 5 mM taurocholate, and incubated at 37 °C for 30 min. The solution was then vigorously mixed by sonication at 25 W (Branson 250W sonifier; Danbury, CT, U.S.A.) for 2 min, and incubated at 37 °C for 1 hour. To determine the amount of vitamin solubilized in structures allowing their subsequent absorption by enterocytes (bioaccessible fraction), i.e. micelles and possibly small lipid vesicles, whose size is smaller than that of mucus pores [START_REF] Cone | Barrier properties of mucus[END_REF], the solutions were filtered through cellulose ester membranes (0.22 µm) (Millipore), according to [START_REF] Tyssandier | Processing of vegetable-borne carotenoids in the human stomach and duodenum[END_REF]. The resulting optically clear solution was stored at -20 °C until vitamin extraction and HPLC analysis. D 3 and RP concentrations were measured by HPLC before and after filtration. For surface tension measurements and cryoTEM experiments, the mixed micelle systems were not filtered. Self-micellarization of D 3 Molecular assemblies of D 3 were prepared in Tris buffer using the same protocol as for mixed micelles. D 3 was dissolved into the solvent mixture and after evaporation, the dry film was hydrated for 30 min at 37°C with taurocholate-free buffer. The suspension was then sonicated. All D 3 concentrations reported in the surface tension measurements were obtained from independent micellarization experiments -not from the dilution of one concentrated D 3 solution. Surface tension measurements Mixed micelle solutions were prepared as described above, at concentrations ranging from 5.5 nM to 55 mM, with the same proportion of components as previously mentioned. The surface tension of LDP mixtures hydrated with a taurocholate-free buffer, and that of pure taurocholate solutions were also measured at various concentrations. The solutions were poured into glass cuvettes. The aqueous surface was cleaned by suction, and the solutions were left at rest under saturated vapor pressure for 24 hours before measurements. For penetration studies, glass cuvettes with a side arm were used, allowing injection of NaTC beneath a spread LDP or vitamin monolayer. Surface tension measurements were performed by the Wilhelmy plate method, using a thermostated automatic digital tensiometer (K10 Krüss, Germany). The surface tension g was recorded continuously as a function of time until equilibrium was reached. All experiments were performed at 25 ±1°C under saturated vapor pressure to maintain a constant level of liquid. The reported values are mean of three measurements. The experimental uncertainty was estimated to be 0.2 mN/m. Surface pressure (π) values were deduced from the relationship π = γ 0 -γ, with γ 0 the surface tension of the subphase and γ the surface tension in the presence of a film. Surface pressure measurements Surface pressure-area π-A isotherms of the LDP and LDP-vitamin mixtures were obtained using a thermostated Langmuir film trough (775.75 cm 2 , Biolin Scientific, Finland) enclosed into a Plexiglas box (Essaid et al. 2016). 
Solutions of lipids in a chloroform/methanol (9:1, v/v) mixture were spread onto a clean buffer subphase. Monolayers were left at rest for 20 minutes to allow complete evaporation of the solvents. They were then compressed at low speed (6.5 Å²·molecule⁻¹·min⁻¹) to minimize the occurrence of metastable phases. The experimental uncertainty was estimated to be 0.1 mN/m. All experiments were run at 25 ±1°C. Mean isotherms were deduced from at least three compression isotherms. The surface compressional moduli K of monolayers were calculated using Eq. 1: K = −A (dπ/dA)_T (Eq. 1). Excess free energies of mixing were calculated according to Eq. 2: ΔG_EXC = ∫₀^π [A₁₂ − (X_L A_L + X_VIT A_VIT)] dπ (Eq. 2), with A₁₂ the mean molecular area in the mixed monolayer, X_L and A_L the molar fraction and molecular area of lipid molecules, and X_VIT and A_VIT the molar fraction and molecular area of vitamin molecules, respectively [START_REF] Ambike | Interaction of self-assembled squalenoyl gemcitabine nanoparticles with phospholipid-cholesterol monolayers mimicking a biomembrane[END_REF]. Cryo-TEM analysis A drop (5 µL) of LDP-NaTC micellar solution (15 mM), LDP-NaTC-D 3 (3:1 molar ratio) or pure D 3 "micellar suspension" (5 mM, theoretical concentration) was deposited onto a perforated carbon-coated copper grid (TedPella, Inc); the excess of liquid was blotted with a filter paper. The grid was immediately plunged into a liquid ethane bath cooled with liquid nitrogen (−180 °C) and then mounted on a cryo holder [START_REF] Da Cunha | Overview of chemical imaging methods to address biological questions[END_REF]. Transmission electron microscopy (TEM) measurements were performed just after grid preparation using a JEOL 2200FS (JEOL USA, Inc., Peabody, MA, U.S.A.) working under an acceleration voltage of 200 kV (Institut Curie). Electron micrographs were recorded by a CCD camera (Gatan, Evry, France). 2.7. Vitamin analysis 2.7.1. Vitamin extraction. D 3 and RP were extracted from 500 µL aqueous samples using the following method [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]: retinyl acetate was used as an internal standard and was added to the samples in 500 µL ethanol. The mixture was extracted twice with two volumes of hexane. The hexane phases obtained after centrifugation (1200 × g, 10 min, 10°C) were evaporated to dryness under nitrogen, and the dried extract was dissolved in 200 µL of acetonitrile/dichloromethane/methanol (70:20:10, v/v/v). A volume of 150 µL was used for HPLC analysis. Extraction efficiency was between 75 and 100%. Samples whose extraction efficiency was below 75% were re-extracted or excluded from the analysis. 2.7.2. Vitamin HPLC analysis. D 3, RP and retinyl acetate were separated using a 250 × 4.6-mm RP C18, 5-µm Zorbax Eclipse XDB column (Agilent Technologies, Les Ulis, France) and a guard column. The mobile phase was a mixture of acetonitrile/dichloromethane/methanol (70:20:10, v/v/v). Flow rate was 1.8 mL/min and the column was kept at a constant temperature (35 °C). The HPLC system comprised a Dionex separation module (P680 HPLC pump and ASI-100 automated sample injector, Dionex, Aix-en-Provence, France). D 3 was detected at 265 nm while retinyl esters were detected at 325 nm and were identified by retention time compared with pure (>95%) standards. 
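As an illustration of Eq. 1 and Eq. 2 above, the following sketch (assuming NumPy; the isotherm points are invented for demonstration and are not data from this study) estimates the compressibility modulus by numerical differentiation of a π-A isotherm and integrates the excess free energy of mixing over surface pressure:

```python
import numpy as np

def compressibility_modulus(area, pi):
    """Eq. 1: K = -A * (dπ/dA), computed pointwise along a compression isotherm.
    area : molecular areas (Å²/molecule), decreasing during compression
    pi   : surface pressures (mN/m) at the same points
    """
    return -area * np.gradient(pi, area)

def excess_free_energy(pi, a_mix, a_lipid, a_vit, x_lipid, x_vit):
    """Eq. 2: integral over [0, π] of (A_mix - (X_L*A_L + X_VIT*A_VIT)) dπ.
    All area arrays must be sampled on the same surface-pressure grid `pi`.
    A negative result indicates favourable (attractive) mixing.
    """
    excess_area = a_mix - (x_lipid * a_lipid + x_vit * a_vit)
    return np.trapz(excess_area, pi)

# Hypothetical isotherm points, for illustration only
pi      = np.linspace(0, 30, 7)                               # mN/m
a_ldp   = np.array([90, 75, 65, 57, 50, 45, 41], dtype=float)  # Å²/molecule
a_d3    = np.array([60, 52, 47, 43, 40, 38, 36], dtype=float)
a_mixed = np.array([80, 66, 57, 50, 44, 40, 37], dtype=float)

K = compressibility_modulus(a_mixed, pi)
dg = excess_free_energy(pi, a_mixed, a_ldp, a_d3, x_lipid=0.7, x_vit=0.3)
print(f"K_max ≈ {K.max():.0f} mN/m, ΔG_EXC ∝ {dg:.1f} (negative → favourable mixing)")
```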
Quantification was performed using Chromeleon software (version 6.50, SP4 Build 1000) comparing the peak area with standard reference curves. All solvents used were HPLC grade. Statistical analysis Results are expressed as means ± standard deviation. Statistical analyses were performed using Statview software version 5.0 (SAS Institute, Cary, NC, U.S.A.). Means were compared by the non-parametric Kruskal-Wallis test, followed by Mann-Whitney U test as a post hoc test for pairwise comparisons, when the mean difference using the Kruskal-Wallis test was found to be significant (P<0.05). For all tests, the bilateral alpha risk was α = 0.05. Results Solubilization of D 3 and RP in aqueous solutions rich in mixed micelles D 3 and RP at various concentrations were mixed with micelle components (LDP-NaTC). D 3 and RP concentrations were measured by HPLC before and after filtration of aggregates with a diameter smaller than 0.22 µm (Figure 2). D 3 and RP solubilization in the solution that contained mixed micelle solution followed different curves: D 3 solubilization was linear (R²=0.98, regression slope = 0.71) and significantly higher than that of RP, which reached a plateau with a maximum concentration around 125 µM. The morphology of the LDP-NaTC and LDP-NaTC-D 3 samples before filtration was analyzed by cryoTEM. In Figure 3, micelles are too small to be distinguished from ice. At high LDP-NaTC concentration (15 mM) small and large unilamellar vesicles (a), nano-fibers (b) and aggregates (c) are observed (Figure 3A). Both nano-fibers and aggregates seem to emerge from the vesicles. In the presence of D 3 at low micelle and D3 concentration (5 mM LDP-NaTC + 1.7 mM D 3 ) (Figures 3B and3C), the morphology of the nano-assemblies is greatly modified. Vesicles are smaller and deformed, with irregular and more angular shapes (a'). There are also more abundant. A difference in contrast in the bilayers is observed, which would account for leaflets with asymmetric composition. Some of them coalesce into larger structures, extending along the walls of the grid (d). Fragments and sheets are also observed (figure 3B). They exhibit irregular contour and unidentified membrane organization. The bilayer structure is not clearly observable. New organized assemblies appear, such as disk-like nano-assemblies (e) and emulsion-like droplets (f). At higher concentration (15 mM LDP-NaTC + 5 mM D 3 in figure 3D), the emulsion-like droplets and vesicles with unidentified membrane structure (g) are enlarged. They coexist with small deformed vesicules. Compression properties of LDP components, the LDP mixture and the vitamins To better understand the mechanism of D 3 and RP interaction with LDP-NaTC micelles, we focused on the interfacial behavior of the various components of the system. We first determined the interfacial behavior of the LDP components and their mixture in proportions similar to those in the micellar solution, by surface pressure measurements. The π-A isotherms are plotted in Figure 4A. Based on the calculated compressibility modulus values, the lipid monolayers can be classified into poorly organized (K < 100 mN/m, for lyso-PC, monoolein, and oleic acid), liquid condensed (100 < K < 250 mN/m, for POPC and the LDP mixture) and highly rigid monolayers (K > 250 mN/m, for cholesterol) [START_REF] Davies | Interfacial phenomena 2nd ed[END_REF]. The interfacial behavior of the two studied vitamins is illustrated in Figure 4B. 
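A minimal sketch of the statistical workflow described in the Statistical analysis paragraph above, assuming SciPy; the three groups below are placeholder values, not measurements from the study:

```python
from scipy.stats import kruskal, mannwhitneyu

# Placeholder replicate measurements (µM) for three hypothetical conditions
group_a = [118, 131, 124]
group_b = [156, 149, 162]
group_c = [210, 198, 221]

h, p = kruskal(group_a, group_b, group_c)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Pairwise Mann-Whitney U tests as post hoc comparisons (bilateral alpha = 0.05)
if p < 0.05:
    pairs = {"a vs b": (group_a, group_b),
             "a vs c": (group_a, group_c),
             "b vs c": (group_b, group_c)}
    for name, (x, y) in pairs.items():
        u, p_pair = mannwhitneyu(x, y, alternative="two-sided")
        print(f"{name}: U = {u:.1f}, p = {p_pair:.3f}")
```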
D 3 shows a similar compression profile to that of the LDP mixture, with comparable surface area and surface pressure at collapse (A c = 35 Å 2 , π c = 38 mN/m) but a much higher rigidity, as inferred from the comparison of their maximal K values (187.4 mN/m and 115.4 mN/m for D 3 and LDP, respectively). RP exhibits much larger surface areas and lower surface pressures than D 3 . The collapse of its monolayer is not clearly identified from the isotherms, and is estimated to occur at π c = 16.2 mN/m (A c = 56.0 Å 2 ), as deduced from the slope change in the π-A plot. Self-assembling properties of D 3 in an aqueous solution Since D 3 showed an interfacial behavior similar to that of the lipid mixture, and since it could be solubilized at very high concentrations in an aqueous phase rich in mixed micelles (as shown in Figure 2), its self-assembling properties were more specifically investigated. Dried D 3 films were hydrated with the sodium taurocholate free-buffer. Surface tension measurements at various D 3 concentrations revealed that the vitamin could adsorb at the air/solution interface, and significantly lower the surface tension of the buffer to g cmc = 30.6 mN/m. A critical micellar concentration (cmc = 0.45 µM) could be deduced from the γ-log C relationships and HPLC assays. Concentrated samples D 3 samples were analyzed by cryo-TEM (Figure 3E and3F). Different D 3 self-assemblies were observed, including circular nano-assemblies (h) coexisting with nano-fibers (i), and large aggregates (j) with unidentified structure. The analysis in depth of the circular nano-assemblies allowed to conclude that they were disk-like nano-assemblies, rather than nanoparticles. Interaction of LDP with NaTC To better understand how the two studied vitamins interacted with the mixed micelles, we compared the interfacial behaviors of the pure NaTC solutions, LDP mixtures hydrated by NaTC-free buffer, and LDP mixtures hydrated by the NaTC buffered solutions (full mixed micelle composition). The LDP mixture composition was maintained constant, while its concentration in the aqueous medium was increased. The concentration of NaTC in the aqueous phase was also calculated so that the relative proportion of the various components (LDP and NaTC) remained unchanged in all experiments. From the results plotted in Figure 5A, the critical micellar concentration (cmc) of the LDP-NaTC mixture was 0.122 mM (γ cmc = 29.0 mN/m), a concentration 50.8 times lower than the concentration used for vitamin solubilization. The cmc values for the LDP mixture and the pure NaTC solutions were 0.025 mM (γ cmc = 24.0 mN/m), and 1.5 mM (γ cmc = 45.3 mN/m), respectively. Experiments modeling the insertion of NaTC into the LDP film during rehydration by the buffer suggested that only few NaTC molecules could penetrate in the condensed LDP film (initial surface pressure: π i = 28 mN/m) and that the LDP-NaTC mixed film was not stable, as shown by the decrease in surface pressure over time (Figure 5B). Interaction of D 3 and RP with NaTC The surface tension of the mixed NaTC-LDP micelle solutions was only barely affected by the addition of 0.1 or 1 mM D 3 or RP: the surface tension values increased by no more than 2.8 mN/m. Conversely, both vitamins strongly affected the interfacial behavior of the NaTC micellar solution, as inferred from the significant surface tension lowering observed (-7.0 and -8.1 mN/m for RP and D 3 , respectively). 
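The cmc values quoted above are typically read off the γ-log C curve as the break point between two linear regimes. A possible implementation of that step (assuming NumPy; the data points are invented for illustration):

```python
import numpy as np

# Hypothetical surface tension data: gamma (mN/m) vs. concentration (mM)
conc  = np.array([0.001, 0.005, 0.02, 0.05, 0.1, 0.3, 1.0, 3.0])
gamma = np.array([68.0, 60.0, 50.0, 42.0, 36.0, 30.0, 29.5, 29.2])
logc  = np.log10(conc)

# Fit the descending branch (below the cmc) and the plateau (above the cmc) separately
pre  = np.polyfit(logc[:5], gamma[:5], 1)   # [slope, intercept] before the cmc
post = np.polyfit(logc[5:], gamma[5:], 1)   # near-flat branch above the cmc

# The cmc is the concentration at which the two fitted lines intersect
log_cmc = (post[1] - pre[1]) / (pre[0] - post[0])
print(f"estimated cmc ≈ {10**log_cmc:.3f} mM")
```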
Interaction of D 3 and RP with lipid digestion products The interaction between the vitamins and LDP molecules following their insertion into LDP micelles was modeled by compression of LDP/D 3 and LDP/RP mixtures at a 7:3 molar ratio. This ratio was chosen arbitrarily, to model a system in which LDP was in excess. The π-A isotherms are presented in Figures 6A and6B. They show that both vitamins modified the isotherm profile of the lipid mixture, however, not in the same way. In the LDP/D 3 mixture, the surface pressure and molecular area at collapse were controlled by LDP. For LDP/RP, despite the high content in LDP, the interfacial behavior was clearly controlled by RP. From the isotherms in Figures 6A and6B, compressibility moduli and excess free energies of mixing were calculated and compared (Figures 6C, and6D). D 3 increased the rigidity of LDP monolayers, whereas RP disorganized them. The negative ∆G EXC values calculated for the LDP-D 3 monolayers at all surface pressures account for the good mixing properties of D 3 and the lipids in all molecular packing conditions. Conversely for RP, the positive and increasing ∆G EXC values with the surface pressure demonstrate that its interaction with the lipids was unfavorable. Discussion The objective of this study was to compare the solubility of RP and D 3 in aqueous solutions containing mixed micelles, and to decipher the molecular interactions that explain their different extent of solubilization. Our first experiment revealed that the two vitamins exhibit very different solubilities in an aqueous medium rich in mixed micelles. Furthermore, the solubility of D 3 was so high that we did not observe any limit, even when D 3 was introduced at a concentration > 1mM in the aqueous medium. To our knowledge, this is the first time that such a difference is reported. Cryo-TEM pictures showed that D 3 dramatically altered the organization of the various components of the mixed micelles. The spherical vesicles were deformed with angular shapes. The nano-fibers initiating from the vesicles were no longer observed. Large irregular in shape vesicle and sheets, disk-like nano-assemblies and emulsionlike droplets appeared in LDP-NaTC-D 3 mixtures, only. The existence of so many different assemblies would account for a different interaction of D 3 with the various components of mixed micelles, and for a reorganization of the components. D 3 could insert in the bilayer of vesicles and deform them, but also form emulsion-like droplets with fatty acids and monoglyceride. It is noteworthy that these emulsion-like droplets were not observed in pure D 3 samples, nor mixed micelles. Since previous studies have shown that both bile salts and some mixed micelle lipids, e.g. fatty acids and phospholipids, can modulate the solubility of fatsoluble vitamins in these vehicles [START_REF] Yang | Vitamin E and vitamin E acetate solubilization in mixed micelles: physicochemical basis of bioaccessibility[END_REF], we decided to study the interactions of these two vitamins with either bile salts or micelle lipids to assess the specific role of each component on vitamin solubility in mixed micelles. The characteristics of pure POPC, Lyso-PC, monoolein, and cholesterol isotherms were in agreement with values published in the literature [START_REF] Pezron | Monoglyceride Surface-Films -Stability and Interlayer Interactions[END_REF]Flasinsky et al. 
2014;[START_REF] Huynh | Structural properties of POPC monolayers under lateral compression: computer simulations analysis[END_REF]. For oleic acid, the surface pressure at collapse was higher (π c = 37 mN/m) and the corresponding molecular area (A c = 26 Å 2 ) smaller than those previously published [START_REF] Tomoaia-Cotisel | Insoluble mixed monolayers[END_REF], likely due to the pH of the buffer solution (pH 6) and the presence of calcium. The interfacial properties of D 3 were close to those deduced from the isotherm published by [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF] for a D 3 monolayer spread from a benzene solution onto a pure water subphase. The molecular areas at collapse are almost identical in the two studies (about 36 Å 2 ), but the surface pressures differ (30 mN/m in Meredith and coworkers' study, and 38 mN/m in ours). Compressibility modulus values show that D 3 molecules form monolayers with higher molecular order than the LDP mixture, which suggests that they might easily insert into LDP domains. As could be expected from its chemical structure, RP exhibited a completely different interfacial behavior compared to D 3 and the LDP, even to lyso-PC which formed the most expanded monolayers of the series, and displayed the lowest collapse surface pressure. The anomalous isotherm profile of lyso-PC has been attributed to monolayer instability and progressive solubilization of molecules into the aqueous phase [START_REF] Heffner | Thermodynamic and kinetic investigations of the release of oxidized phospholipids from lipid membranes and its effect on vascular integrity[END_REF]. The molecular areas and surface pressures for RP have been compared to those measured by [START_REF] Asai | Formation and stability of the dispersed particles composed of retinyl palmitate and phosphatidylcholine[END_REF] for RP monolayers spread from benzene solutions at 25°C onto a water subphase. Their values are much lower than ours, accounting for even more poorly organized monolayers. The low collapse surface pressure could correspond to molecules partially lying onto the aqueous surface, possibly forming multilayers above 16 mN/m as inferred from the continuous increase in surface pressure above the change in slope of the isotherm. The maximal compressibility modulus confirms the poor monolayer order. The significant differences in RP surface pressure and surface area compared to the LDP mixture might compromise its insertion and stability into LDP domains. The dogma in nutrition is that fat-soluble vitamins need to be solubilized in bile salt micelles to be transported to the enterocyte and then absorbed. It is also well known that although NaTC primary micelles can be formed at 2-3 mM with a small aggregation number, concentrations as high as 10-12 mM are usually necessary for efficient lipid solubilization in the intestine [START_REF] Baskin | Bile salt-phospholipid aggregation at submicellar concentrations[END_REF]. Due to their chemical structure bile salts have a facial arrangement of polar and non-polar domains (Madency & Egelhaaf 2010). Their selfassembling (dimers, multimers, micelles) is a complex process involving hydrophobic interaction and cooperative hydrogen bonding, highly dependent on the medium conditions, and that is not completely elucidated. 
The cmc value for sodium taurocholate in the studied buffer was particularly low compared to some of those reported in the literature for NaTC in water or sodium chloride solutions (3-12 mM) [START_REF] Kratohvil | Concentration-dependent aggregation patterns of conjugated bile-salts in aqueous sodiumchloride solutions -a comparison between sodium taurodeoxycholate and sodium taurocholate[END_REF][START_REF] Meyerhoffer | Critical Micelle Concentration Behavior of Sodium Taurocholate in Water[END_REF][START_REF] Madenci | Self-assembly in aqueous bile salt solutions[END_REF]. At concentrations as high as 10-12 mM, NaTC molecules form elongated cylindrical "secondary" micelles [START_REF] Madenci | Self-assembly in aqueous bile salt solutions[END_REF][START_REF] Bottari | Structure and composition of sodium taurocholate micellar aggregates[END_REF]. The cryoTEM analysis did not allow to distinguish micelles from the ice. In our solubilization experiment, the concentration of NaTC did not exceed 5 mM. Nevertheless, the micelles proved to be very efficient with regards to vitamin solubilization. When bile salts and lipids are simultaneously present in the same environment, they form mixed micelles [START_REF] Hernell | Physical-chemical behavior of dietary and biliary lipids during intestinal digestion and absorption. 2. Phase analysis and aggregation states of luminal lipids during duodenal fat digestion in healthy adult human beings[END_REF]. Bile salts solubilize phospholipid vesicles and transform into cylindrical micelles [START_REF] Cheng | Mixtures of lecithin and bile salt can form highly viscous wormlike micellar solutions in water[END_REF]. [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF] suggested that sodium cholate cylindrical micelles evolved from the edge of lecithin bilayer sheets. Most published studies were performed at high phospholipid/bile salt ratio. In our system, the concentration of the phospholipids was very low compared to that of NaTC. We observed however the presence of vesicles, and nano-fiber structures emerging from them. In their cryoTEM analysis, [START_REF] Fatouros | Colloidal structures in media simulating intestinal fed state conditions with and without lypolysis products[END_REF] compared bile salt/phospholipid mixtures to bile salt/phospholipid/fatty acid/monoglyceride ones at concentrations closer to ours. They observed only micelles in bile salt/phospholipid mixtures. However, in the presence of oleic acid and monoolein, vesicles and bilayer sheets were formed. This would account for a reorganization of the lipids and bile salts in the presence of the fatty acid and the monoglyceride. We therefore decided to study the interactions between bile salts and LDP. The results obtained show that the surface tension, the effective surface tension lowering concentration, and cmc Solubilization experiments and the analysis of vitamin-NaTC interaction cannot explain why the LDP-NaTC mixed micelles solubilize D 3 better than RP. Therefore, we studied the interfacial behavior of the LDP mixture in the presence of each vitamin, to determine the extent of their interaction with the lipids. The results obtained showed that D 3 penetrated in LDP domains and remained in the lipid monolayer throughout compression. At large molecular areas, the π-A isotherm profile of the mixture followed that of the LDP isotherm with a slight condensation due to the presence of D 3 molecules. 
Above 10 mN/m, an enlargement of the molecular area at collapse and a change in the slope of the mixed monolayer was observed. However, the surface pressure at collapse was not modified, and the shape of the isotherm accounted for the insertion of D 3 molecules into LDP domains. This was confirmed by the surface compressional moduli. D 3 interacted with lipid molecules in such manner that it increased monolayer rigidity (K max = 134.8 mN/m), without changing the general organization of the LDP monolayer. The LDP-D 3 mixed monolayer thus appeared more structured than the LDP one. D 3 behavior resembles that of cholesterol in phospholipid monolayers, however without the condensing effect of the sterol [START_REF] Ambike | Interaction of self-assembled squalenoyl gemcitabine nanoparticles with phospholipid-cholesterol monolayers mimicking a biomembrane[END_REF]. The higher rigidity of LDP monolayer in the presence of D 3 could be related to the cryo-TEM pictures showing the deformed, more angular vesicles formed with LDP-NaTC-D 3 . The angular shape would account for vesicles with rigid bilayers [START_REF] Kuntsche | Cryogenic transmission electron microscopy (cryo-TEM) for studying the morphology of collidal drug delivery systems[END_REF]. For RP, the shape of the isotherms show evidence that lipid molecules penetrated in RP domains, rather than the opposite. Indeed, the π-A isotherm profile of the LDP-RP monolayer is similar to that of RP alone. The insertion of lipid molecules into RP domains is also attested by the increase in the collapse surface pressure from 16 to 22 mN/m. Partial collapse is confirmed by the decrease in the compressibility modulus above 22 mN/m. Thus, RP led to a destructuration of the LDP mixed monolayer and when the surface density of the monolayer increased, the vitamin was partially squeezed out from the interface. The calculated ∆G EXC values for both systems suggest that insertion of D 3 into LDP domains was controlled by favorable (attractive) interactions, whereas mixing of RP with LDP was limited due to unfavorable (repulsive) interactions, even at low surface pressures. According to [START_REF] Asai | Formation and stability of the dispersed particles composed of retinyl palmitate and phosphatidylcholine[END_REF], RP can be partially solubilized in the bilayer of phospholipids (up to 3 mol%), and the excess is separated from the phospholipids, and dispersed as emulsion droplets stabilized by a phospholipid monolayer. On the whole, the information obtained regarding the interactions of the two vitamins with NaTC and LDP explain why D 3 is more soluble than RP in an aqueous medium rich in mixed micelles. Both vitamins can insert into pure NaTC domains, but only D 3 can also insert into the LDP domains in LDP-enriched NaTC micelles. Furthermore, the results obtained suggest that this is not the only explanation. Indeed, since it has been suggested that D 3 could form cylindrical micelle-like aggregates [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF], we hypothesize that the very high solubility of D 3 in the aqueous medium rich in mixed micelles was partly due to the solubilization of a fraction of D 3 as self-aggregates. Indeed, we observed that D 3 at concentrations higher than 0.45 µM, could self-assemble into various structures including nano-fibers. To our knowledge, no such structures, especially nanofibers, have been reported for D 3 so far. 
Rod diameter was smaller than 10 nm, much smaller than for the rods formed by lithocholic acid, for example [START_REF] Terech | Self-assembled monodisperse steroid nanotubes in water[END_REF]. They were similar to those observed in highly concentrated LDP-NaTC mixtures, which seemed formed via desorganization of lipid vesicles. Disk-like and aggregates with unidentified structure, also observed in concentrated D 3 samples, could be related to these nano-fibers. In our solubilization experiments, which were performed at much higher D 3 concentrations, both insertion of D 3 molecules into NaTC and LDP domains, and D 3 self-assembling could occur, depending on the kinetics of insertion of D 3 into the NaTC-DLP mixed micelles. Conclusion The solubilization of a hydrophobic compound in bile salt-lipid micelles is dependent upon its chemical structure and its ability to interact with the mixed micelles components. Most hydrophobic compounds are expected to insert into the bile salt-lipid micelles. The extent of the solubilizing effect is, however, much more difficult to predict. As shown by others before us, mixed micelles components form a heterogeneous system with various molecular assemblies differing in shape and composition. The conditions of the medium (pH, ionic strength and temperature) affect the formation of these molecular assemblies, although we did not study this effect on our system. Our results showed that D 3 displayed a higher solubility in mixed micelle solutions than RP. This difference was attributed to the different abilities of the two vitamins to insert in between micelle components, but it was also explained by the propensity of D 3 , contrarily to RP, to self-associate into structures that are readily soluble in the aqueous phase. It is difficult to predict the propensity of a compound to self-association. We propose here a methodology that was efficient to distinguish between two solubilizing behaviors, and could be easily used to predict the solubilization efficiency of other hydrophobic compounds. Whether the D 3 self-assemblies are available for absorption by the intestinal cells needs further studies. values were very much influenced by LDP. The almost parallel slopes of Gibbs adsorption isotherms for pure NaTC and mixed NaTC-LDP suggest that LDP molecules inserted into NaTC domains, rather than the opposite. This was confirmed by penetration studies, which showed that NaTC (0.1 mM) could hardly penetrate in a compact LDP film. So, during lipid hydration, LDP molecules could insert into NaTC domains. The presence of LDP molecules improved NaTC micellarization.After having determined the interfacial properties of each micelle component and measured the interactions between NaTC and LDP, we assessed the ability of D 3 and RP to solubilize in either NaTC or NaTC-LDP micelles. Surface tension values clearly show that both vitamins could insert in between NaTC molecules adsorbed at the interface, and affected the surface tension in the same way. The interfacial behavior of the molecules being representative of their behavior in the bulk, it is reasonable to think that both D 3 and RP can be solubilized into pure NaTC micelles. For the mixed NaTC-LDP micelles, the change in surface tension was too limited to allow conclusions, but the solubilization experiments clearly indicated that neither vitamin was solubilized to the same extent. Figure 1 : 1 Figure 1: Chemical structures for D 3 and RP. 
Figure 2: Solubilization of D 3 and RP in aqueous solutions rich in mixed micelles.
Figure 3: Cryo-TEM morphology of (A) 15 mM mixed LDP-NaTC micelles, (B) and (C) 5 mM LDP-NaTC + 1.7 mM D 3, (D) 15 mM LDP-NaTC + 5 mM D 3, and (E, F) concentrated D 3 samples.
Figure 4: Mean compression isotherms for (A) the pure micelle components and the LDP mixture, and (B) the two vitamins D 3 and RP.
Figure 5: (A) Adsorption isotherms for LDP hydrated in NaTC-free buffer (○), LDP hydrated in the NaTC buffered solution, and pure NaTC solutions; (B) penetration of NaTC beneath a spread LDP monolayer (surface pressure vs. time).
Figure 6: π-A isotherms (A, B), compressibility moduli (C) and excess free energies (D) for the LDP/D 3 and LDP/RP mixtures.
Acknowledgements: The authors are grateful to Dr Sylvain Trépout (Institut Curie, Orsay, France) for his contribution to cryoTEM experiments and the fruitful discussions. Funding: This study was funded by Adisseo France SAS. Conflicts of interest: DP, ED and VLD are employed by Adisseo. Adisseo markets formulated vitamins for animal nutrition.
42,446
[ "764461", "1293290", "749069", "938088", "18561" ]
[ "180118", "527021", "251210", "251210", "251210", "440261", "414821", "414821", "180118", "527021" ]
01757936
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757936/file/CK2017_Nayak_Caro_Wenger_HAL.pdf
Abhilash Nayak email: [email protected] Stéphane Caro email: [email protected] Philippe Wenger email: [email protected] Local and Full-cycle Mobility Analysis of a 3-RPS-3-SPR Series-Parallel Manipulator Keywords: series-parallel manipulator, mobility analysis, Jacobian matrix, screw theory, Hilbert dimension without any proof, and shown to be five in [4] and [3] with an erroneous proof. Screw theory is used to derive the kinematic Jacobian matrix and the twist system of the mechanism, leading to the determination of its local mobility. I turns out that this local mobility is found to be six in several arbitrary configurations, which indicates a full-cycle mobility equal to six. This full-cycle mobility is confirmed by calculating the Hilbert dimension of the ideal made up of the set of constraint equations. It is also shown that the mobility drops to five in some particular configurations, referred to as impossible output singularities. Introduction A series-parallel manipulator (S-PM) is composed of parallel manipulators mounted in series and has merits of both serial and parallel manipulators. The 3-RPS-3-SPR S-PM is such a mechanism with the proximal module being composed of the 3-RPS parallel mechanism and the distal module being composed of the 3-SPR PM. Hu et al. [START_REF] Hu | Analyses of inverse kinematics, statics and workspace of a novel 3-RPS-3-SPR serial-parallel manipulator[END_REF] analyzed the workspace of this manipulator. Hu formulated the Jacobian matrix for S-PMs as a function of Jacobians of the individual parallel modules [START_REF] Hu | Formulation of unified Jacobian for serial-parallel manipulators[END_REF]. In the former paper, it was assumed that the number of local dof of the 3-RPS-3-SPR mechanism is equal to six, whereas Gallardo et al. found out that it is equal to five [START_REF] Gallardo-Alvarado | Mobility and velocity analysis of a limited-dof series-parallel manipulator[END_REF][START_REF] Gallardo-Alvarado | Kinematics of a series-parallel manipulator with constrained rotations by means of the theory of screws[END_REF]. As a matter of fact, it is not straightforward to find the local mobility of this S-PM due to the third-order twist systems of each individual module. It is established that the 3-RPS PM performs a translation and two non pure rotations about non fixed axes, which induce two translational parasitic motions [START_REF] Hunt | Structural kinematics of in-parallel-actuated robot-arms[END_REF]. The 3-SPR PM also has the same type of dof [START_REF] Nayak | Comparison of 3-RPS and 3-SPR parallel manipulators based on their maximum inscribed singularity-free circle[END_REF]. In addition, these mechanisms are known as zero-torsion mechanisms. When they are mounted in series, the axis about which the torsional motion is constrained, is different for a general configuration of the S-PM. Gallardo et al. failed to consider this fact but only those special configurations in which the axes coincide resulting in a mobility equal to five. This paper aims at clarifying that the full-cycle mobility of the 3-RPS-3-SPR S-PM is equal to six with the help of screw theory and some algebraic geometry concepts. Although the considered S-PM has double spherical joints and two sets of three coplanar revolute joint axes, the proposed methodology to calculate the mobility of the manipulator at hand is general and can be applied to any series-parallel manipulator. The paper is organized as follows : The manipulator under study is described in Section 2. 
The kinematic Jacobian matrix of a general S-PM with multiple modules is expressed in vector form in Section 3. Section 4 presents some configurations of the 3-RPS-3-SPR S-PM with the corresponding local mobility. Section 5 deals with the full-cycle mobility of the 3-RPS-3-SPR S-PM. Manipulator under study The architecture of the 3-RPS-3-SPR S-PM under study is shown in Fig. 1. It consists of a proximal 3-RPS PM module and a distal 3-SPR PM module. The 3-RPS PM is composed of three legs each containing a revolute, a prismatic and a spherical joint mounted in series, while the legs of the 3-SPR PM have these lower pairs in reverse order. Thus, the three equilateral triangular shaped platforms are the fixed base, the coupler and the end effector, coloured brown, green and blue, respectively. The vertices of these platforms are named A i , B i and C i , i = 0, 1, 2. Here after, the subscript 0 corresponds to the fixed base, 1 to the coupler platform and 2 to the end-effector. A coordinate frame F i is attached to each platform such that its origin O i lies at its circumcenter. The coordinate axes, x i points towards the vertex A i , y i is parallel to the opposite side B i C i and by the right hand rule, z i is normal to platform plane. Besides, the circum-radius of the i-th platform is denoted as h i . p i and q i , i = 1, ..., 6 are unit vectors along the prismatic joints while u i and v i , i = 1, ..., 6 are unit vectors along the revolute joint axes. Kinematic modeling of series-parallel manipulators Keeping in mind that the two parallel mechanisms are mounted in series, the end effector twist (angular velocity vector of a body and linear velocity vector of a point on the body) for the 3-RPS-3-SPR S-PM with respect to base can be represented as follows: 0 t 2/0 = 0 t PROX 2/0 + 0 t DIST 2/1 =⇒ 0 ω 2/0 0 v O 2 /0 = 0 ω PROX 2/0 0 v PROX O 2 /0 + 0 ω DIST 2/1 0 v DIST O 2 /1 (1) where 0 t PROX 2/0 is the end effector twist with respect to the base (2/0) due to the proximal module motion and 0 t DIST 2/1 is the end effector twist with respect to the coupler h 1 h 0 h 2 z 1 y 1 x 1 z 0 y 0 x 0 z 2 y 2 x 2 O 2 O 1 O 0 A 0 B 0 C 0 A 1 B 1 C 1 C 2 B 2 A 2 F 0 F 1 F 2 u 2 u 1 u 3 v 2 v 1 v 3 p 1 p 3 p 2 q 1 q 3 3 q 2 PROXIMAL module DISTAL module Fig. 1: A 3-RPS-3-SPR series-parallel manipulator F 0 F n F 1 Module 1 Module 2 Module n F 2 F n-1 Fig. 2: n parallel mechanisms (named modules) arranged in series (2/1) due to the distal module motion. These twists are expressed in the base frame F 0 , hence the left superscript. The terms on right hand side of Eq. ( 1) are not known, but can be expressed in terms of the known twists using screw transformations. To do so, the known twists are first noted down. 
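For readability, Eq. (1) above can be restated in a cleaner form (this is only a rendering of the equation and frame conventions already given in the text, not an additional result):

```latex
% Eq. (1): the end-effector twist w.r.t. the base, expressed in F_0, is the sum of the
% contribution of the proximal module and of the distal module:
{}^{0}\mathbf{t}_{2/0} \;=\; {}^{0}\mathbf{t}^{\mathrm{PROX}}_{2/0} \;+\; {}^{0}\mathbf{t}^{\mathrm{DIST}}_{2/1},
\qquad
{}^{0}\mathbf{t}_{2/0} \;=\;
\begin{bmatrix} {}^{0}\boldsymbol{\omega}_{2/0} \\ {}^{0}\mathbf{v}_{O_{2}/0} \end{bmatrix}
```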
If the proximal and distal modules are considered individually, the twist of their respective moving platforms with respect to their fixed base will be expressed as a function of the actuated joint velocities : A PROX 0 t PROX 1/0 = B PROX ρ13 =⇒         ( 0 r O 1 A 1 × 0 p 1 ) T 0 p T 1 ( 0 r O 1 B 1 × 0 p 2 ) T 0 p T 2 ( 0 r O 1 C 1 × 0 p 3 ) T 0 p T 3 ( 0 r O 1 A 1 × 0 u 1 ) T 0 u T 1 ( 0 r O 1 B 1 × 0 u 2 ) T 0 u T 2 ( 0 r O 1 C 1 × 0 u 3 ) T 0 u T 3         0 ω PROX 1/0 0 v PROX O 1 /0 = I 3×3 0 3×3   ρ1 ρ2 ρ3   (2) A DIST 1 t DIST 2/1 = B DIST ρ46 =⇒         ( 1 r O 2 A 1 × 1 q 1 ) T 1 q T 1 ( 1 r O 2 B 1 × 1 q 2 ) T 1 q T 2 ( 1 r O 2 C 1 × 1 q 3 ) T 1 q T 3 ( 1 r O 2 A 1 × 1 v 1 ) T 1 v T 1 ( 1 r O 2 B 1 × 1 v 2 ) T 1 v T 2 ( 1 r O 2 C 1 × 1 v 3 ) T 1 v T 3         1 ω DIST 2/1 1 v DIST O 2 /1 = I 3×3 0 3×3   ρ4 ρ5 ρ6   (3) where, 0 t PROX 1/0 is the twist of the coupler with respect to the base expressed in F 0 and1 t DIST 2/1 is the twist of the end effector with respect to the coupler expressed in F 1 . A PROX and A DIST are called forward Jacobian matrices and they incorporate the actuation and constraint wrenches of the 3-RPS and 3-SPR PMs, respectively [START_REF] Joshi | Jacobian analysis of limited-DOF parallel manipulators[END_REF]. B PROX and B DIST are called inverse Jacobian matrices and they are the result of the reciprocal product between wrenches of the mechanism and twists of the joints for the 3-RPS and 3-SPR PMs, respectively. ρ13 = [ ρ1 , ρ2 , ρ3 ] T and ρ46 = [ ρ4 , ρ5 , ρ6 ] T are the prismatic joint velocities for the proximal and distal modules, respectively. k r PQ denotes the vector pointing from a point P to point Q expressed in frame F k . Considering Eq. ( 1), the unknown twists 0 t PROX 2/0 and 0 t DIST 2/1 can be expressed in terms of the known twists 0 t PROX 1/0 and 1 t PROX 2/1 using the following screw transformation matrices [START_REF] Murray | A Mathematical Introduction to Robotic Manipulation[END_REF][START_REF] Binaud | The kinematic sensitivity of robotic manipulators to joint clearances[END_REF]. 0 ω PROX 2/0 0 v PROX O 2 /0 = 2 Ad 1 0 ω PROX 1/0 0 v PROX O 1 /0 (4) with 2 Ad 1 = I 3×3 0 3×3 -0 rO 1 O 2 I 3×3 , 0 rO 1 O 2 =   0 -0 z O 1 O 2 0 y O 1 O 2 0 z O 1 O 2 0 -0 x O 1 O 2 -0 y O 1 O 2 0 x O 1 O 2 0   2 Ad 1 is called the adjoint matrix. 0 rO 1 O 2 is the cross product matrix of vector 0 r O 1 O 2 = [ 0 x O 1 O 2 , 0 y O 1 O 2 , 0 z O 1 O 2 ], pointing from point O 1 to point O 2 expressed in frame F 0 . Similarly, for the distal module, the velocities 1 ω DIST 2/1 and 1 v DIST O 2 /1 can be transformed from frame F 1 to F 0 just by multiplying each of them by the rotation matrix 0 R 1 from frame F 0 to frame F 1 : 0 ω DIST 2/1 0 v DIST O 2 /1 = 0 R 1 1 ω DIST 2/1 1 v DIST O 2 /1 with 0 R 1 = 0 R 1 I 3×3 I 3×3 0 R 1 (5) 0 R 1 is called the augmented rotation matrix between frames F 0 and F 1 . Consequently from Eqs. ( 4) and (5), 0 t 2/0 = 2 Ad 1 0 t PROX 1/0 + 0 R 1 1 t DIST 2/1 (6) Note that Eq. ( 6) amounts to the twist equation derived in [START_REF] Hu | Formulation of unified Jacobian for serial-parallel manipulators[END_REF] whereas Gallardo et al. add the twists of individual modules directly without considering the screw transformations. 
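A minimal numerical sketch of the screw transformations of Eqs. (4)-(6) (assuming NumPy; the helper names are ours, not the paper's, and the example call uses arbitrary numbers):

```python
import numpy as np

def skew(r):
    """Cross-product matrix r^ of a 3-vector r."""
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def adjoint_O1_to_O2(r_o1_o2):
    """2Ad1 of Eq. (4): moves the twist reference point from O1 to O2 (same frame F0)."""
    ad = np.eye(6)
    ad[3:, :3] = -skew(r_o1_o2)
    return ad

def augmented_rotation(R01):
    """0R1 of Eq. (5): block-diagonal rotation expressing a twist of frame F1 in F0."""
    T = np.zeros((6, 6))
    T[:3, :3] = R01
    T[3:, 3:] = R01
    return T

def end_effector_twist(t_prox_1_0, t_dist_2_1, r_o1_o2_in_f0, R01):
    """Eq. (6): 0t_{2/0} = 2Ad1 . 0t^PROX_{1/0} + 0R1 . 1t^DIST_{2/1}."""
    return adjoint_O1_to_O2(r_o1_o2_in_f0) @ t_prox_1_0 \
         + augmented_rotation(R01) @ t_dist_2_1

# Example with arbitrary values: compose the two module twists into the end-effector twist
t = end_effector_twist(np.ones(6), np.ones(6), np.array([0.0, 0.0, 0.5]), np.eye(3))
```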
It is noteworthy that Equation [START_REF] Murray | A Mathematical Introduction to Robotic Manipulation[END_REF] in [START_REF] Gallardo-Alvarado | Mobility and velocity analysis of a limited-dof series-parallel manipulator[END_REF] is incorrect, so are any further conclusions based on this equation. Following Eqs. ( 2) and ( 3), with the assumption that the proximal and distal modules are not in a parallel singularity 1 or in other words, matrices A PROX and A DIST are invertible, 0 t 2/0 = 2 Ad 1 A -1 PROX B PROX ρ13 + 0 R 1 A -1 DIST B DIST ρ46 = 2 Ad 1 A -1 PROX B PROX 0 R 1 A -1 DIST B DIST ρ13 ρ46 = J S-PM ρ13 ρ46 (7) J S-PM is the kinematic Jacobian matrix of the 3-RPS-3-SPR S-PM under study. The rank of this matrix provides the local mobility of the S-PM. Equations ( 6), ( 7) and ( 8) can be extended to a series-parallel manipulator with n number of parallel mechanisms, named modules in this paper, in series as shown in Fig. 2. Thus, the twist of the end effector with respect to the fixed base expressed in frame F 0 can be expressed as follows : 0 t n/0 = n ∑ i=1 0 R (i-1) n Ad i (i-1) t M i i/(i-1) = J 6×3n      ρM 1 ρM 2 . . . ρM n      with 0 R i = 0 R i I 3×3 I 3×3 0 R i , n Ad i = I 3×3 0 3×3 -(i-1) rO i O n I 3×3 and J 6×3n = n Ad 1 A -1 M 0 B M 0 0 R 1 n Ad 2 A -1 M 1 B M 1 ... 0 R n A -1 M n M n (8) where, J 6×3n is the 6 × 3n kinematic Jacobian matrix of the n-module hybrid manipulator. M i stands for the i-th module, A M i and B M i are the forward and inverse Jacobian matrices of M i of the series-parallel manipulator, respectively. ρM i is the vector of the actuated prismatic joint rates for the i-th module. Twist system of the 3-RPS-3-SPR S-PM Each leg of the 3-RPS and 3-SPR parallel manipulators are composed of three joints, but the order of the limb twist system is equal to five and hence there exist five twists associated to each leg. Thus, the constraint wrench system of the i-th leg reciprocal to the foregoing twists is spanned by a pure force W i passing through the spherical joint center and parallel to the revolute joint axis. Therefore, the constraint wrench systems of the proximal and distal modules are spanned by three zero-pitch wrenches, namely, 0 W PROX = 3 i=1 0 W i PROX = span 0 u 1 0 r O 2 A 1 × 0 u 1 , 0 u 2 0 r O 2 B 1 × 0 u 2 , 0 u 3 0 r O 2 C 1 × 0 u 3 1 W DIST = 3 i=1 1 W i DIST = span 1 v 1 1 r O 2 A 1 × 1 v 1 , 1 v 2 1 r O 2 B 1 × 1 v 2 , 1 v 3 1 r O 2 C 1 × 1 v 3 (9) Due to the serial arrangement of the parallel mechanisms, the constraint wrench system of the S-PM is the intersection of the constraint wrench systems of each module. Alternatively, the twist system of the S-PM is the direct sum (disjoint union) of the twist systems of each module. Therefore, the nullspace of the 3 × 6 matrix containing the basis screws of 0 W PROX and 1 W DIST leads to the screws that form the basis of the twist system of each module, 0 T PROX = span{ 0 ξ 1 , 0 ξ 2 , 0 ξ 3 } and 1 T DIST = span{ 1 ξ 4 , 1 ξ 5 , 1 ξ 6 }, respectively. The augmented rotation matrix derived in Eq. ( 5) is exploited to ensure that all the screws are expressed in one frame (F 0 in this case). Therefore, the total twist system of the S-PM can be obtained as follows : 0 T S-PM = 0 T PROX 0 T DIST = span{ 0 ξ 1 , 0 ξ 2 , 0 ξ 3 , 0 R 1 1 ξ 4 , 0 R 1 1 ξ 5 , 0 R 1 1 ξ 6 } (10) The order of the twist system 0 T S-PM yields the local mobility of the whole manipulator. 
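The twist-system computation described above can be sketched numerically as follows (assuming NumPy/SciPy): the constraint wrenches of a module are stacked as the rows of a 3×6 matrix, the reciprocal twists are obtained from a null-space computation, and the rank of the union of the two module twist systems gives the local mobility (the same rank test applies to J_S-PM of Eq. (7)). The wrench rows below are placeholders; real ones come from Eq. (9).

```python
import numpy as np
from scipy.linalg import null_space

# Reciprocity between a wrench [f ; m] and a twist [w ; v] reads f.v + m.w = 0, so the
# twists of one module are the null space of its wrench matrix with the force and
# moment blocks swapped.
SWAP = np.block([[np.zeros((3, 3)), np.eye(3)],
                 [np.eye(3),        np.zeros((3, 3))]])

def module_twist_basis(W):
    """Twist system of a module from its 3x6 constraint wrench matrix (rows [f ; r x f])."""
    return null_space(np.atleast_2d(W) @ SWAP)          # 6 x 3 for a third-order system

def local_mobility(W_prox_in_f0, W_dist_in_f1, R01):
    """Order of 0T_S-PM = 0T_PROX (+) 0T_DIST (Eq. 10), distal twists re-expressed in F0."""
    T_prox = module_twist_basis(W_prox_in_f0)
    T_dist = np.kron(np.eye(2), R01) @ module_twist_basis(W_dist_in_f1)
    return np.linalg.matrix_rank(np.hstack([T_prox, T_dist]))

# With placeholder wrench rows this generically returns 6, i.e. full local mobility
print(local_mobility(np.random.rand(3, 6), np.random.rand(3, 6), np.eye(3)))
```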
Some general and singular configurations of the 3-RPS-3-SPR S-PM with h 0 = 2, h 1 = 1 and h 2 = 2 are considered and its mobility is listed based on the rank of the Jacobian and the order of the twist system in Table 1. For general configurations like 2 and 3, the mobility is found to be six. The mobility reduces only when some singularities are encountered. For a special configuration when the three platform planes are parallel to each other as shown in the first row of this table, the rotations of the coupler generate translational motions of the end effector. Yet, the torsional axes of both mechanisms coincide and hence, the mechanism cannot perform any rotation about an axis of vertical direction leading to a mobility equal to five. Moreover, a configuration in which any revolute joint axis in the end effector is parallel to its corresponding axis in the fixed base results in a mobility lower than six for the S-PM. For instance, for the 4th configuration in the table, there exists a constraint force f , parallel to the two parallel revolute joint axes resulting in a five dof manipulator locally. Configurations 1 and 4 are the impossible output singularities as identified by Zlatanov et al. [START_REF] Zlatanov | A unifying framework for classification and interpretation of mechanism singularities[END_REF]. It should be noted that if one of the modules is in a parallel singularity, the motion of the moving-platform of the manipulator becomes uncontrollable. A detailed singularity analysis of series-parallel manipulators will be performed in a future work for a better understanding of their behaviour in singular configurations. Full-cycle mobility of the 3-RPS-3-SPR S-PM The full cycle mobility can be obtained by calculating the Hilbert dimension of the set of constraint equations of the mechanism [START_REF] Husty | A Proposal for a New Definition of the Degree of Freedom of a Mechanism[END_REF]. Two Study transformation matrices are considered : 0 X 1 from F 0 to F 1 and 1 Y 2 from F 1 to F 2 composed of Study parameters x i and y i , i = 0, 1, ..., 7, respectively. Thus, the coordinates of points A j , B j and C j , j = 0, 1, 2 and vectors u k and v k , k = 1, 2, can be represented in F 0 to yield sixteen constraint equations (six for the 3-RPS PM, six for the 3-SPR Number Study parameters and configuration Rank of J S-PM Order of 0 T S-PM 1 x i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : 0.75) y i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : 0.8) F 0 F 1 F 2 5 5 2 x i = (0.35 : -0.9 : 0.25 : 0 : 0.57 : 0.27 : -1.76 : -1.33) y i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : -0.8) F 0 F 1 F 2 6 6 3 x i = (0.99 : 0 : -0.10 : 0 : 0 : 0.21 : 0 : 1.92) y i = (-0.79 : -0.59 : 0.16 : 0 : -0.16 : -0.13 : -1.25 : -2.04) F 0 F 1 F 2 6 6 4 x i = (0.99 : 0 : -0.10 : 0 : 0 : 0.21 : 0 : 1.92) y i = (-0.39 : 0 : 0.92 : 0 : 0 : -1.88 : 0 : 0.12) F 0 F 1 F 2 f 5 5 Table 1: Mobility of the 3-RPS-3-SPR S-PM in different configurations PM, Study quadric and normalization equations for each transformations). It was established that the 3-RPS and the 3-SPR parallel mechanisms have two operation modes each, characterized by x 0 = 0, x 3 = 0 and y 0 = 0, y 3 = 0, respectively [START_REF] Schadlbauer | The 3-RPS parallel manipulator from an algebraic viewpoint[END_REF][START_REF] Nayak | Comparison of 3-RPS and 3-SPR parallel manipulators based on their maximum inscribed singularity-free circle[END_REF]. 
For the S-PM, four ideals of the constraint equations are considered : K 1 , when x 0 = y 0 = 0, K 2 , when x 3 = y 0 = 0, K 3 , when x 0 = y 3 = 0 and K 4 , when x 3 = y 3 = 0. The Hilbert dimension of these ideals over the ring C[h 0 , h 1 , h 2 ] is found to be six, which is therefore the global mobility of the 3-RPS-3-SPR S-PM:
dim K i = 6, i = 1, 2, 3, 4. (11)
Conclusions and future work
In this paper, the full-cycle mobility of a 3-RPS-3-SPR PM was shown to be six. The kinematic Jacobian matrix of the series-parallel manipulator was calculated with the help of screw theory and the result was extended to an arbitrary number n of modules. Moreover, the methodology for the determination of the twist system of series-parallel manipulators was explained. The rank of the Jacobian matrix or the order of the twist system gives the local mobility of the S-PM. Global mobility was calculated as the Hilbert dimension of the ideals of the set of constraint equations. In the future, we intend to solve the inverse and direct kinematics using algebraic geometry concepts and to enumerate all possible singularities of series-parallel mechanisms. Additionally, it is challenging to consider manipulators with n > 2 modules and to work on their trajectory planning, since the number of output parameters is equal to six and lower than the number of actuated joints, which is equal to 3n.
A parallel singularity can be an actuation singularity, a constraint singularity or a compound singularity [START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF]. The pdf file of the Maple sheet with the calculation of the Hilbert dimension can be found here : https://www.dropbox.com/s/3bqsn45rszvgdax/Mobility3RPS3SPR.pdf?dl=0
Acknowledgements This work was conducted with the support of both the École Centrale de Nantes and the French National Research Agency (ANR project number: ANR-14-CE34-0008-01).
18,669
[ "1307880", "10659", "16879" ]
[ "111023", "473973", "481388", "473973", "441569", "473973" ]
01757941
en
[ "info", "scco" ]
2024/03/05 22:32:10
2016
https://amu.hal.science/hal-01757941/file/GalaZiegler_CL4LC-2016.pdf
N Úria Gala email: [email protected] Johannes Ziegler email: [email protected] Reducing lexical complexity as a tool to increase text accessibility for children with dyslexia Lexical complexity plays a central role in readability, particularly for dyslexic children and poor readers because of their slow and laborious decoding and word recognition skills. Although some features to aid readability may be common to many languages (e.g., the majority of 'easy' words are of low frequency), we believe that lexical complexity is mainly language-specific. In this paper, we define lexical complexity for French and we present a pilot study on the effects of text simplification in dyslexic children. The participants were asked to read out loud original and manually simplified versions of a standardized French text corpus and to answer comprehension questions after reading each text. The analysis of the results shows that the simplifications performed were beneficial in terms of reading speed and they reduced the number of reading errors (mainly lexical ones) without a loss in comprehension. Although the number of participants in this study was rather small (N=10), the results are promising and contribute to the development of applications in computational linguistics. Introduction It is a fact that lexical complexity must have an effect on the readability and understandability of text for people with dyslexia [START_REF] Hyönä | Eye fixation patterns among dyslexic and normal readers : effects of word length and word frequency[END_REF]. Yet, many of the existing tools have only focused on the visual presentation of text, such as the use of specific dyslexia fonts or increased letter spacing [START_REF] Zorzi | Extra-large letter spacing improves reading in dyslexia[END_REF]. Here, we investigate the use of text simplification as a tool for improving text readability and comprehension. It should be noted that comprehension problems in dyslexic children are typically a consequence of their problems in basic decoding and word recognition skills. In other words, children with dyslexia have typically no comprehension problems in spoken language. However, when it comes to reading a text, their decoding is so slow and strenuous that it takes up all their cognitive resources. They rarely get to the end of a text in a given time, and therefore fail to understand what they read. Long, complex and irregular words are particularly difficult for them. For example, it has been shown that reading times of children with dyslexia grow linearily with each additional letter [START_REF] Spinelli | Length effect in word naming in reading : role of reading experience and reading deficit in italian readers[END_REF] [START_REF] Ziegler | Developmental dyslexia in different languages : Language-specific or universal[END_REF]. Because children with dyslexia fail to establish the automatic procedures necessary for fluent reading, they tend to read less and less. Indeed, a dyslexic child reads in one year what a normal reader reads in two days [START_REF] Cunningham | What reading does for the mind[END_REF]) -a vicious circle for a dyslexic child because becoming a fluent reader requires extensive training and exposure to written text [START_REF] Ziegler | Modeling reading development through phonological decoding and self-teaching : Implications for dyslexia[END_REF] In this paper, we report an experiment comparing the reading performance of dyslexic children and poor readers on original and simplified corpora. 
To the best of our knowledge, this is the first time that such an experiment is undertaken for French readers. Our aim was to reduce the linguistic complexity of ten standardized texts that had been developped to measure reading speed. The idea was to identify the words and the structures that were likely to hamper readability in children with reading deficits. Our hypothesis was that simplified texts would not only improve reading speed but also text comprehension. A lexical analysis of the reading errors enabled us to identify what kind of lexical complexity was particularly harmful for dyslexic readers and define what kind of features should be taken into account in order to facilitate readability. Experimental Study Procedure and participants We tested the effects of text simplification by contrasting the reading performance of dyslexic children on original and manually simplified texts and their comprehension by using multiple choice questions at the end of each text. The children were recorded while reading aloud. They read ten texts, five original and five simplified in a counter-balanced order. Each text was read in a session with their speech therapists. The texts were presented on a A4 sheet printed in 14 pt Arial font. The experiment took place between december 2014 and march 2015. After each text, each child had to answer the three multiple-choice comprehension questions without looking at the texts (the questions were the same for the original and the simplified versions of the text). Three possible answers were provided in a randomized order : the correct one, a plausible one taking into account the context, and a senseless one. Two trained speech therapists collected the reading times and comprehension scores, annotated the reading errors, and proposed a global analysis of the different errors (cf. 3.1) [START_REF] Brunel | Simplification de textes pour faciliter leur lisibilité et leur compréhension[END_REF]. Ten children aged between 8 and 12 attending regular school took part in the present study (7 male, 3 female). The average age of the participants was 10 years and 4 months. The children had been formally diagnosed with dyslexia through a national reference center for the diagnosis of learning disabilities. Their reading age1 corresponds to 7 years and 6 months, which meant that they had an average reading delay of 2 years and 8 months. Data set The corpora used to test text simplification is a collection of ten equivalent standardized texts (IReST, International Reading Speed Texts2 ). The samples were designed for different languages keeping the same difficulty and linguistic characteristics to assess reading performances in different situations (low vision patients, normal subjects under different conditions, developmental dyslexia, etc.). The French collection consists on nine descriptive texts and a short story (more narrative style). The texts were analyzed using TreeTagger [START_REF] Schmid | Probabilistic part-of-speech tagging using decision trees[END_REF], a morphological analyzer which performs lemmatization and part-of-speech tagging. The distribution in terms of part-of-speech categories is roughly the same in original and simplified texts, although simplified ones have more nouns and less verbs and adjectives. 
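The corpus statistics above were produced with TreeTagger. As an illustration of the same kind of analysis, the following sketch uses spaCy's French model as a stand-in (the model name and the example sentences, taken from the simplification examples later in the paper, are the only assumptions) to obtain the part-of-speech distribution and the number of distinct lemmas of a text.

```python
# Not the authors' pipeline: the paper used TreeTagger; spaCy's French model is used
# here as a stand-in to obtain lemmas and part-of-speech tags.
from collections import Counter
import spacy

nlp = spacy.load("fr_core_news_sm")  # assumes the small French model is installed

def pos_profile(text):
    """Part-of-speech distribution and number of distinct lemmas of a text."""
    tokens = [t for t in nlp(text) if t.is_alpha]
    return Counter(t.pos_ for t in tokens), len({t.lemma_.lower() for t in tokens})

# sentence pair taken from the simplification examples of the paper
original = "Il y a des mouches inoffensives qui ne piquent pas."
simplified = "Il y a des mouches qui ne piquent pas."
for name, txt in (("original", original), ("simplified", simplified)):
    pos_counts, n_lemmas = pos_profile(txt)
    print(name, dict(pos_counts), "distinct lemmas:", n_lemmas)
```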
Table 1 shows the average number of tokens per text and per sentence, the average number of sentences per text, the distribution of main content words and the total number of lemmas : Simplifications Each corpus was manually simplified at three linguistic levels (lexical, syntactic, discursive). It is worth mentioning that, in previous work, text simplifications are commonly considered as lexical and syntactic [START_REF] Carroll | Simplifying Text for Language Impaired readers[END_REF], little attention is generally paid to discourse simplification with a few exceptions. In this study, we decided to perform three kinds of linguistic transformations because we made the hypothesis that all of them would have an effect on the reading performance. However, at the time being, only the lexical simplifications have been analyzed in detail (cf. section 3.2). The manual simplifications were made according to a set of criteria. Because of the absence of previous research on this topic, the criteria were defined by three annotators following the recommendations for readers with dyslexia [START_REF] Ecalle | Des difficultés en lecture à la dyslexie : problèmes d'évaluation et de diagnostic[END_REF] for French and [START_REF] Rello | DysWebxia. A Text Accessibility Model for People with Dyslexia[END_REF] for Spanish. Lexical simplifications. At the lexical level, priority was given to high-frequency words, short words and regular words (high grapheme-phoneme consistency). Content words were replaced by a synonym 3 . The lexical difficulty of a word was determined on the basis of two available resources : Manulex [START_REF] Lété | Manulex : A grade-level lexical database from French elementary-school readers[END_REF] 4 , a grade-level lexical database from French elementary school readers, and FLELex (Franc ¸ois et al., 2014) 5 , a graded lexicon for French as a foreign language reporting frequencies of words across different levels. If the word in the original text had a simpler synonym (an equivalent in a lower level) the word was replaced. For instance, the word consommer ('to consume') has a frequency rate of 3.55 in Manulex, it was replaced by manger ('to eat') that has 30.13. In most of the cases, a word with a higher frequency is also a shorter word : elle l'enveloppe dans ses fils collants pour le garder et le consommer plus tard > ... pour le garder et le manger plus tard ('she wraps it in her sticky net to keep it and eat it later'). Adjectives or adverbs were deleted if there was an agreement among the three annotators, i.e. if it was considered that the information provided by the word was not relevant to the comprehension of the sentence. To give an example, inoffensives ('harmless') was removed in Il y a des mouches inoffensives qui ne piquent pas ('there are harmless flies that do not sting'). In French, lexical replacements often entail morphological or syntactic modifications of the sentence, in these cases the words or the phrases were also modified to keep the grammaticality of the sentence (e.g. determiner and noun agreement) and the same content (meaning). Example, respectively with number and gender agreement : une partie des plantes meurt and quelques plantes meurent ('some plants die'), or la sécheresse ('drought') and au temps sec ('dry wheather'). Syntactic simplifications. 
Structural simplifications imply a modification on the order of the constituents or a modification of the sentence structure (grouping, deletion, splitting [START_REF] Brouwers | Syntactic French Simplification for French[END_REF]). In French, the canonical order of a sentence is SVO, we thus changed the sentences where this order was not respected (for stylistic reasons) : ensuite poussent des buissons was transformed into ensuite des buissons poussent ('then the bushes grow'). The other syntactic reformulations undertaken on the IReST corpora are the following : passive voice to active voice, and present participle to present tense (new sentence through ponctuation or coordinate conjunction). Discursive simplifications. As for transformations dealing with the coherence and the cohesion of the text, given that the texts were short, we only took into account the phenomena of anaphora resolution, i.e. expliciting the antecedent of a pronoun (the entity which it refers to). Although a sentence where the pronouns have been replaced by the antecedents may be stylistically poorer, we made the hypothesis that it is easier to understand. For instance : leurs traces de passage ('their traces') was replaced by les traces des souris ('the mice traces'). The table 2 gives an idea of the transformations performed in terms of quantity. As clearly showed, the majority of simplifications were lexical : 3. The following reference resources were used : the database www.synonymes.com and the Trésor de la Langue Franc ¸aise informatisé (TLFi) http://atilf.atilf.fr/tlf.htm. 4 Results Two different analyses were performed : one for quantitatively measuring the reading times, the number of errors and the comprehension scores. The second one took specifically into account the lexicon : the nature of the words incorrectly read. Behavioral data analysis Reading Times. The significance of the results was assessed with a pairwise t-test (Student) 6 From this table it can be seen that the overall reading times of simplified texts were significantly shorter than the reading times of original texts. While this result can be attributed to the fact that simplified texts were slightly shorter than original texts, it should be emphasized that reading speed (words per minute), which is independent of the length of a text, was significantly greater in simplified texts than in original texts. Number of errors. The total number of errors included : -(A) the total number of skipped words, repeated words (words read twice), interchanged words, line breaks, repeated lines (line read twice) -(B) the total number of words incorrectly read for lexical reasons (the word read is a pseudo-word or a different word) -(C) the total number of words incorrectly read for grammatical reasons (the word read has the same grammatical category (part-of-speech) but varies on number, gender, tense, mode, person) First of all, it should be noted that participants made fewer errors in simplified texts than in original ones (5,5% vs 7,7%) 7 . The table 4 shows the distribution of all the errors : It can be noted that lexical and grammatical errors occurred equally often 8 . Comprehension scores 6. ** significant results with p < 0.01 7. This difference was significant in a t-test (t = 2,3, p < 0.05) 8. A more detailed analysis of these errors is proposed on section 3.2. 
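The paired comparisons reported in Table 3 and in the notes above (each child read both versions, so a paired t-test applies) can be reproduced along the following lines; the per-child reading speeds below are purely illustrative placeholders, not the study data.

```python
# Illustrative reproduction of the paired comparison of Table 3; the per-child reading
# speeds below are made-up placeholders, not the study data (N = 10 children).
import numpy as np
from scipy import stats

speed_original = np.array([55.2, 61.0, 70.3, 58.4, 66.1, 72.5, 60.8, 63.9, 74.0, 66.3])
speed_simplified = np.array([60.1, 66.4, 77.2, 63.0, 72.8, 78.9, 65.5, 70.2, 81.3, 75.6])

t_stat, p_value = stats.ttest_rel(speed_simplified, speed_original)
print(f"paired t-test on reading speed: t = {t_stat:.3f}, p = {p_value:.4f}")
```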
The results of the comprehension questionnaire are better for simplified than for original texts (marginal gain 9 ) as shown on table 5 : These results entail that dyslexic children read the simplified version of the corpus without a significant loss of comprehension. If anything, they showed a marginal increase in comprehension scores for simplified texts. Lexical analysis As we were interested in the lexicon of the corpus, an analysis of the content words (i.e. nouns, verbs, adjectives, adverbs) incorrectly read was undertaken in order to better target the reading pitfalls. From our study, we identified 404 occurrences that were incorrectly read, corresponding to 213 different lemmas (to be precise, there were 235 tokens (22 were inflected variants), i.e. arbre and arbres, or restaient, restent, rester). 404 wrong read words corresponds to 26.81 % of the content words of the corpora, which means that more than one word out of four is incorrectly read. It is worth mentioning that we did not count monosyllabic grammatical words as determiners, pronouns or prepositions, although an important number or errors occurred also on those tokens, i.e. le read la ('the'), ces read des ('these'), pour read par ('for'). We make the hypothesis that the readers concentrate their efforts on decoding content words, and not grammatical ones, because they are those that carry the semantic information and are thus important for text comprehension. Besides, as grammatical words are usually very short and frequent in French, they have a higher number of orthographic neighbours and people with dyslexia tend to confuse short similar words. We distinguished the words that were replaced by a pseudo-word (29.46%) and those replaced by other existing words on French vocabulary (70.37%). These figures can be compared with those obtained by Rello and collaborators [START_REF] Rello | A First Approach to the Creation of a Spanish Corpus of Dyslexic Texts[END_REF]. Non-word errors are pronunciations that do not result in an existing word, real-word errors are pronunciations that result in an incorrect but existing word. Non-word errors appear to be higher in English (83%) and in Spanish (79%), but not in French where real-word errors were clearly a majority 10 : Grammatical variants concern variations on gender and number for nouns, and for person, tense and mode for verbs. Lexical remplacements are words read as if they were other words with orthographical similarities (lieu > île, en fait > enfin, commun > connu, etc.). Morphological variants are words of 9. p < 0.1 10. This finding will deserve more attention in future work. the same morphological family (baisse > basse, malchanceux > malchance). As for orthographical neighbours, we specifically distinguish word pairs where the difference is only of one letter (raisins > raisons, bon > don). Concerning word length for all the mentionned features, 36.88% of the words read were replaced by words of strictly the same length (forment > formant, catégorie > *calégorie), 14.11% were replaced by longer ones (utile > utilisé, suffisant > suffisamment), 49.01% were replaced by shorter ones (nourriture > nature, finie > fine, empilées > empli). The average length of the 404 words incorrectly read is 7.65 characters (the shortest has three characters, bon, and the longest 16, particulièrement). 
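A rough transcription of the error typology above is sketched below: a response absent from the lexicon is a pseudo-word, a response sharing the target's lemma is counted as a grammatical or morphological variant (separating the two would require inflection versus derivation information), and a real word of the same length at edit distance one is an orthographical neighbour. The lexicon and lemma dictionary are assumed resources, for instance built from Manulex or Lexique.

```python
# Rough sketch of the error typology of Tables 6-7. `lexicon` (a set of French word forms)
# and `lemma_of` (word form -> lemma) are assumed resources, e.g. built from Manulex or Lexique.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def classify(target, response, lexicon, lemma_of):
    if response not in lexicon:
        return "pseudo-word"
    if lemma_of.get(target) is not None and lemma_of.get(response) == lemma_of.get(target):
        return "grammatical/morphological variant"   # splitting the two needs inflection data
    if len(target) == len(response) and levenshtein(target, response) == 1:
        return "orthographical neighbour"
    return "lexical replacement"

lexicon = {"jaunes", "jeunes", "attendent", "attaquent", "oublient", "oubliaient"}
lemma_of = {"oublient": "oublier", "oubliaient": "oublier"}
print(classify("jaunes", "jeunes", lexicon, lemma_of))        # orthographical neighbour
print(classify("grenouille", "greniole", lexicon, lemma_of))  # pseudo-word
```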
The average number of orthographical neighbours is 3.24, with eight tokens having more than ten neighbours : bon, bois, basse, foule, fine, fils, garde, sont ('good, wood, low, crowd, thin, thread, keeps, are'). As far as the grammatical categories are concerned, the majority of the errors were on verbs. They concerned grammatical variants of person, tense (past imparfait > present) and mode (present > present participle). The distribution on part-of-speech tags errors is shown on table 8 In French, it is stated that the more frequent (and easier) structure is CV and V. In our results, 58,69% of the words contain this common structure, while 41,31% present a more complex structure (CVC, CVCC, CYC 11 , etc.) We finally analyzed the consistency of grapheme-to-phoneme correspondences which is particularly irregular in French (silent letters, nasal vowels, etc.) 12 . As mentioned above, the average length of the words incorrectly read is 7.65 and their average in number of phonemes is 4.95. This means that the 11. C is a consonant, V is a vowel, Y is a semi-vowel, i.e. [j] in essayait [e-se-je], [w] in doivent [dwav] 12. This is not the case for other languages, e.g. the Spanish writing system has consistent grapheme-to-phoneme correspondences. average difference between the number of letters and the number of real phonemes is 2.71. Only four tokens were regular (same number of phonemes than letters : existe, mortel, partir, plus ('exists, mortal, leave, plus')). The highest difference is 6 in apparaissent, épargneaient ('appear, saved') with 12 letters and 6 phonemes each, and mangeaient ('ate') with 10 letters and 4 phonemes. All the words incorrectly read were thus irregular as far as grapheme-to-phoneme consistency is concerned. 4 Discussion : determining where complexity is According to the literature, complexity for children with dyslexia should be found on long and less frequent words. More precisely, from the analysis of the reading errors obtained on our first pilot-study, the errors mainly occur on verbs and nouns with complex syllable structure, i.e. irregular grapheme-tophoneme correspondences, words with many orthographic neighbours or many morphological family members which are more frequent. Visual similarity is a source of error, specially for the following pairs 13 : In all the replacements we can observe visual similarities. As shown in table 12 ,the word that is actually read tends to be in most of the cases shorter and more frequent 14 than the original one : To sum up, lexical complexity for dyslexic readers in French is to be found on verbs and nouns longer than seven characters, presenting letters with similar equivalents, with complex syllables and irregular phoneme-to-grapheme consistency. Lexical replacements of words incorrectly read should consider shorter and more frequent words and words with higher grapheme-to-phoneme consistency. Conclusion In this paper we have presented the results of a first pilot-study aiming at testing the effects of text simplification on children with dyslexia. From our results, reading speed is increased without a loss of 13. Other possible similar pairs (not found in our corpora) : t/f, u/v, a/o 14. The frequencies have been extracted from the Manulex database (column including the five levels). comprehension. It is worth mentioning that reading errors were lower on simplified texts (in this experiment, simplified texts contained a majority of lexical simplifications). 
The comprehensive analyses of reading errors allow us to propose a detailed description of lexical complexity for dyslexic children. The causes of lexical complexity were mainly related to word length (words longer than seven characters), irregular spelling-to-sound correspondences and infrequent syllable structures. The insights obtained as a result of this first pilot-study are currently being integrated into a model aiming at providing better accessibility of texts for children with dyslexia. We are currently working in a new study with children in French schools to refine the features that are to be taken into account in our model. These results will be integrated into a tool that will automatically simplify texts by replacing complex lexical items with simpler ones. TABLE 1 - 1 IReST corpora features before and after manual simplifications. TABLE 2 - 2 Linguistic transformations on the IReST French corpora. Lexical Simplifications 85.91% Direct replacements 57.04% Removals 13.38% Replacements with morphological changes 4.93% Replacements with syntactical changes 10.56% Syntactic Simplifications 9.86% Reformulations 7.75% Constituent order 2.11% Discursive Simplifications 4.23% Total 100 % . http://www.manulex.com 5. http://cental.uclouvain.be/flelex/ TABLE 3 - 3 . The results are shown on table 3 : Significance of the results obtained. Variables Original texts Simplified texts T value Significance Reading times (sec) 159.94 134.70 -3.528 0.006** Reading speed (words per minute) 64.85 71.10 4.105 0.003** TABLE 4 - 4 Distribution of the types of errors in original and simplfied texts. TABLE 5 - 5 Significance of the results obtained. TABLE 6 - 6 Error typology compared accross languages.The overall error typology that we propose is shown on table 7 : Type of lexical replacement Original word English translations Pseudo-word 119 29.46% grenouille > *greniole frog, * Grammatical variant 135 33.42 % oubliaient > oublient forgot, forget Lexical replacement 84 20.79% attendent > attaquent wait, attack Morphological variant 43 10.64% construction > construire build, to build Orthographical neighbour 23 5.69% jaunes > jeunes yellow, young Total 404 100% TABLE 7 - 7 Error typology. : Part-of-speech tags of tokens incorrectly read VERB 196 48.51 % NOUN 115 28.47% ADJECTIVE 48 11.88% ADVERB 25 6.19% Other categories (determiners excluded) 20 4.95% TABLE 8 - 8 Part-of-speech distribution of the tokens in the corpora.We analyzed the syllabe structure of the 404 tokens. The average number of syllables is 2.09, the distribution is shown on table 9 : Number of syllabs 1 syllab 72 30,64% 2 syllabs 96 40,85% 3 syllabs 47 20,00% 4 syllabs 15 6,38% 5 syllabs 5 2,13% 235 100,00% TABLE 9 - 9 Syllabs distribution of the tokens in the corpora. , as shown on table10 : Syllable structure CV 230 47,03% V 57 11,66% CVC 107 21,88% CVCC, CCVC, CYVC 47 9,61% CYV, CCV, VCC, CVY 34 6,95% VC, YV 10 2,04% VCCC, CCYV, CCVCC 4 0,82% 489 100,00% TABLE 10 - 10 Syllable structure. TABLE 11 - 11 Graphical alternations. TABLE 12 - 12 Lexical replacements typology with frequencies of the tokens. We used standardized reading tests to assess the reading level of each child, i.e. lAlouette[START_REF] Lefavrais | Test de l'alouette[END_REF] and PM47[START_REF] Raven | Pm47 : Standard progressive matrices : Sets a[END_REF] and a small battery of tests to assess general cognitive abilities. 
http://www.vision-research.eu
Acknowledgements We deeply thank the speech therapists Aurore and Mathilde Combes for collecting the reading data and providing a first analysis of the data. We also thank Luz Rello for her valuable insights on parts of the results.
24,271
[ "18582", "12344" ]
[ "862", "849" ]
01757946
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757946/file/CK2017_WuBaiCaro_HAL.pdf
Guanglei Wu Shaoping Bai Stéphane Caro email: [email protected] Transmission Quality Evaluation for a Class of Four-limb Parallel Schönflies-motion Generators with Articulated Platforms Keywords: Schönflies motion, Jacobian, pressure angle, transmission. 1 This paper investigated the motion/force transmission quality for a class of parallel Schönflies-motion generators built with four identical RRΠ RR-type limbs. It turns out that the determinant of the forward Jacobian matrices for this class of parallel robots can be expressed as the scalar product of two vectors, the first vector being the cross product of the four unit vectors along the parallelograms, the second one being related to the rotation of the mobile platform. The pressure angles, derived from the determinants of forward and inverse Jacobians, respectively, are used for the evaluation of the transmission quality of the robots. Four robots are compared based on the proposed method as illustrative examples. Introduction Parallel robots performing Schönflies motions are well adapted to high-speed pickand-place (PnP) operations [START_REF] Pierrot | Optimal design of a 4-dof parallel manipulator: From academia to industry[END_REF][START_REF] Amine | Singularity conditions of 3T1R parallel manipulators with identical limb structures[END_REF], thanks to their lightweight architecture and high stiffness. A typical robot is the Quattro robot [START_REF]Adept Quattro Parallel Robots[END_REF] by Adept Technologies Inc., the fastest industrial robot available. Its latest version can reach an acceleration up to 15 G with a 2 kg payload, allowing to accomplish four standard PnP cycles per second. Its similar version is the H4 robot [START_REF] Pierrot | H4: a new family of 4-dof parallel robots[END_REF] that consists of four identical limbs and an articulated traveling plate [START_REF] Company | Internal singularity analysis of a class of lower mobility parallel manipulators with articulated traveling plate[END_REF]. Recently, the Veloce. robot [START_REF] Veloce | [END_REF] with a different articulated platform that is connected by a screw pair has been developed. Besides, the four-limb robots with single-platform architecture have also been reported [START_REF] Wu | Architecture optimization of a parallel schönflies-motion robot for pick-and-place applications in a predefined workspace[END_REF][START_REF] Xie | Design and development of a high-speed and high-rotation robot with four identical arms and a single platform[END_REF]. Four-limb parallel robots with an articulated mobile platform are displayed in Fig. 1. It is noteworthy that the H4 robot with the modified mobile platform can be mounted vertically instead of the horizontal installation for the reduced mounting space, to provide a rotation around an axis of vertical direction, which is named as "V4" for convenience in the following study. In the design and analysis of a manipulator, its kinematic Jacobian matrix plays an important role, since the dexterity/manipulability of the robot can be evaluated by the condition number of Jacobians as well as the accuracy/torque capability [START_REF] Merlet | Jacobian, manipulability, condition number, and accuracy of parallel robots[END_REF] be-tween the actuators and end-effector. 
On the other hand, a problem usually encountered in this procedure is that the parallel manipulators with mixed input or/and output motions, i.e., compound linear and angular motions, will result in dimensionally inhomogeneous Jacobians, thus, the conventional performance indices associated with the Jacobian matrix, such as norm or condition number, will lack in physical significance [START_REF] Kim | New dimensionally homogeneous Jacobian matrix formulation by three end-effector points for optimal design of parallel manipulators[END_REF]. As far as Schönflies-motion generators are concerned, their endeffector generates a mixed motion of three translations and one rotation (3T1R), for which the terms of the kinematic Jacobian matrix do not have the same units. A common approach to overcome this problem is to introduce a characteristic length [START_REF] Altuzarra | Multiobjective optimum design of a symmetric parallel Schönflies-motion generator[END_REF] to homogenize the Jacobian matrix, whereas, the measurement significantly depends on the choice of the characteristic length that is not unique, resulting in biased evaluation, although a "best" one can be found by optimization technique [START_REF] Angeles | Is there a characteristic length of a rigid-body displacement?[END_REF]. Alternatively, an efficient approach to accommodate this dimensional inhomogeneity is to adopt the concept of the virtual coefficient, namely, the transmission index, which is closely related to the transmission/pressure angle. The pressure angle based transmission index will be adopted in this work. This paper presents a uniform evaluation approach for transmission quality of a family of four-limb 3T1R parallel robots with articulated mobile platforms. The pressure angles, derived from the forward and inverse Jacobians straightforward, are used for the evaluation of the transmission quality of the robots. The defined transmission index is illustrated with four robot counterparts for the performance evaluation and comparison. The global coordinate frame F b is built with the origin located at the geometric center of the base platform. The x-axis is parallel to the segment A 2 A 1 (A 3 A 4 ), and the z-axis is normal to the base-platform plane pointing upwards. The moving coordinate frame F p is attached to the mobile platform and the origin is at the geometric center, where X-axis is parallel to segment C 2 C 1 (C 3 C 4 ). Vectors i, j and k represent the unit vectors of x-, yand z-axis, respectively. The axis of rotation of the ith actuated joint is parallel to unit vector u i = R z (α i )i, where R stands for the rotation matrix, and Manipulator Architecture α 1 = -α 2 = α -π/2, α 3 = -α 4 = β + π/2 . Moreover, unit vectors v i and w i are parallel to the segments A i B i and B i C i , respectively, namely, the unit vectors along the proximal and distal links, respectively. C2 C4 H pair (lead: h) P2 (c) Kinematics and Jacobian Matrix of the Robots The Cartesian coordinates of points A i and B i expressed in the frame F b are respectively derived by a i = R cos η i sin η i 0 T (1) b i = bv i + a i ; v i = R z (α i )R x (θ i )j (2) where η i = (2i -1)π/4, i = 1, ..., 4, and θ i is the input angle. Let the mobile platform pose be denoted by χ χ χ = p T φ T , p = x y z T , the Cartesian coordinates of point C i in frame F b are expressed as c i =    sgn(cos η i )rR z (φ )i + sgn(sin η i )cj + p, Quattro (H4) -sgn(cos η i )rR y (φ )i + sgn(cos η i )cj + p, V4 rR z (η i )i + mod(i, 2)hφ /(2π)k + p, Veloce. 
(3) where sgn(•) stands for the sign function of (•), and mod stands for the modulo operation, h being the lead of the screw pair of the Veloce. robot. The inverse geometric problem has been well documented [START_REF] Pierrot | Optimal design of a 4-dof parallel manipulator: From academia to industry[END_REF]. It can be solved from the following the kinematic constraint equations: (c i -b i ) T (c i -b i ) = l 2 , i = 1, ..., 4 (4) Differentiating Eq. ( 4) with respect to time, one obtains φ rw T i s i + w T i ṗ = θi bw T i (u i × v i ) (5) with w i = c i -b i l ; s i =    sgn(cos η i )R z (φ )j, Quattro (H4) sgn(cos η i )R y (φ )k, V4 mod(i, 2)hφ /(2π)k, Veloce. (6) Equation ( 5) can be cast in a matrix form, namely, A χ χ χ = B θ θ θ (7) with A = e 1 e 2 e 3 e 4 T ; χ χ χ = ẋ ẏ ż φ T (8a) B = diag h 1 h 2 h 3 h 4 ; θ θ θ = θ1 θ2 θ3 θ4 T (8b) where A and B are the forward and inverse Jacobian matrices, respectively, and e i = w T i rw T i s i T ; h i = bw T i (u i × v i ) (9) As along as A is nonsingular, the kinematic Jacobian matrix is obtained as J = A -1 B (10) According to the inverse Jacobian matrix, each limb can have two working modes, which is characterized by the sign "-/+" of h i . In order for the robot not to reach any serial singularity, the mode h i < 0, i = 1, ..., 4, is selected as the working mode for all the robots. Transmission Quality Analysis Our interests are the transmission quality, which is related to the robot Jacobian. The determinant |B| of the inverse Jacobian matrix B is expressed as |B| = 4 ∏ i=1 h i = b 4 4 ∏ i=1 w T i (u i × v i ) (11) sequentially, the pressure angle µ i associated with the motion transmission in the ith limb, i.e., the motion transmitted from the actuated link to the parallelogram, is defined as: µ i = cos -1 w T i (u i × v i ), i = 1, ..., 4 (12) namely, the pressure angle between the velocity of point B i along the vector of u i × v i and the pure force applied to the parallelogram along w i , as shown in Fig. 3(a). where w mn = w m × w n . Taking the Quattro robot as an example, the pressure angle σ amongst limbs, namely, the force transmitted from the end-effector to the passive parallelograms in the other limbs, provided that the actuated joints in these limbs are locked, is derived below: A i B i C i u i v i w i u i ×v i μ i (a) σ = cos -1 (w 14 × w 23 ) T s w 14 × w 23 ( 14 ) wherefrom the geometrical meaning of angle σ can be interpreted as the angle between the minus Y -axis (s is normal to segment P 1 P 2 ) and the intersection line of planes B 1 P 1 B 4 and B 2 P 2 B 3 , where plane B 1 P 1 B 4 (B 2 P 2 B 3 ) is normal to the common perpendicular line between the two skew lines along w 1 and w 4 (w 2 and w 3 ), as depicted in Fig. 3(b). To illustrate the angle σ physically, (w 14 × w 23 ) T s can be rewritten in the following form: (w 14 × w 23 ) T s = w T 14 [w 3 (w 2 • s) -w 2 (w 3 • s)] (15) = w T 23 [w 4 (w 1 • s) -w 1 (w 4 • s)] The angle σ now can be interpreted as the pressure angle between the velocity in the direction of w 1 × w 4 and the forces along w 2 × w 3 imposed by the parallelograms in limbs 2 and 3 to point P, under the assumption that the actuated joints in limbs 1 and 4 are locked simultaneously. The same explanation is applicable for the case when the actuated joints in limbs 2 and 3 are locked. By the same token, the pressure angle for the remaining robot counterparts can be defined. 
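Equations (12) to (17) translate directly into a few lines of code. The sketch below evaluates the motion and force pressure angles and the resulting index for one configuration of a Quattro-like robot; the unit vectors u i, v i, w i and s would normally come from the inverse geometric model of Eq. (4), and the random vectors used here are placeholders rather than a real posture.

```python
# Direct transcription of Eqs. (12)-(17) for one configuration. The unit vectors u_i, v_i,
# w_i and s would come from the inverse geometric model of Eq. (4); random placeholders are
# used here only to make the snippet run.
import numpy as np

def unit(x):
    return x / np.linalg.norm(x)

def transmission_index(u, v, w, s):
    """u, v, w: lists of four unit 3-vectors (actuated joint axis, proximal link, distal link
    of each limb); s: unit vector of Eq. (6). Returns the LTI of Eq. (17)."""
    kappa = min(abs(np.dot(w[i], np.cross(u[i], v[i]))) for i in range(4))  # motion TI, Eq. (12)
    w14, w23 = np.cross(w[0], w[3]), np.cross(w[1], w[2])
    zeta = abs(np.dot(unit(np.cross(w14, w23)), s))                         # force TI, Eq. (14)
    return min(kappa, zeta)

rng = np.random.default_rng(0)
u = [unit(rng.normal(size=3)) for _ in range(4)]
v = [unit(rng.normal(size=3)) for _ in range(4)]
w = [unit(rng.normal(size=3)) for _ in range(4)]
print("LTI at this (placeholder) posture:", transmission_index(u, v, w, np.array([0.0, -1.0, 0.0])))
```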
Consequently, the motion κ and force ζ transmission indices (TI) a prescribed configuration are defined as the minimum value of the cosine of the pressure angles, respectively, κ = min(| cos µ i |), i = 1, ..., 4; ζ = | cos σ | (16) To this end, the local transmission index (LTI) [START_REF] Wang | Performance evaluation of parallel manipulators: Motion/force transmissibility and its index[END_REF] is defined as η = min{κ, ζ } = min{| cos µ i |, | cos σ |} ∈ [0, 1] (17) The larger the value of the index η, the better the transmission quality of the manipulator. This index can also be applicable for singularity measurement, where η = 0 means singular configuration. Transmission Evaluation of PnP Robots In this section, the transmission index over the regular workspace, for the Quattro, H4, Veloce. and V4 robots, will be mapped to analyzed their motion/force transmission qualities. According to the technical parameters of the Quattro robot [START_REF]Adept Quattro Parallel Robots[END_REF], the parameters of the robots' base and mobile platforms are given in Table 1, and other parameters are set to R = 275 mm, b = 375 mm and l = 800 mm, respectively. Table 1 Geometrical parameters of the base and mobile platforms of the four-limb robots. The LTI isocontours of the four robots with different rotation angles of mobile platform are visualized in Fig. 4, from which it is seen that the minimum LTI of the Quattro and Veloce. robots are much higher than those of H4 and V4. Moreover, the volumes of the formers with LTI ≥ 0.7 are larger, to formulate larger operational workspace with high transmission quality. This means that the four-limb robots with a fully symmetrical structure have much better transmission performance than the asymmetric robot counterparts. Another observation is that the transmission performance of the robots decreases with the increasing MP rotation angle. As displayed in Fig. 4(a), the transmission index of the Quattro robot have larger values in the central region, which admits a singularity-free workspace with rotational capability φ = ±45 • . Similarly, Fig. 4(c) shows that the Veloce. robot can also have a high-transmission workspace free of singularity with smaller lead of screw pair, which means that this type of mobile platform allows the robot to have high performance in terms of transmission quality and rotational capability of fullcircle rotation. By contrast, the asymmetric H4 and V4 robots result in relatively small operational workspace and relatively low transmission performance, as illustrated in Figs. 4(b) and 4(d), but similar mechanism footprint ratio with same link dimensions and close platform shapes. Conclusions This paper presents the transmission analysis for a class of four-limb parallel Schönflies-motion robots with articulated mobile platforms, closely in connection with two pressure angles derived from the forward and inverse Jacobian matrices, wherein the determinant of the forward Jacobian matrices was simplified in an elegant manner, i.e., the scalar product between two vectors, through the Laplace expansion. The cosine function of the pressure angles based indices are defined to evaluate the transmission quality. It appears that the robot with the screw-pair-based mobile platform, namely, the Veloce., is the best in terms of transmission quality for any orientation of the mobile-platform. 
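The LTI maps of Fig. 4 can be reproduced by sweeping the moving-platform pose over a grid and recording the index at each point. In the sketch below, lti_at is an assumed callable that solves the inverse kinematics of the chosen robot and evaluates Eq. (17); a dummy stand-in is used so that the snippet runs.

```python
# Sketch of how the LTI maps of Fig. 4 could be produced: sweep the platform position over a
# grid at fixed z and phi and record the index. `lti_at(x, y, z, phi)` is assumed to run the
# robot's inverse kinematics and evaluate Eq. (17); a dummy stand-in is used so the code runs.
import numpy as np

def lti_map_over_grid(lti_at, xs, ys, z, phi):
    return np.array([[lti_at(x, y, z, phi) for x in xs] for y in ys])

xs = np.linspace(-0.6, 0.6, 61)
ys = np.linspace(-0.6, 0.6, 61)
demo = lti_map_over_grid(lambda x, y, z, phi: float(np.exp(-(x**2 + y**2))),
                         xs, ys, z=-0.9, phi=np.deg2rad(45))
print("fraction of grid points with LTI >= 0.7:", float((demo >= 0.7).mean()))
```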
Figure 2(a) depicts a simplified CAD model of the parallel Schönflies-motion generator, which is composed of four identical RRΠRR-type limbs connecting the base and an articulated mobile platform (MP). The generalized base platform and the different mobile platforms of the four robots are displayed in Figs. 2(b) and 2(c), respectively.
Fig. 1 The four-limb PnP robots with different base and mobile platforms: (a) Quattro [1]; (b) H4 [9]; (c) Veloce. [2]; (d) "V4" [12].
Fig. 2 The parameterization of the four-limb robots: (a) simplified CAD model; (b) a generalized base platform; (c) three different mobile platforms for the four robots.
Fig. 3 The pressure angles of the four-limb robots in the motion/force transmission: (a) µ i for all robots; (b) σ for Quattro.
Fig. 4 The LTI isocontours of the robots: (a) Quattro, φ = 0 and φ = 45°; (b) H4, φ = 0 and φ = 45°; (c) Veloce. with φ = 2π, screw lead h = 20 and h = 50; (d) V4, φ = 0 and φ = 45°.
Table 1 (geometrical parameters of the base and mobile platforms):
Quattro: base α = -π/4, β = 3π/4; mobile platform r = 80 mm, c = 70 mm
H4, V4: base α = 0, β = π/2; mobile platform r = 80 mm, c = 70 mm
Veloce.: base α = -π/4, β = 3π/4; mobile platform r = 100 mm, γ = (2i -1)π/4, lead h
Acknowledgements The reported work is partly supported by the Fundamental Research Funds for the Central Universities (DUT16RC(3)068) and by Innovation Fund Denmark (137-2014-5).
15,964
[ "10659" ]
[ "224365", "224365", "481388", "473973", "441569" ]
01757949
en
[ "info" ]
2024/03/05 22:32:10
2018
https://inria.hal.science/hal-01757949/file/HMMSuspicious.pdf
Loïc Hélouët email: [email protected] John Mullins email: [email protected] Hervé Marchand email: [email protected] Concurrent secrets with quantified suspicion A system satisfies opacity if its secret behaviors cannot be detected by any user of the system. Opacity of distributed systems was originally set as a boolean predicate before being quantified as measures in a probabilistic setting. This paper considers a different quantitative approach that measures the efforts that a malicious user has to make to detect a secret. This effort is measured as a distance w.r.t a regular profile specifying a normal behavior. This leads to several notions of quantitative opacity. When attackers are passive that is, when they just observe the system, quantitative opacity is brought back to a language inclusion problem, and is PSPACEcomplete. When attackers are active, that is, interact with the system in order to detect secret behaviors within a finite depth observation, quantitative opacity turns to be a two-player finitestate quantitative game of partial observation. A winning strategy for an attacker is a sequence of interactions with the system leading to a secret detection without exceeding some profile deviation measure threshold. In this active setting, the complexity of opacity is EXPTIME-complete. I. INTRODUCTION Opacity of a system is a property stating that occurrences of runs from a subset S of runs of the system (the secret) can not be detected by malicious users. Opacity [START_REF] Bryans | Opacity generalised to transition systems[END_REF], [START_REF] Badouel | Concurrent secrets[END_REF] can be used to model several security requirements like anonymity and non-interference [START_REF] Goguen | Security policies and security models[END_REF]. In the basic version of non-interference, actions of the system are divided into high (classified) actions and low (public) ones, and a system is non-interferent iff one can not infer from observation of low operations that highlevel actions were performed meaning that occurrence of high actions cannot affect "what an user can see or do". This implicitly means that users have, in addition to their standard behavior, observation capacities. Non-interference is characterized as an equivalence between the system as it is observed by a low-level user and a ideally secure version of it where high-level actions and hence any information flow, are forbidden. This generic definition can be instantiated in many ways, by considering different modeling formalisms (automata, Petri nets, process algebra,...), and equivalences (language equivalence, bisimulation(s),...) representing the discriminating power of an attacker. (see [START_REF] Sabelfeld | Language-based information-flow security[END_REF] for a survey). Opacity generalizes non-interference. The secrets to hide in a system are sets of runs that should remain indistinguishable from other behaviors. A system is considered as opaque if, as observed, one can not deduce that the current execution belongs to the secret. In the standard setting, violation of opacity is a passive process: attackers only rely on their partial observation of runs of the system. Checking whether a system is opaque is a PSPACE-complete problem [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF]. As such, opacity does not take in account information that can be gained by active attackers. 
Indeed, a system may face an attacker having the capability not only to observe the system but also to interact with him in order to eventually disambiguate observation and detect a secret. A second aspect usually ignored is the quantification of opacity : the more executions leaking information are costly for the attacker, the more secure is the system. In this paper we address both aspects. A first result of this paper is to consider active opacity, that is opacity in a setting where attackers of a system perform actions in order to collect information on secrets of the system. Performing actions in our setting means playing standard operations allowed by the system, but also using observation capacities to infer whether a sensible run is being performed. Checking opacity in an active context is a partial information reachability game, and is shown EXPTIME-complete. We then address opacity in a quantitative framework, characterizing the efforts needed for an attacker to gain hidden information with a cost function. Within this setting, a system remains opaque if the cost needed to obtain information exceeds a certain threshold. This cost is measured as a distance of the attacker's behavior with respect to a regular profile, modeling that deviations are caught by anomaly detection mechanisms. We use several types of distances, and show that quantitative and passive opacity remains PSPACE-complete, while quantitative and active opacity remains EXPTIMEcomplete. Opacity with passive attackers has been addressed in a quantitative setting by [START_REF] Bérard | Quantifying opacity[END_REF]. They show several measures for opacity. Given a predicate φ characterizing secret runs, a first measure quantifies opacity as the probability of a set of runs which observation suffice to claim the run satisfies φ. A second measure considers observation classes (sets of runs with the same observation), and defines the restrictive probabilistic opacity measure as an harmonic mean (weighted by the probability of observations) of probability that φ is false in a given observation class. Our setting differs from the setting of [START_REF] Bérard | Quantifying opacity[END_REF] is the sense that we do not measure secrecy as the probability to leak information to a passive attacker, but rather quantify the minimal efforts required by an active attacker to obtain information. The paper is organized as follows: Section II introduces our model for distributed systems, and the definition of opacity. Section III recalls the standard notion of opacity usually found in the literature and its PSPACE-completeness, shows how to model active attackers with strategies, and proves that active opacity can be solved as a partial information game over an exponential size arena, and is EXPTIME-complete. Section IV introduces quantification in opacity questions, by measuring the distance between the expected behavior of an agent and its current behavior, and solves the opacity question with respect to a bound on this distance. Section V enhances this setting by discounting distances, first by defining a suspicion level that depends on evolution of the number of errors within a bounded window, and then, by averaging the number of anomalies along runs. The first window-based approach does not change the complexity classes of passive/active opacity, but deciding opacity for averaged measures is still an open problem. II. MODEL Let Σ be an alphabet, and let Σ ⊆ Σ. A word of Σ * is a sequence of letters w = σ 1 . . . σ n . 
We denote by w -1 the mirror of w, i.e., w -1 = σ n . . . σ 1 .The projection of w on Σ ⊆ Σ is defined by the morphism π Σ : Σ * → Σ * defined as π Σ ( ) = , π Σ (a.w) = a.π Σ (w) if a ∈ Σ and π Σ (a.w) = π Σ (w) otherwise. The inverse projection of w is the set of words which projection is w, and is defined as π -1 Σ (w) = {w ∈ Σ * | π Σ (w ) = w}. For a pair of words w, w defined over alphabets Σ and Σ , the shuffle of w and w is denoted by w||w and is defined as the set of words w||w = {w | π Σ (w ) = w ∧ π Σ (w ) = w }. The shuffle of two languages L 1 , L 2 is the set of words obtained as a shuffle of a words of L 1 with a word of L 2 . Definition 1: A concurrent system S = (A, U ) is composed of: • A finite automaton A = (Σ, Q, -→, q 0 , F ) • A finite set of agents U = u 1 , . . . u n , where each u i is a tuple u i = (A i , P i , S i , Σ i o ), where A i , P i , S i are automata and Σ i o an observation alphabet. Agents behave according to their own logic, depicted by a finite automaton A i = (Σ i , Q i , -→ i , q i 0 , F i ) over an action alphabet Σ i . We consider that agents moves synchronize with the system when performing their actions. This allows modeling situations such as entering critical sections. We consider that in A and in every A i , all states are accepting. This way, every sequence of steps of S that conforms to transition relations is a behavior of S. An agent u i observes a subset of actions, defined as an observation alphabet Σ i o ⊆ Σ 1 . Every agent u i possesses a secret, defined as a regular language L(S i ) recognized by automaton S i = (Σ, Q S i , -→ S i , q S 0,i , F S i ). All states of secret automata are not accepting, i.e. some behaviors of an agent u i are secret, some are not. We equip every agent u i with a profile P i = (Σ, Q P i , δ P i , s P 0,i , F P i ), that specifies its "normal" behavior. The profile of an agent is 1 A particular case is Σ i o = Σ i , meaning that agent u i observes only what it is allowed to do. prefix-closed. Hence, F P i = Q P i , and if w.a belongs to profile L(P i ) then w is also in user u i 's profile. In profiles, we mainly want to consider actions of a particular agent. However, for convenience, we define profiles over alphabet Σ, and build them in such a way that L( P i ) = L(P i ) (Σ \ Σ i ) * . We assume that the secret S i of an user u i can contain words from Σ * , and not only words in Σ * i . This is justified by the fact that an user may want to hide some behavior that are sensible only if they occur after other agents actions (u 1 plays b immediately after a was played by another agent). For consistency, we furthermore assume that Σ i ⊆ Σ i o , i.e., an user observes at least its own actions. Two users may have common actions (i.e., Σ i ∩ Σ j = ∅), which allows synchronizations among agents. We denote by Σ U = ∪ i∈U Σ i the possible actions of all users. Note that Σ U ⊆ Σ as the system may have its own internal actions. Intuitively, in a concurrent system, A describes the actions that are feasible with respect to the current global state of the system (available resources, locks, access rights,...). The overall behavior of the system is a synchronized product of agents behaviors, intersected with L(A). Hence, within a concurrent system, agents perform moves that are allowed by their current state if they are feasible in the system. If two or more agents can perform a transition via the same action a, then all agents that can execute a move conjointly to the next state in their local automaton. 
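The synchronisation rule just described, and formalised below, can be transcribed naively as follows; automata are represented as dictionaries mapping (state, action) pairs to sets of successor states, which is a simplification for illustration only.

```python
# Naive transcription of the synchronisation rule just described (formalised below). Automata
# are dictionaries mapping (state, action) to sets of successor states; this is a simplified
# illustration, not the authors' construction.
from itertools import product

def step(config, action, system_delta, agent_deltas, agent_alphabet):
    """config = (q, q1, ..., qn). Returns the set of successor configurations on `action`."""
    q, locals_ = config[0], config[1:]
    succs = set()
    for q_next in system_delta.get((q, action), set()):
        if action not in agent_alphabet:                  # internal action of the system
            succs.add((q_next,) + tuple(locals_))
            continue
        enabled = [agent_deltas[i].get((qi, action)) for i, qi in enumerate(locals_)]
        if not any(enabled):                              # no agent can execute `action` here
            continue
        options = [succ if succ else {qi} for succ, qi in zip(enabled, locals_)]
        for combo in product(*options):
            succs.add((q_next,) + combo)
    return succs

system = {("s0", "a"): {"s1"}}
agents = [{("p0", "a"): {"p1"}}, {}]                      # the second agent never fires `a`
print(step(("s0", "p0", "r0"), "a", system, agents, {"a"}))   # {('s1', 'p1', 'r0')}
```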
More formally, a configuration of a concurrent system is a tuple C = (q, q 1 , . . . , q |U | ), where q ∈ Q is a state of A and each q i ∈ Q i is a local state of user u i . The first component of a configuration C is denoted state(C). We consider that the system starts in an initial configuration C 0 = (q 0 , q 1 0 , . . . , q |U | 0 ). A move from a configuration C = (q, q 1 , . . . , q |U | ) to a configuration C = (q , q 1 , . . . , q |U | ) via action a is allowed • if a ∈ Σ U and (q, a, q ) ∈-→, or • if a ∈ Σ U , (q, a, q ) ∈-→, there exists at least one agent u i such that (q i , a, q i ) ∈-→ i , and for every q j such that some transition labeled by a is firable from q j , (q j , a, q j ) ∈-→ j . The local state of agents that cannot execute a remains unchanged, i.e., if agent u k is such that a ∈ Σ k and (q j , a, q j ) ∈-→ j , then q k = q k . A run of S = (A, U ) is a sequence of moves ρ = C 0 a1 -→ C 1 . . . C k . Given a run ρ = C 0 a1 -→ C 1 . . . a k -→ C k , we denote by l(ρ) = a 1 • • • a k its corresponding word. The set of run of S is denoted by Runs(S), while the language L(S) = l(Runs(S)) is the set of words labeling runs of S. We denote by Conf (S) the configurations reached by S starting from C 0 . The size |S| of S is the size of its set of configurations. Given an automaton A, P i , or S i , we denote by δ(q, A, a) (resp δ(q, P i , a), δ(q, S i , a)) the states that are successors of q by a transition labeled by a, i.e. δ(q, A, a) = {q | q a -→ q }. This relation extends to sets of states the obvious way, and to words, i.e. δ(q, A, w.a) = δ(δ(q, A, w), A, a) with δ(q, A, ) = {q}. Last, for a given sub-alphabet Σ ⊆ Σ and a letter a ∈ Σ , we define by ∆ Σ (q, A, a) the set of states that are reachable from q in A by sequences of moves which observation is a. More formally, ∆ Σ (q, A, a) = {q | ∃w ∈ (Σ \ Σ ) * , q ∈ δ(q, A, w.a)}. III. OPACITY FOR CONCURRENT SYSTEMS The standard Boolean notion of opacity introduced by [START_REF] Bryans | Opacity generalised to transition systems[END_REF], [START_REF] Badouel | Concurrent secrets[END_REF] says that the secret of u i in a concurrent system S is opaque to u j if, every secret run of u i is equivalent with respect to u j 's observation to a non-secret run. In other words, u j cannot say with certainty that the currently executed run belongs to L(S i ). Implicitly, opacity assumes that the specification of the system is known by all participants. In the setting of concurrent system with several agents and secrets, concurrent opacity can then be defined as follows: Definition 2 (Concurrent Opacity): A concurrent system S is opaque w.r.t. U (noted U -Opaque) if ∀i = j, ∀w ∈ L(S i ) ∩ L(S), π -1 Σ j o (π Σ j o (w)) ∩ L(S) L(S i ) Clearly, U -opacity is violated if one can find a pair of users u i , u j and a run labeled by a word w ∈ L(S i )∩L(S) such that π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ), i.e. after playing w, there in no ambiguity for u j on the fact that w is a run contained in u i s secret. Unsurprisingly, checking opacity can be brought back to a language inclusion question, and is hence PSPACE-complete. This property was already shown in [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF] with a slightly different model (with a single agent j which behavior is Σ * j and a secret defined as a sub-language of the system A). Theorem 3 ( [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF]): Deciding whether S is U -opaque is PSPACE-complete. 
Proof:[sketch] The proof of PSPACE-completeness consists in first showing that one can find a witness run in polynomial space. One can chose a pair of users u i , u j in logarithmic space with respect to the number of users, and then find a run after which u j can estimate without error that u i is in a secret state. Then, an exploration has to maintain u j 's estimation of possible configuration of status of u i 's secret with |Conf (S)| * |S i | bits. It is also useless to consider runs of length greater than 2 |Conf (S)| * |Si| . So finding a witness is in NPSPACE and using Savitch's lemma [START_REF] Walter | Relationships between nondeterministic and deterministic tape complexities[END_REF] and closure of PSPACE by complementation, opacity is in PSPACE. Hardness comes from a reduction from universality question for regular languages. We refer interested readers to appendix for a complete proof. The standard notion of opacity considers accidental leakage of secret information to an honest user u j that is passive, i.e. that does not behave in order to obtain this information. One can also consider an active setting, where a particular agent u j behaves in order to obtain information on a secret S i . In this setting, one can see opacity as a partial information reachability game, where player u j tries to reach a state in which his estimation of S i s states in contained in F S i . Following the definition of non-interference by Goguen & Messeguer [START_REF] Goguen | Security policies and security models[END_REF], we also equip our agents with observation capacities. These capacities can be used to know the current status of resources of the system, but not to get directly information on other agents states. We define a set of atomic propositions Γ, and assign observable propositions to each state of A via a map O : Q → 2 Γ . We next equip users with additional actions that consist in asking for the truth value of a particular proposition γ ∈ Γ. For each γ ∈ Γ, we define action a γ that consists in checking the truth value of proposition γ, and define Σ Γ = {a γ | γ ∈ Γ}. We denote by a γ (q) the truth value of proposition γ in state q, i.e., a γ (q) = tt if γ ∈ O(q) and ff otherwise. Given a set of states X = {q 1 , . . . q k }, the refinement of X with assertion γ = v where v ∈ {tt, ff } is the set X \γ=v = {q i ∈ X | a γ (q i ) = v}. Refinement easily extends to a set of configurations CX ⊆ Conf (S) with CX \γ=v = {C ∈ CX | a γ (state(C)) = v}. We allow observation from any configuration for every user, hence a behavior of a concurrent system with active attackers shuffles behaviors from L(S), observation actions from Σ * Γ and the obtained answers. To simplify notations, we assume that a query and its answer are consecutive transitions. The set of queries of a particular agent u j will be denoted by Σ Γ j . Adding the capacity to observe states of a system forces to consider runs of S containing queries followed by their answers instead of simply runs over Σ * . We will denote by S Γ the system S executed in an active environment Formally, a run of S Γ in an active setting is a sequence ρ = C 0 e1 -→ S Γ C 1 . . . e k -→ S Γ C k where C 0 , . . . , C k are usual configurations, each e i is a letter from Σ ∪ Σ Γ ∪ {tt, ff }, such that • if e k ∈ Σ Γ then C k+1 ∈ δ(C k , S, e k ). • if e k = a γ ∈ Σ Γ , then e k+1 = a γ (q k-1 )2 , and C k-1 = C k+1 . Intuitively, testing the value of a proposition does not change the current state of the system. 
Furthermore, playing action a γ from C k-1 leaves the system in the same configuration, but remembering that an agent just made the query a γ . We will write C k = C k-1 (a γ ) to denote this situation. The semantics of S Γ can be easily obtained from that of S. It can be defined as a new labeled transition system LT S Γ (S) = (Conf (S Γ ), -→ S Γ , C 0 ) over alphabet Σ ∪ Σ Γ ∪ {tt, ff } recognizing runs of S Γ . If LT S(S) = (Conf (S), -→) is an LTS defining runs of S, then LT S(S Γ ) can be built by adding a loop of the form C k aγ -→ S Γ C k (a γ ) aγ (q k ) -→ S Γ C k from each configuration C k in Conf (S). We denote by Runs(S Γ ) the set of runs of system S in an active setting with observation actions Σ Γ . As usual, ρ is a secret run of agent u i iff l(ρ) is recognized by automaton S i . The observation of a run ρ by user u j is a word l j (ρ) obtained by projection of l(ρ) on Σ j ∪ Σ Γ j ∪ {tt, ff }. Hence, an observation of user j is a word l j (ρ) = α 1 . . . α k where α m+1 ∈ {tt, ff } if α m ∈ Σ Γ j (α m is a query followed by the corresponding answer). Let w ∈ (Σ j .(Σ Γ j .{tt, ff }) * ) * . We denote by l -1 j (w) the set of runs of S Γ which observation by u j is w. A malicious agent can only rely on his observation of S to take the decisions that will provide him information on other users secret. Possible actions to achieve this goals are captured by the notion of strategy. Definition 4: A strategy for an user u j is a map µ j from Runs(S Γ ) to Σ j ∪ Σ Γ j ∪ { }. We assume that strategies are observation based, that is if l j (ρ) = l j (ρ ), then µ j (ρ) = µ j (ρ ). A run ρ = C 0 e1 -→ C 1 . . . C k conforms to strategy µ j iff, ∀i, µ j (l(C 0 -→ . . . C i )) = implies e i+1 = µ j (l(C 0 -→ . . . C i )) or e i+1 ∈ Σ j ∪ Σ Γ j . Intuitively, a strategy indicates to player u j the next move to choose (either an action or an observation or nothing. Even if a particular action is advised, another player can play before u j does. We will denote by Runs(S, µ j ) the runs of S that conform to µ j . Let µ j be a strategy of u j and ρ ∈ Runs(S Γ ) be a run ending in a configuration C = (q, q 1 , . . . q |U | ), we now define the set of all possible configurations in which S can be after observation l j (ρ) under strategy µ j . It is inductively defined as follows: • ∆ µj (X, S Γ , ) = X for every set of configurations X • ∆µ j (X, S Γ , w.e) =                ∆ Σ j o (∆µ j (X, S Γ , w), S Γ , e) if e ∈ Σj ∆µ j (X, S Γ , w) if e = aγ ∈ Σ Γ j , ∆µ j (X, S Γ , w) \γ(q) if e ∈ {tt, ff } and w = w .aγ for some γ ∈ Γ Now, ∆ µj ({C 0 }, S Γ , w) is the estimation of the possible set of reachable configurations that u j can build after observing w. We can also define a set of plausible runs leading to observation w ∈ (Σ j o ) * by u j . A run is plausible after w if its observation by u j is w, and at every step of the run ending in some configuration C k a test performed by u j refine u j s estimation to a set of configuration that contain C k . More formally, the set of plausible runs after w under strategy µ j is P l j (w) = {ρ ∈ Runs(S, µ j ) | l j (ρ) = w ∧ ρ is a run from C 0 to a configuration C ∈ ∆ µj ({C 0 }, S Γ , w)}. We now redefine the notion of opacity in an active context. A strategy µ j of u j to learn S i is not efficient if despite the use of µ j , there is still a way to hide S i for an arbitrary long time. In what follows, we assume that there is only one attacker of the system. 
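To fix intuitions, the estimation ∆_{µ_j} can be maintained online with a simple worklist procedure: actions invisible to u_j are closed transitively, an observable letter makes the estimate progress by one visible step, and the answer to a query a_γ refines it. The following Python sketch illustrates this update; the explicit encoding of the transition system as dictionaries and the helper names are illustrative assumptions, not part of the formal model.

def unobservable_closure(succ, estimate, observable):
    """States reachable from `estimate` using only actions that u_j does not observe.
    succ maps a configuration to its list of (action, successor) pairs."""
    seen, stack = set(estimate), list(estimate)
    while stack:
        q = stack.pop()
        for action, q2 in succ.get(q, []):
            if action not in observable and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen

def update_on_letter(succ, estimate, letter, observable):
    """New estimate after u_j observes `letter`: closure on invisible actions, then one visible step,
    as in the definition of Delta above."""
    before = unobservable_closure(succ, estimate, observable)
    return {q2 for q in before for action, q2 in succ.get(q, []) if action == letter}

def refine_on_answer(labels, estimate, gamma, answer):
    """Refinement of the estimate once the query a_gamma returned tt (True) or ff (False);
    labels maps a configuration to the set of atomic propositions true in its state."""
    return {q for q in estimate if (gamma in labels.get(q, set())) == answer}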
Definition 5 (Opacity with active observation strategy): A secret S_i is opaque for any observation strategy to user u_j in a system S iff there exist no strategy µ_j and no bound K ∈ N such that ∀ρ ∈ Runs(S, µ_j), ρ has a prefix ρ_1 of size ≤ K with l(Pl_j(l_j(ρ_1))) ⊆ L(S_i). A system S is opaque for any observation strategy iff ∀i ≠ j, secret S_i is opaque for any observation strategy of u_j. Let us comment on the differences between passive (def. 2) and active opacity (def. 5). A system that is not U-opaque may leak information, while a system that is not opaque with active observation strategy cannot avoid leaking information if u_j implements an adequate strategy. U-opaque systems are not necessarily opaque with strategies, as active tests give additional information that can disambiguate state estimation. However, if a system is U-opaque, then strategies that do not use disambiguation capacities do not leak secrets. Note also that a non-U-opaque system may leak information in more runs under an adequate strategy. Conversely, a non-opaque system can be opaque in an active setting, as the system can delay leakage of information for an arbitrarily long time. Based on the definition of active opacity, we can state the following result: Theorem 6: Given a system S = (A, U) with n agents, a set of secrets S_1, . . . , S_n, observation alphabets Σ_o^1, . . . , Σ_o^n and observation capacities Σ_Γ^1, . . . , Σ_Γ^n, deciding whether S is opaque with active observation strategies is EXPTIME-complete. Proof:[sketch] An active attacker u_j can claim that the system is executing a run ρ that is secret for u_i iff it can claim with certainty that ρ is recognized by S_i. This can be achieved by maintaining an estimation of the system's current configuration, together with an estimation of S_i's possible states. We build an arena with nodes of the form n = (b, C, s, ES). Each node contains a player's name b (0 or 1): intuitively, 0 nodes are nodes where all agents but u_j can play, and 1 nodes are nodes where only agent u_j plays. Nodes also contain the current configuration C of S, the current state s of S_i, and an estimation ES by u_j of the possible configurations of the system together with the secret's current state, ES_j = {(C_1, s_1), . . . , (C_k, s_k)}. The attacker starts with an initial estimation ES_0 = {(C_0, q_{0,i}^S)}. Then, at each occurrence of an observable move, the state estimation is updated as follows: given a letter a ∈ Σ_o^j, for every pair (C_k, s_k), we compute the set of pairs (C'_k, s'_k) such that there exists a run from C_k to C'_k that is labeled by a word w accepted from s_k and leading to s'_k in S_i, and such that l_j(w) = a. The new estimation is the union of all pairs computed this way. Moves in this arena represent actions of player u_j (from nodes where b = 1) and actions from the rest of the system (see appendix for details). Obviously, this arena is of exponential size w.r.t. the size of configurations of S. A node n = (b, C, s, ES) is not secret if s ∉ F_{S_i}, and secret otherwise. A node is ambiguous if there exist (C_p, s_p) and (C_m, s_m) in ES such that s_p ∈ F_{S_i} and s_m ∉ F_{S_i}. If the restriction of ES to its second component is contained in F_{S_i}, n leaks secret S_i. The set of winning nodes in the arena is the set of nodes that leak S_i. Player u_j can take decisions only from its state estimation, and wins the game if it can reach a node in the winning set. This game is hence a partial information reachability game.
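Once beliefs are stored in the nodes, deciding whether the attacker has a winning strategy reduces to a plain reachability game, which can be solved by the classical attractor computation. The sketch below is only an illustration over an explicit arena; the encoding of nodes and moves as Python dictionaries is an assumption and not the construction used in the proof.

def attractor(nodes, moves, owner, targets):
    """Nodes from which player 1 (the attacker) can force a visit to `targets`.
    moves maps a node to the list of its successors; owner maps a node to 0 or 1."""
    preds = {n: [] for n in nodes}
    pending = {}                      # for player-0 nodes: successors not yet known to be winning
    for n in nodes:
        pending[n] = len(moves.get(n, []))
        for m in moves.get(n, []):
            preds[m].append(n)
    attr = set(targets)
    stack = list(targets)
    while stack:
        m = stack.pop()
        for n in preds[m]:
            if n in attr:
                continue
            if owner[n] == 1:         # the attacker picks one successor leading to the attractor
                attr.add(n)
                stack.append(n)
            else:                     # the rest of the system must have no escape
                pending[n] -= 1
                if pending[n] == 0:
                    attr.add(n)
                    stack.append(n)
    return attr

# With beliefs encoded in nodes as above, S_i is not opaque to an active attacker u_j
# iff the initial node belongs to attractor(nodes, moves, owner, leaking_nodes).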
Usually, solving such games requires computing an exponentially larger arena containing players beliefs, and then apply polynomial procedures for a perfect information reachability game. Here, as nodes already contain beliefs, there is no exponential blowup, and checking active opacity is hence in EXPTIME. For the hardness part, we use a reduction from the problem of language emptiness for alternating automata to an active opacity problem. (see appendix for details) Moving from opacity to active opacity changes the complexity class from P SP ACE-complete to EXP T IM E-complete. This is due to the game-like nature of active opacity. However, using observation capacities does not influence complexity: even if an agent u j has no capacity, the arena built to verify opacity of S i w.r.t. u j is of exponential size, and the reduction from alternating automata used to prove hardness does not assume that observation capacities are used. IV. OPACITY WITH THRESHOLD DISTANCES TO PROFILES So far, we have considered passive opacity, i.e. whether a secret can be leaked during normal use of a system, and active opacity, i.e. whether an attacker can force secret leakage with an appropriate strategy and with the use of capacities. In this setting, the behavior of agents is not constrained by any security mechanism. This means that attackers can perform illegal actions with respect to their profile without being discovered, as long as they are feasible in the system. We extend this setting to systems where agents behaviors are monitored by anomaly detection mechanisms, that can raise alarms when an user's behavior seems abnormal. Very often, abnormal behaviors are defined as difference between observed actions and a model of normality, that can be a discrete event model, a stochastic model,.... These models or profiles can be imposed a priori or learnt from former executions. This allows for the definition of profiled opacity, i.e. whether users that behave according to predetermined profile can learn a secret, and active profiled opacity, i.e. a setting where attackers can perform additional actions to refine their knowledge of the system's sate and force secret leakage in a finite amount of time without leaving their normal profile. One can assume that the behavior of an honest user u j is a distributed system is predictable, and specified by his profile P j . The definitions of opacity (def. 2) and active opacity (def. 5) do not consider these profiles, i.e. agents are allowed to perform legally any action allowed by the system to obtain information. In our opinion, there is a need for a distinction between what is feasible in a system, and what is considered as normal. For instance, changing access rights of one of his file by an agent should always be legal, but changing access rights too many times within a few seconds should be considered as an anomaly. In what follows, we will assume that honest users behave according to their predetermined regular profile, and that deviating from this profile could be an active attempt to break the system's security. Yet, even if an user is honest, he might still have possibilities to obtain information about other user's secret. This situation is captured by the following definition of opacity wrt a profile. Definition 7: A system S = (A, U ) is opaque w.r.t. profiles P 1 , . . . 
P n if ∀i = j, ∀w ∈ L(S i ) ∩ L(S), w ∈ L(P j ) ⇒ π -1 Σ j o (π Σ j o (w)) ∩ L(S) L(S i ) Intuitively, a system is opaque w.r.t profiles of its users if it does not leak information when users stay within their profiles. If this is not the case, i.e. when w ∈ L(P j ), then one can assume that an anomaly detection mechanism that compares users action with their profiles can raise an alarm. Definition 7 can be rewritten as ∀i = j, ∀w ∈ L(S i ) ∩ L(P j ) ∩ L(S), π -1 Σ j o (π Σ j o (w)) ∩ L(S) L(S i ) Hence, P SP ACEcompleteness of opacity in Theorem 3 extends to opacity with profiles: it suffices to find witness runs in L(S)∩L(S i )∩L(P j ). Corollary 8: Deciding whether a system S is opaque w.r.t. a set of profiles P 1 , . . . P n is PSPACE complete. If a system is U-opaque, then it is opaque w.r.t its agents profiles. Using profiles does not change the nature nor complexity of opacity question. Indeed, opacity w.r.t. a profile mainly consists in considering regular behaviors in L(P j ) instead of L(A j ). In the rest of the paper, we will however use profiles to measure how much users deviate from their expected behavior and quantify opacity accordingly. One can similarly define a notion of active opacity w.r.t. profiles, by imposing that choices performed by an attacker are actions that does not force him to leave his profile. This can again be encoded as a game. This slight adaptation of definition 5 does not change the complexity class of the opacity question (as it suffices to remember in each node of the arena a state of the profile of the attacker). Hence active opacity with profiles is still a partial information reachability game, and is also EXPTIME-complete. Passive opacity (profiled or not) holds iff certain inclusion properties are satisfied by the modeled system, and active opacity holds if an active attacker has no strategy to win a partial information reachability game. Now, providing an answer to these opacity questions returns a simple boolean information on information leakage. It is interesting to quantify the notions of profiled and active opacity for several reasons. First of all, profiles can be seen as approximations of standard behaviors: deviation w.r.t. a standard profile can be due to errors in the approximation, that should not penalize honest users. Second, leaving a profile should not always be considered as an alarming situation: if profiles are learned behaviors of users, one can expect that from time to time, with very low frequency, the observed behavior of a user differs from what was expected. An alarm should not be raised as soon as an unexpected event occurs. Hence, considering that users shall behave exactly as depicted in their profile is a too strict requirement. A sensible usage of profiles is rather to impose that users stay close to their prescribed profile. The first step to extend profiled and active opacity to a quantitative setting is hence to define what "close" means. Definition 9: Let u, v be two words of Σ * . An edit operation applied to word u consists in inserting a letter a ∈ Σ in u at some position i, deleting a letter a from u at position i, or substituting a letter a for another letter b in u at position i. Let OP s(Σ) denote the set of edit operations on Σ, and ω(.) be a cost function assigning a weight to each operation in OP s(Σ). The edit distance d(u, v) between u and v is the minimal sum of costs of operations needed to transform u in v. 
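For instance, the classical dynamic program below computes this edit distance for arbitrary positive operation costs ω; with unit costs it yields the Levenshtein distance recalled next, and restricting it to substitutions over words of equal length yields the Hamming distance. The function is only an illustrative sketch.

def edit_distance(u, v, w_ins=1, w_del=1, w_sub=1):
    """Minimal total cost of insertions, deletions and substitutions turning u into v."""
    n, m = len(u), len(v)
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dist[i][0] = dist[i - 1][0] + w_del
    for j in range(1, m + 1):
        dist[0][j] = dist[0][j - 1] + w_ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if u[i - 1] == v[j - 1] else w_sub
            dist[i][j] = min(dist[i - 1][j] + w_del,      # delete u[i-1]
                             dist[i][j - 1] + w_ins,      # insert v[j-1]
                             dist[i - 1][j - 1] + sub)    # substitute u[i-1] by v[j-1] (or keep it)
    return dist[n][m]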
Several edit distances exist; the best known ones are • the Hamming distance ham(u, v), which assumes that OPs(Σ) contains only substitutions, and counts the number of substitutions needed to obtain u from v (u and v are supposed to be of equal length); • the Levenshtein distance lev(u, v), defined as the distance obtained when ω(.) assigns a unit cost to every operation (insertion, substitution, deletion). One can notice that lev(u, v) is equal to lev(v, u), and that max(|u|, |v|) ≥ lev(u, v) ≥ ||u| − |v||. For a particular distance d(.) among words, the distance between a word u ∈ Σ* and a language R ⊆ Σ* is denoted d(u, R) and is defined as d(u, R) = min{d(u, v) | v ∈ R}. We can now quantify opacity. An expected secure setting is that no secret is leaked when users have behaviors that are within or close enough to their expected profile. In other words, when the observed behavior of agents u_1, . . . , u_k resembles the behavior of their profiles P_1, . . . , P_k, no leakage should occur. Resemblance of u_i's behavior in a run ρ labeled by w can be defined as the property d(w, L(P_i)) ≤ K for some chosen notion of distance d(.) and some threshold K fixed by the system designers. In what follows, we will use the Hamming and Levenshtein distances as proximity measures w.r.t. profiles. However, we believe that this notion of opacity can be extended to many other distances. We are now ready to propose a quantified notion of opacity. Definition 10 (threshold profiled opacity): A system S is opaque w.r.t. profiles P_1, . . . , P_n with tolerance K for a distance d iff ∀i ≠ j, ∀w ∈ L(S_i) ∩ L(S), d(w, L(P_j)) ≤ K ⇒ π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊈ L(S_i). Threshold profiled opacity is again a passive opacity. In some sense, it provides a measure of how much anomaly detection mechanisms comparing users' behaviors with their profiles are able to detect passive leakage. Consider the following situation: the system S is opaque w.r.t. profiles P_1, . . . , P_n with threshold K but not with threshold K + 1. Then it means there exists a leaking run of the system with K + 1 anomalies of some user u_j w.r.t. profile P_j, but no leaking run with at most K anomalies. If anomaly detection mechanisms are set to forbid execution of runs with more than K anomalies, then the system remains opaque. We can also extend active opacity with thresholds. Let us denote by Strat_K^j the set of strategies that forbid actions leaving a profile P_j if the behavior of the concerned user u_j is already at distance K from P_j (the distance can refer to any distance, e.g., Hamming or Levenshtein). Definition 11 (active profiled opacity): A system S is opaque w.r.t. profiles P_1, . . . , P_n with tolerance K iff ∀i ≠ j, there exists no strategy µ_j ∈ Strat_K^j such that it is unavoidable for u_j to reach a correct state estimation X ⊆ F_{S_i} in all runs of Runs(S, µ_j). Informally, definition 11 says that a system is opaque if no attacker u_j of the system has a strategy that leaks a secret S_i at a cost of at most K units. Again, we can propose a game version for this problem, where attacker u_j is not only passive, but also has to play his best actions in order to learn u_i's secret. A player u_j can attack u_i's secret iff it has a strategy µ_j to force a word w ∈ L(S_i) that conforms to µ_j, such that d(w, L(P_j)) ≤ K and π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊆ L(S_i). This can be seen as a partial information game between u_j and the rest of the system, where the exact state of each agent is partially known to the others.
The system wins if it can stay forever in states where u_j's estimation does not allow it to know that the secret automaton S_i is in one of its accepting states. The arena is built in such a way that u_j stops playing differently from its profile as soon as it reaches penalty K. This is again a partial information reachability game, which is decidable on finite arenas [10]. Fortunately, we can show (in lemma 12 below) that the information to add to nodes with respect to the games designed for active opacity (in theorem 6) is finite. Lemma 12: For a given automaton G, one can compute an automaton G^K that recognizes words at distance at most K from L(G), where the distance is either the Hamming or the Levenshtein distance. Proof: Let us first consider the Hamming distance. For an automaton G_R = (Q_R, −→_R, q_0^R, F_R), we can design an automaton G^K_ham = (Q^K, −→_K, q_0^K, F^K) that recognizes words at distance at most K from the reference language L(G_R). We have Q^K = Q_R × {0..K}, F^K = F_R × {0..K}, and q_0^K = (q_0^R, 0). Last, we give the transition relation: we have ((q, i), a, (q', i)) ∈ −→_K iff (q, a, q') ∈ −→_R, and ((q, i), a, (q', i + 1)) ∈ −→_K if (q, a, q') ∉ −→_R, i + 1 ≤ K, and there exists b ≠ a such that (q, b, q') ∈ −→_R. This way, G^K_ham recognizes sequences of letters that end on a state (q_f, i) such that q_f is an accepting state of G_R, and i ≤ K. One can easily show that for any accepting path in G^K_ham ending on a state (q_f, i) and recognizing a word w, there exists a path in G_R of identical length recognizing a word w' that is at Hamming distance at most K from w. Similarly, let us consider any accepting path ρ = q_0^R −a_1→_R q_1 . . . −a_n→_R q_f of G_R. Then, every path of the form ρ_k = (q_0^R, 0) . . . −a_{i_1}→_K (q_{i_1}, 1) . . . (q_{i_k−1}, k − 1) −a_{i_k}→_K (q_{i_k}, k) . . . −a_n→_K (q_f, i), such that i ≤ K and such that, for every counting step (q_{i_j−1}, j − 1) −a_{i_j}→_K (q_{i_j}, j), the letter a_{i_j} does not label a transition from q_{i_j−1} to q_{i_j} in G_R, is a path that recognizes a word at distance i from a word in R and is also a path of G^K_ham. One can show by induction on the length of paths that the set of all paths recognizing words at distance at most k can be obtained by random insertion of at most k such letter changes in each path of G_R. The size of G^K_ham is exactly |G_R| × K. Let us now consider the Levenshtein distance. Similarly to the Hamming distance, we can compute an automaton G^K_Lev that recognizes words at distance at most K from L(G). Namely, G^K_Lev = (Q_lev, −→_lev, q_{0,lev}, F_lev) where Q_lev = Q × {0..K}, q_{0,lev} = (q_0, 0), and F_lev = F × {0..K}. Last, the transition relation is defined as follows: ((q, i), a, (q', i)) ∈ −→_lev if (q, a, q') ∈ −→; ((q, i), a, (q, i + 1)) ∈ −→_lev if there is no q' such that (q, a, q') ∈ −→ (this transition simulates the insertion of letter a in a word); ((q, i), a, (q', i + 1)) ∈ −→_lev if there exists (q, b, q') ∈ −→ with b ≠ a (this transition simulates the substitution of a character); and ((q, i), ε, (q', i + 1)) ∈ −→_lev if there exists (q, a, q') ∈ −→ (this last move simulates the deletion of a character from a word in L(G)). One can notice that this automaton contains ε-transitions, but after ε-closure, one obtains an automaton without ε-transitions that recognizes all words at distance at most K from L(G).
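The Hamming part of this construction is easily implemented; the sketch below builds the transition relation of G^K_ham from an explicit automaton G_R. The representation of automata as Python sets and dictionaries is an assumption made for illustration only.

def hamming_k_automaton(states, delta, q0, finals, alphabet, K):
    """States of G^K_ham are pairs (q, i): q a state of G_R, i the number of substitutions so far.
    delta: set of triples (q, a, q2) of G_R; alphabet: set of letters.
    Returns the transition set, initial state and accepting states of G^K_ham."""
    delta_k = set()
    for (q, a, q2) in delta:
        for i in range(K + 1):
            delta_k.add(((q, i), a, (q2, i)))          # letter matched, no new difference
    for q in states:
        for q2 in states:
            allowed = {a for a in alphabet if (q, a, q2) in delta}
            if not allowed:
                continue
            for a in alphabet - allowed:               # a substituted for some allowed b != a
                for i in range(K):                     # counting step, only while i + 1 <= K
                    delta_k.add(((q, i), a, (q2, i + 1)))
    initial = (q0, 0)
    accepting = {(q, i) for q in finals for i in range(K + 1)}
    return delta_k, initial, accepting

The number of states is |Q_R| × (K + 1), in line with the O(K.|G|) bound used in the sequel.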
The proof of correctness of the construction follows the same lines as for the Hamming distance, with the particularity that one can randomly insert ε-transitions in paths, by playing letters that are not accepted from a state, leaving the system in the same state, and simply increasing the number of differences. Notice that if a word w is recognized by G^K_Lev with a path ending in a state (q, i) ∈ F_lev, this does not mean that the Levenshtein distance of w from L(G) is i, as w can be recognized by another path ending in a state (q', j) ∈ F_lev with j < i. One can notice that the automata built in the proof of lemma 12 are of size in O(K.|G|), even after ε-closure. Figure 1 represents an automaton G that recognizes the prefix closure of a.a*.b.(a + c)*, and the automaton G^3_Ham. Theorem 13: Deciding threshold opacity for the Hamming and Levenshtein distances is PSPACE-complete. Proof: First of all, one can remark that, for a distance d(.), a system S is not opaque if there exists a pair of users u_i, u_j and a word w in L(S) ∩ L(S_i) such that d(w, L(P_j)) ≤ K and π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊆ L(S_i). As already explained in the proof of theorem 3, w belongs to L(S_i) iff the state q_w reached by S_i after reading w belongs to F_{S_i}. Still referring to the proof of Theorem 3, one can maintain online, when reading letters of w, the set reach_j(w) of possible configurations and states of S_i that are reached by a run whose observation is the same as π_{Σ_o^j}(w). One can also notice that lev(w, L(P_j)) ≤ K iff w is recognized by P^K_{j,Lev}, the automaton that accepts words at Levenshtein distance at most K from a word in P_j. Again, checking online whether w is recognized by P^K_{j,Lev} consists in maintaining the set of states that can be reached by P^K_{j,Lev} when reading w. We denote by reach^K_{j,Lev}(w) this set of states. When no letter is read yet, reach^K_{j,Lev}(ε) = {q_0^K}, and if lev(w, L(P_j)) > K, we have reach^K_{j,Lev}(w) = ∅, meaning that the sequence of actions played by user u_j has left the profile. We can similarly maintain a set of states reach^K_{j,Ham}(w) for the Hamming distance. In what follows, we will simply use reach^K_j(w) to denote a state estimation using the Levenshtein or the Hamming distance. Hence, non-opacity can be rephrased as the existence of a run labeled by a word w such that reach_j(w) ⊆ F_{S_i} and reach^K_j(w) ≠ ∅. The contents of reach_j(w) and reach^K_j(w) after reading a word w can be recalled with a vector of h = |S| + |P^K_j| bits. Following the same arguments as in Theorem 3, it is also useless to consider runs of size greater than 2^h. One can hence non-deterministically explore the whole set of states reached by reach_j(w) and reach^K_j(w) during any run of S by remembering h bits and a counter whose value is smaller than or equal to 2^h, and which can hence be encoded with at most h bits. So, finding a witness for non-opacity is in NPSPACE, and by Savitch's theorem and closure of PSPACE by complementation, opacity with a threshold K is in PSPACE. For the hardness part, it suffices to remark that profiled opacity is exactly threshold profiled opacity with K = 0. Theorem 14: Deciding active profiled opacity for the Hamming and Levenshtein distances is EXPTIME-complete. Proof:[sketch] Let us first consider the Hamming distance. One can build an arena for a pair of agents u_i, u_j as for the proof of theorem 6.
This arena is made of nodes of the form (b, C, s, spjk, ES, d) that contain: a bit b indicating if it is u j turn to play and choose the next move, C the current configuration of S , s the current state of S i , the estimation of ES of possible pairs (C, s) of current configuration and current state of the secret by player u j , and spjk a set of states of the automaton P K j,ham that recognizes words that are at Hamming distance at most K from P j . In addition to this information, a node contains the distance d of currently played sequence w.r.t. profile P j . This distance can be easily computed: if all states of P K j,ham memorized in spjk are pairs of state and distance, i.e., spkj = {(q 1 , i 1 ), (q 2 , i 2 ), . . . , (q k , i k )} then d = min{i 1 , . . . , i k }. User u j (the attacker) has partial knowledge of the current state of the system (i.e. a configuration of S and of the state of S i ), perfect knowledge of d. User j wins if it can reach a node in which his estimation of the current state of secret S i is contained in F S i (a non-ambiguous and secret node), without exceeding threshold K. The rest of the system wins if it can prevent player u j to reach a non-ambiguous and secret node of the arena. We distinguish a particular node ⊥ reached as soon as the distance w.r.t. profile P j is greater than K. We consider this node as ambiguous, and every action from it gets back to ⊥. Hence, after reaching ⊥, player u j has no chance to learn S i anymore. The moves from a node to another are the same as in the proof for theorem 6, with additional moves from any node of the form n = (1, q, s, spjk, ES, d) to ⊥ using action a is the cost of using a from n exceeds K. We add an equivalence relation ∼, such that n = (b, q, s, spjk, ES, d) ∼ n = (b , q , s , spjk , ES , d ) iff b = b , spjk = spjk , d = d , and ES = ES . Obviously, u j has a strategy to violate u i 's secret without exceeding distance K w.r.t. its profile P j iff there is a strategy to reach W in = {(b, q, s, spjk, ES, d) | ES ⊆ S F i } for player u j with partial information that does not differentiate states in the equivalence classes of ∼. This is a partial information reachability game over an arena of size in O(2.|Conf (S)|.|S i |.2 |Conf (S)|.|Si|.K.|Pj | ), that is exponential in the size of S and of the secret S i and profile P j . This setting is a partial information reachability game over an arena of exponential size. As in the Boolean setting, the nodes of the arena already contain a representation of the beliefs that are usually computed to solve such games, and hence transforming this partial information reachability game into a perfect information game does not yield an exponential blowup. Hence, solving this reachability game is in EXPTIME. The hardness part is straightforward: the emptiness problem for alternating automaton used for the proof of theorem 6 can be recast in a profiled and quantified setting by setting each profile P i to an automaton that recognizes (Σ Γ i ) * (i.e., users have the right to do anything they want as far as they always remain at distance 0 from their profile). V. DISCOUNTING ANOMALIES Threshold opacity is a first step to improve the standard Boolean setting. However, this form of opacity supposes that anomaly detection mechanisms memorize all suspicious moves of users and never revises their opinion that a move was unusual. This approach can be too restrictive. In what follows, we propose several solutions to discount anomalies. 
We first start by counting the number of substitutions in a bounded suffix with respect to the profile of an attacker. A suspicion score is computed depending on the number of differences within the suffix. This suspicion score increases if the number of errors in the considered suffix is above a maximal threshold, and it is decreased as soon as this number of differences falls below a minimal threshold. As in former sections, this allows for the definition of passive and active notions of opacity, that are respectively PSPACE-complete and EXPTIME-complete. We then consider the mean number of discrepancies w.r.t. the profile as a discounted Hamming distance. A. A Regular discounted suspicion measure Let u ∈ Σ K .Σ * and let v ∈ Σ * . We denote by d K (u, v) the distance between the last K letters of word u and any suffix of v, i.e. d K (u, v) = min{d(u [|u|-K,|u|] , v ) | v is a suffix of v}. Given a regular language R we define d K (u, R) = min{d K (u, v) | v ∈ R} Lemma 15: Let R be a regular language. For a fixed K ∈ N, and for every k ∈ [0..K], one can compute an automaton C k that recognizes words which suffixes of length K are at Hamming distance k from a suffix of a word of R. We now define a cost model, that penalizes users that get too far from their profile, and decreases this penalty when getting back closer to a normal behavior. For a profile P j and fixed values α, β ≤ K we define a suspicion function Ω j for words in Σ * inductively: Ω j (w) = 0 if |w| ≤ K Ω j (a.w.b) = Ω j (a.w) + 1 if d K (w.b, P j ) ≥ β max(Ω j (a.w) -1, 0) if d K (w.b, P j ) ≤ α As an example, let us take as profile P j the automaton G of One can easily define a notion of passive opacity with respect to a suspicion threshold T . Again, verifying this property supposes finding a witness run of the system that leaks information without exceeding suspicion threshold, which can be done in PSPACE (assuming that T is smaller than 2 |Conf | ). As for profiled opacity, we can define Strat T the set of strategies of an user that never exceed suspicion level T . This immediately gives us the following definitions and results. Definition 16: Let K ∈ N be a suffix size, α, β ≤ K and T ∈ N be a suspicion threshold. S is opaque with suspicion threshold T iff ∀i = j, ∀w ∈ L(S i ) ∩ L(S), Ω j (w) < T implies π -1 Σ j o (π Σ j o (w)) ∩ L(S) L(S i ) . Theorem 17: Opacity with suspicion threshold for the Hamming distance is PSPACE-complete. Definition 18: Let K ∈ N be a suffix size, α, β ≤ K and T ∈ N. S is actively opaque with suspicion threshold T iff ∀i = j there exists no strategy µ j ∈ Start T such that it is unavoidable for u j to reach a correct state estimation X ⊆ F S i in all runs of Runs(S, µ j ). Theorem 19: Active opacity with suspicion threshold for the Hamming distance is EXPTIME-complete. Proof: We build an arena that contains nodes of the form n = (b, C, ES, EC 0 , . . . EC k , sus). C is the actual current configuration of S Γ , ES is the set of pairs (C, s) of configuration and secret sates in which S Γ could be according to the actions observed by u j and according to the belief refinements actions performed by u j . Sets EC 1 . . . EC k remembers sets of states of cost automata C 0 , . . . C K . Each EC i memorizes the states in which C i could be after reading the current word. If EC i contains a final state, then the K last letters of the sequence of actions executed so far contain exactly i differences. Note that only one of these sets can contain an accepting state. 
Suspicion sus is a suspicion score between 0 and T . When reading a new letter, denoting by p the number of discrepancies of the K last letters wrt profiles, one can update the suspicion score using the definition of C j above, depending on whether p ∈ [0, α], p ∈ [α, β] or p ∈ [β, K]. The winning condition in this game is the set W in = {(b, C, ES, EC 0 , . . . EC k , sus) | ES ⊆ Conf (S) × F S i }. We partition the set of nodes into V 0 = {(b, C, ES, EC 0 , . . . EC k , sus) | b = 0} and V 1 = {(b, C, ES, EC 0 , . . . EC k , sus) | b = 1} . We de-fine moves from (b, C, ES, EC 0 , . . . EC k , sus) to (1b, C, ES, EC 0 , . . . EC k , sus) symbolizing the fact that it is user u j 's turn to perform an action. There is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (b , C , ES, EC 0 , . . . EC k , sus ) if there is a transition (C, a, C ) in S Γ performed by an user u i = u j , and a is not observable by u j . There is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (b , C , ES , EC 0 , . . . EC k , sus) if there is a transition (C, a, C ) in S Γ performed by an user u i = u j and a is observable by u j . We have ES = ∆ Σ j o (ES, S Γ , a). Suspicion and discrepancies observation (sets EC i ) remain unchanged as this move does not represent an action played by u j . There is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (1 -b, C , ES , EC 0 , . . . EC k , sus) if b = 1 and there is a transition (q, a, q ) in S Γ performed by user u j from the current configuration. Set ES is updated as before ES = ∆ Σ j o (ES, S Γ , a) and sets EC i are updated according to transition relation δ suf i of automaton C i , i.e. EC i = δ suf i (E i , a). Similarly, sus is the new suspicion value obtained after reading a. Last, there is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (b, C, ES , EC 0 , . . . EC k , sus), if there is a sequence of moves (C, a, C(a γ )).(C(a γ ), a γ(q) , C) in S Γ , ES = ES /a γ(q) , and EC i 's and sus are computed as in the former case. As for the proofs of theorems 6 and 14, opacity can be brought back to a reachability game of partial information, and no exponential blowup occurs to solve it. For the hardness, there is a reduction from active profiled opacity. Indeed, active profiled opacity can be expressed as a suspicion threshold opacity, by setting α = β = K = 0, to disallow attackers to leave their profile. B. Discounted Opacity : an open problem A frequent interpretation of discounting is that weights or penalties attached to a decision should decrease progressively over time, or according to the length of runs. This is captured by averaging contribution of individual moves. Definition 20: The discounted Hamming distance between a word u and a language R is the value d(u, R) = ham(u,R) |u| This distance measures the average number of substitutions in a word u with respect to the closest word in R. The next quantitative definition considers a system as opaque if an active attacker can not obtain a secret while maintaining a mean number of differences w.r.t. its expected behavior below a certain threshold. Let λ ∈ Q be a positive rational value. We denote by Strat λ (R) the set of strategies that does not allow an action a after a run ρ labeled by a sequence of actions w if d(w.a, R) > λ. Definition 21 (Discounted active Opacity): A system S is opaque wrt profiles P 1 , . . . 
P_n with discounted tolerance λ iff ∀i ≠ j, there exists no strategy µ_j ∈ Strat_λ(P_j) of agent u_j such that it is unavoidable for u_j to reach a correct state estimation X ⊆ F_{S_i} in all runs of Runs(S, µ_j). A system is hence not opaque in a discounted active setting iff one can find a strategy for u_j to reach a state estimation that reveals the secret S_i while maintaining a discounted distance w.r.t. P_j smaller than λ. At first sight, this setting resembles discounted games with partial information, already considered in [14]. It was shown that finding optimal strategies for such mean payoff games is in NP ∩ co-NP. The general setting for mean payoff games is that average costs are values of nodes in an arena, i.e. the minimal average reward along infinite runs that one can achieve with a strategy starting from that node. As a consequence, values of nodes are mainly values on connected components of an arena, and costs of moves leading from one component to another have no impact. In our setting, the game is not a value minimization over infinite runs, but rather a co-reachability game, in which at any moment in a run, one shall not exceed a mean number of unexpected moves. For a fixed pair of users u_i, u_j, we can design an arena with nodes of the usual form n = (b, C, ES, l, su) in which b indicates whether it is u_j's turn to play, C is the current configuration of the system, ES is the estimation of the current configuration and of the current state of secret S_i reached, l is the number of moves played so far, and su is the number of moves that differ from what was expected in P_j. As before, the winning states for u_j are the states where all couples in the state estimation refer to an accepting state of S_i. In this arena, player u_j loses if it can never reach a winning node, or if it plays an illegal move from a node n = (b, C, ES, l, su), i.e., a move such that (su + 1)/(l + 1) > λ. One can immediately notice that, defined this way, our arena is not finite anymore. Consider the arena used in theorem 6, i.e. composed of nodes of the form n = (b, C, ES) that only build estimations of the attacker. Obviously, when ignoring the mean number of discrepancies, one can decide whether the winning set of nodes is reachable from the initial node under some strategy in polynomial time (w.r.t. the size of the arena). The decision algorithm builds an attractor for the winning set (see for instance [9] for details), but can also be used to find short paths under an adequate strategy to reach Win (without considering the mean number of discrepancies). If one of these paths keeps the mean number of discrepancies lower than or equal to λ at each step, then obviously, this is a witness for non-opacity. However, if no such path exists, there might still be a way to play longer runs that decrease the mean number of discrepancies before moving to a position that requires fewer steps to reach the winning set. We can show an additional sufficient condition: Let ρ = n_0.n_1 . . . n_w be a path of the arena of theorem 6 (which records neither length nor mean number of discrepancies) from n_0 to a winning node n_w. Let d_i denote the number of discrepancies with respect to profile P_j at step i. Let n_i be a node of ρ such that d_i/i ≤ λ and d_{i+1}/(i+1) > λ. We say that u_j can enforce a decreasing loop β = n_j.n_{j+1} . . .
n j at node n j if β is a cycle that u j can enforce with an appropriate strategy, and if the mean number of discrepancies is smaller in ρ β = n 0 . . . n j .β than in n 0 . . . n j , and the mean cost of any prefix of β is smaller that λ. A consequence is that the mean cost M β of cycle β is smaller than λ. We then have a sufficient condition: Proposition 22: Let ρ be a winning path in an arena built to check active opacity for users u i , u j such that di i > λ for some i ≤ |ρ|. If there exists a node n b in ρ such that d k k ≤ λ for every k ≤ b and u j can enforce a decreasing loop at n b , then u j has a strategy to learn S i without exceeding mean number of discrepancies λ. Similarly, if B is large enough, playing any prefix of n b+1 . . . n w to reach the winning set does not increase enough the mean number of discrepancies to exceed λ. A lower bound for B such that λ is never exceeded in n 0 . . . n b .β B .n b+1 . . . n w can be easily computed. Hence, if one can find a path in a simple arena withouts mean discrepancy counts, and a decreasing loop in this path, then u j has a strategy to learn S i without exceeding threshold λ. VI. CONCLUSION We have shown several ways to quantify opacity with passive and active attackers. In all cases, checking passive opacity can be brought back to a language inclusion question, and is hence PSPACE-complete. In active settings, opacity violation is brought back to existence of strategies in reachability games over arenas which nodes represent beliefs of agents, and is EXPTIME-complete. Suspicion can be discounted or not. Non-discounted suspicions simply counts the number of anomalies w.r.t. a profile, and raises an alarm when a maximal number K of anomalies is exceeded. We have shown that when anomalies are substitutions, deletions and insertions of actions, words with less than K anomalies w.r.t. the considered profile (words at Hamming or Levenshtein distance ≤ K) are recognized by automata of linear size. This allows to define active and passive profiled opacity, with the same PSPACE/EXPTIME-complete complexities. A crux in the proofs is that words at distance lower than K of a profile are recognized by automata. A natural extension of this work is to see how regular characterization generalizes to other distances. Discounting the number of anomalies is a key issue to avoid constantly raising false alarms. t is reasonable to consider that the contribution to suspicion raised by each anomaly should decrease over time. The first solution proposed in this paper computes a suspicion score depending on the number of discrepancies found during the last actions of an agent. When differences are only substitutions, one can use finite automata to maintain online the number of differences. This allows to enhance the arenas used in the active profiled setting without changing the complexity class of the problem (checking regular discounted suspicion remains EXPTIME-complete). Again, we would like to see if other distances (eg the Levenstein distance) and suspicion scores can be regular, which would allow for the defiition of new opacity measures. Discounted suspicion weights discrepancies between the expected and actual behavior of an agent according to run length. This suspicion measure can be seen as a quantitative game, where the objective is to reach a state leaking information without exceeding an average distance of λ ∈ Q. In our setting, the mean payoff has to be compared to a threshold at every step. 
This constraint can be recast as a reachability property for timed automata with one stopwatch and linear diagonal constraints on clock values. We do not know yet if this question is decidable but we provide a sufficient condition for discounted opacity violation. In the models we proposed, discounting is performed according to runs length. However, it seems natural to consider discrepancies that have occurred during the last ∆ seconds, rather than This requires in particular considering timed systems and they timed runs. It is not sure that adding timing to our setting preserves decidability, as opacity definitions rely a lot on languages inclusion, which are usually undecidable for timed automata [START_REF] Alur | A theory of timed automata[END_REF]. If time is only used to measure durations elapsed between actions of an attacker, then we might be able to recast the quantitative opacity questions in a decidable timed setting, using decidability results for timed automata with one clock [START_REF] Ouaknine | On the language inclusion problem for timed automata: Closing a decidability gap[END_REF] or event-clock timed automata. APPENDIX PROOF OF THEOREM 3 Proof: Let us first prove that U -opacity is in PSPACE. A system is not opaque if one can find a pair of users u i , u j , and a run w of S such that w ∈ L(S i ) and π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ). One can non-deterministically choose a pair of users u i , u j in space logarithmic in n, and check that i = j in logarithmic space. To decide whether a run of S belongs to S i , it is sufficient to know the set of states reached by S i after recognizing w. A word w belongs to L(S i ) is the state q w reached by S i after reading w belongs to F S i . Now, observe that an user u j does not have access to w, but can only observe π Σ j o (w), and may hence believe that the run actually played is any run with identical observation, i.e. any run of π -1 Σ j o (π Σ j o (w)) ∩ L(S). Let ρ be a run of S, one can build online the set of states reach j (w) that are reached by a run which observation is the same as π Σ j o (w). We have reach j ( ) = {q ∈ Q S i | ∃w, q S 0,i w -→ q ∧ π Σ j o (w) = } and reach j (w.a) = {q ∈ Q S i | ∃q ∈ reach j (w), ∃w , q w -→ q ∧ π Σ j o (w ) = a}. Obviously, a word w witnesses a secret leakage from S i to u j if reach j (w) ⊆ F S i . To play a run of S, it is hence sufficient to remember a configuration of S and a subset of states of S i . Let q ρ denote the pair (q, X) reached after playing run ρ. Now we can show that witness runs with at most K 1 = |Conf |.2 |Si| letters observable by u j suffice. Let us assume that there exists a witness ρ of size ≥ K 1 . Then, ρ can be partitioned into ρ = ρ 1 .ρ 2 .ρ 3 such that q ρ1 = q ρ1.ρ2 . Hence, ρ 1 .ρ 3 is also a run that witness a leakage of secret S i to u j , but of smaller size. Hence one can find a witness of secret leakage by a nondeterministic exploration of size at most |Conf |.2 |Si| . To find such run, one only needs to remember a configuration of S (which can be done with log(|S|) bits, all states of reach j (ρ) for the current run ρ followed in S, which can be done with |S i | bits of information, and an integer of size at most K 1 , which requires log |S|.|S i | bits. Finding a witness can hence be done in NPSPACE, and by Savitch's lemma it is in PSPACE. As PSPACE is closed by complement, deciding opacity of a system is in PSPACE. Let us now consider the hardness part. 
We will reduce the non-universality of any regular language to an opacity problem. As universality is in PSPACE, non-universality is also in PSPACE. The language of an automaton B defined over an alphabet Σ is not universal iff L(B) = Σ * , or equivalently if Σ * L(B). For any automaton B, one can design a system S B with two users u 1 , u 2 such that S 1 = B, L(S 2 ) = a.Σ * for some letter a, A accepts all actions, i.e. is such that L (A) = Σ * , Σ 2 o = Σ 1 o = ∅. Clearly, for every run of S, u 1 observes , and hence leakage can not occur from u 2 to u 1 (one cannot know whether a letter and in particular a was played). So the considered system is opaque iff ∀w ∈ L(S 1 ) ∩ L(S), π -1 Σ 2 o (π Σ 2 o (w)) L(S 1 ). However, as Σ 2 o = ∅, for every w, π -1 Σ 2 o (π Σ 2 o (w)) = Σ * . That is, the system is opaque iff Σ * L(B). PROOF OF THEOREM 6 Proof: An active attacker u j can claim that the system is executing a run ρ that is secret for u i iff it can claim with certainty that ρ is recognized by S i . This can be achieved by maintaining an estimation of the system's current configuration, together with an estimation of S i 's possible states. We build an arena with nodes N 0 ∪ N 1 . Each node of the form n = (b, C, s, ES) contains : • a player's name b (0 or 1). Intuitively, 0 nodes are nodes where all agents but u j can play, and 1 nodes are nodes where only agent u j plays. G ⊆ N 0 ∪ N 1 × N 0 ∪ N 1 . • (n, n ) ∈ δ G if n and n differ only w.r.t. their player's name • (n, n ) ∈ δ G if n = (0, C, s, ES) , n = (1, C , s , ES ) and there exists σ ∈ (Σ \ Σ j ) ∩ Σ j o such that C σ =⇒ C , s σ =⇒ S i ) -→ S i . • (n, n ) ∈ δ G n = (1, C, s, ES), n = (1, C, s, ES ) if there exists γ ∈ Σ Γ j such that ES is the refinement of ES by a γ (state(C)). We assume that checking the status of a proposition does not affect the secrets of other users. We says that a node n = (b, C, s, ES) is not secret if s ∈ F S i , and say that n is secret otherwise. We say that a node is ambiguous if there exists (C p , s p ) and (C m , s m ) in ES such that s p is secret and s m is not. If the restriction of ES to it second components is contained in F S i , we says that n leaks secret S i . We equip the arena with an equivalence relation ∼⊆ N 0 × N 0 ∪ N 1 × N 1 , such that n = (b, C, s, ES) ∼ n = (b , C , s , ES ) iff b = b = 1 and ES = ES . Intuitively, n ≡ n if and only if they are nodes of agent u j , and u j cannot distinguish n from n using the knowledge it has on executions leading to n and to n . Clearly, secret S i is not opaque to agent u j in S iff there exists a strategy to make a leaking node accessible. This can be encoded as a partial information reachability game G = (N 0 N 1 , δ G , ≡, W in), where W in is the set of all leaking nodes. In these games, the strategy must be the same for every node in the same class of ≡ (i.e. where u j has the same state estimation). Usually, partial information games are solved at he cost of an exponential blowup, but we can show that in our case, complexity is better. First, let us compute the maximal size of the arena. A node is of the form n = (b, C, s, ES), hence the size of the arena |G| is in O(2.|Conf |.| § i |.2 |Conf |.|Si| ) (and it can be built in time O(|Conf |.|G|). Partial information reachability games are known to be EXPTIME-complete [START_REF] Reif | Universal games of incomplete information[END_REF]. 
Note here that only one player is blind, but this does not change the overall complexity, as recalled by [START_REF] Chatterjee | The complexity of partial-observation parity games[END_REF]. However, solving games of partial information consists in computing a "belief" arena G B that explicitly represent players beliefs (a partial information on a state is transformed into a full knowledge of a belief), and then solve the complete information game on arena G B . This usually yields an exponential blowup. In our case, this blowup is not needed, and the belief that would be computed to solve a partial information game simply duplicates the state estimation that already appears in the partial information arena. Hence, deciding opacity with active observation strategies can be done with |U | 2 opacity tests (one for each pair of users) of exponential complexity, in only in EXPTIME. Let us now prove the hardness of opacity with active attackers. We reduce the problem of emptiness of alternating automata to an opacity question. An alternating automaton is a tuple A alt = (Q, Σ, δ, s 0 , F ) where Q contains two distinct subsets of states Q ∀ , Q ∃ . Q ∀ is a set of universal states, Q ∃ is a set of existential states, Σ is an alphabet, δ ⊆ (Q ∀ ∪Q ∃ )×Σ×(Q ∀ ∪Q ∃ ) is a transition relation, s is the initial state and F is a set of accepting states. A run of A alt over a word w ∈ Σ * is an acyclic graph G A alt ,w = (N, -→) where nodes in N are elements of Q × {1 . . . |w|}. Edges in the graph connect nodes from a level i to a level i+1. The root of the graph is (s, 1). Every node of the from (q, i) such that q ∈ Q ∃ has a single successor (q , i+1) such that q ∈ δ(q, w i ) where w i is the i th letter of w. For every node of the from (q, i) such that q ∈ Q ∀ , and for every q such that q ∈ δ(q, w i ), ((q, i), (q , i + 1)) is an edge. A run is complete is all its node with index in 1..|w| -1 have a successor. It is accepting if all path of the graph end in a node in F × {|w|}. Notice that due to non-deterministic choice of a successor for existential states, there can be several runs of A alt for a word w. The emptiness problem asks whether there exists a word w ∈ Σ * that has an accepting run. We will consider, without loss of generality that alternating automata are complete, i.e. all letters are accepted from any state. If there is no transition of the form (q, a, q ) from a sate q, one can nevertheless create a transition to an non-accepting absorbing state while preserving the language recognized by the alternating automaton. Let us now show that the emptiness problem for alternating automata can be recast in an active opacity question. We will design three automata A, A 1 , A 2 . The automata A 1 and A 2 are agents. Agent 1 performs actions from universal sates and agent 2 chooses the next letter to recognize and performs actions from existential states. The automaton A serves as a communication medium between agents, indicates to A 2 the next letter to recognize, and synchronizes agents 1 and 2 when switching the current state of the alternating automaton from an existential state to an universal state or conversely. We define A = (Q s , -→ s , Σ s ) with Σ s = {(end, 2 A); (end, A 1)} ∪ Σ × {2 A, A 1} × (Q ∃ ∪ U ) × {1 A, A 2, 2 A, A 2}. To help readers, the general shape of automaton A is given in Figure 3. States of A are of the form U , (U, σ), W , dU , dq i , wq i for every state in Q, and Eq i for every existential state q i ∈ Q ∃ . 
The initial state of A is state U if s 0 is an universal state, or s 0 if s 0 is existential. State U has |Σ| outgoing transitions of the form (U, < σ, 2 A >, (U, σ), indicating that the next letter to recognize is σ. It also has a transition of the form (U, < end, 2 A >, end 1 ) indicating that A 2 has decided to test whether A 1 is in a secret state (i.e. simulates an accepting state of A alt ). There is a single transition (end 1 , < end, A 2 >, end 2 ) from state end 1 , and a single transition (end 2 , < Ackend, A 1 >, end 3 ) indicating to A 2 that A 1 has acknowledged end of word recognition. There is a transition ((U, σ), < σ, A → 1 >, (W, σ)) for any state (U, σ), indicating to A 1 that the next letter to recognize from its current universal state is σ. In state W , A is waiting for an universal move from A 1 . Then from W , A can receive the information that A 1 has moved to an universal state, which is symbolized by a pair of transitions (W, < σ, U, 1 A >, dU )) and (dU, < again, A 2 >, U ). There is a transition (W, < σ, q i , 1 → A >, dq i ) for every existential state q i ∈ Q ∃ , followed by a transition (dq i , < σ, q i , A 2 >, Eq i ), indicating to A 2 that the system has moved to recognition of a letter from an existential state q i . There is a transition (Eq i , < σ, 2 A >, (Eq i , σ)) from every state Eq i with q i ∈ Q ∃ and every σ ∈ Σ to indicate that the next letter to recognize is σ. Then, there is a transition ((Eq i , σ), < σ, q j , 2 A >, (W q j , σ)) for every existential move (q i , σ, q j ) ∈ δ. From every state (W q j , σ), there is a transition of the form ((W q j , σ), < σ, q j , A → 1 >, (dq j , σ)) to inform A 1 of A 2 's move. Then, from (Dq j , σ) if q j ∈ Q ∃ , there is a transition of the form ((Dq j , σ), < again, A 1 >, Eq j ) and if q j ∈ Q ∀ , a transition of the form ((dq j , σ), < again, A 1 >, U ), indicating to A 1 that the simulation of the current transition recognizing a letter is complete, and from which state the rest of the simulation will resume. Let us now detail the construction of A 2 . A description of all its transition is given in Figure 4. This automaton has one universal state U , a state W , states of the form (U, σ), a pair of states Eq i and W q i and a state (Eq i , σ) for every σ ∈ Σ and every q i ∈ Q ∃ . Last, A 1 has two states End 1 and End 2 . There is a transition (U, < σ, 2 A >, (U, σ)) from U for every σ ∈ Σ, symbolizing the choice of letter σ as the next letter to recognize when the system simulates an universal state. Note that A 2 needs not know which universal state is currently simulated. Then, there is also a transition ((U, σ), again, U ) returning to U symbolizing the end of a transition of the alternating w q j again σ, A 1, q j d qi d q i σ, 1 A, q i σ, 1 A, q i E qi E qi , σ σ, A 2, q i σ, 2 A W qj , σ σ, q j , 2 A (q j ∈ Q ∃ ) σ, q j , 2 A (q j ∈ Q ∀ ) d qj σ, q j , A 1 E qj End, 2 A End, 2 A again Fig. 3: Automaton A in the proof of theorem 6. automata that returns to an universal state (hence owned by A 2 ). From every state (U, σ) there is a transition ((U, σ), again, U ) and a transition ((U, σ), < σ, q i , A → 2 >, Eq i ) for every existential state q i that has an universal predecessor q with (q, σ, q i ) ∈ δ. From a state Eq i and for every σ ∈ Σ, there is a transition (Eq i , < σ, 2 A >, (Eq i , σ)) symbolizing the choice to recognize σ as the next letter. 
Then, from every state (Eq i , σ) for every transition of the form (q i , σ, q j ) ∈ δ where q j is existential, there is a transition ((Eq i , σ), < σ, q j , 2 → A >, W q j ). For every transition of the form (q i , σ, q j ) ∈ δ where q j is universal, there is a transition ((Eq i , σ), < σ, q j , 2 → A >, W ). Last, transitions ((W q j , σ), again, Eq j ) and (W, again, U ) complete simulation of recognition of the current letter. Last, A 2 has a transition (U, < end, 2 A >, End 1 ), a transition (Eq i , < end, 2 A >, End 1 ) for every existential state q i ∈ Q ∃ and a transition (end 1 , ackend, End 2 ), symbolizing the decision to end recognition of a word. Let us detail the construction of A 1 . The general shape of this automaton is described in Figure 5. This automaton has two states of the form U q i , (U q i , σ) per universal state and for each σ ∈ Σ. Similarly A 1 has a state Eq i , (Eq i , σ) per existential state and for each σ ∈ Σ. From state U q i there is a transition (U q i , < σ, A → 1 >, (U q i , σ)) to acknowledge the decision to recognize σ. From state (U q i , σ) there exists two types of transitions. For every universal state q j such that (q i , σ, q j ) ∈ δ, Eq i , σ σ, 2 A σ, 2 A σ, q j , 2 A (q j ∈ Q ∀ ) Eq j End, 2 A End, 2 A W q j again σ, q j , 2 A (q j ∈ Q ∃ ) Fig. 4: Automaton A 2 in the proof of theorem 6, simulating existential moves . there is a transition ((U q i , σ), < σ, U, 1 A >, U q j ), symbolizing a move to universal state q j . For every existential state q j such that (q i , σ, q j ) ∈ δ, there is a transition ((U q i , σ), < σ, q j , 1 A >, Eq j ). Similarly, from a state Eq i , there exists a transition (Eq i , < σ, A 1 >, (Eq i , σ)) indicating to A 1 the letter chosen by A 2 . From state (Eq i , σ), there is a transition ((Eq i , σ), < σ, q j , A → 1 >, Eq j ) for every existential state q j such that (q i , σ, q j ) ∈ δ. There is also a transition ((Eq i , σ), < σ, U, 1 A >, U q j ) for every universal state q j such that (q i , σ, q j ) ∈ δ. Notice that the universal state reached is not detailed when A 1 sends the confirmation of a move to A. The remaining transitions are transitions of the form (Eq i , < End, A 1 >, S) and (U q i , < End, A 1 >, Sec) for every accepting state q i ∈ F . We also create transitions of the form Eq i , < End, A 1 >, Sec and U q i , < End, A 1 >, Sec for states that are not accepting. Reaching Sec indicates the failure to recognize a word chosen by A 1 along a path in which universal moves were played by A 1 and existential moves by A 2 . We define a agent u 1 s secret S 1 as the automaton that recognizes all words that allow A 1 to reach sate Sec. Now, we can prove that if a word w is accepted by A alt then the strategy in which A 2 chooses letter w i at its i t h passage through a letter choice state (U or Eq i ), existential transitions appearing in the accepting run of A alt , and then transition < end, 2 A > at the i + 1 th choice, is a strategy to force U q i U q i , σ U q j Eq j σ, 1 A σ, U, 1 A σ, q j , 1 A Eq i Eq j σ, A 1, q j (q j ∈ Q ∃ ) Eq i U q j σ, A 1, q j (q j ∈ Q ∀ ) Eq i Sec End, A 1 (q i ∈ F ) Eq i Sec End, A 1 (q i ∈ F ) U q i Sec End, A 1 (q i ∈ F ) U q i Sec End, A 1 (q i ∈ F ) Fig. 5: Automaton A 1 in the proof of theorem 6, simulating Universal moves . A 1 to reach the secret state. Conversely, one can associate to every run of A, A 1 , A 2 , a word w that is read, and a path in some run that is used to recognize w. 
If A 2 has a strategy to force A 1 secret leakage, then all path following this strategy lead to a winning configuration. As a consequence, there is a choice of existential moves such that all states simulated along a run of the alternating automaton with these existential moves end in accepting state. Hence, L(A alt ) is empty iff the system composed of A, A 1 , A 2 is opaque. Now, the system built to simulate A alt is of polynomial size in |A alt |, so there is a polynomial size reduction from the emptiness problem for alternating automata to the active opacity question, and active opacity is EXPTIME-complete. PROOF OF LEMMA 15 Proof: One can first recall that for the Hamming and Levenshtein distances, we have d(u, v) = d(u -1 , v -1 ), where u -1 is the mirror of u. Similarly, we have d K (u, R) = d(u -1 [1,K] , R -1 ). Let G R = (Σ, Q, q 0 , δ, F ) be the automaton recognizing language R. We can build an automaton C k that recognizes words of length at least K, which suffixes of length K are at hamming distance at most k of suffixes of length K of words in R. We define C k = (Σ, Q suf k , q suf 0,k , δ suf k , F suf k ). This automaton can be computed as follows : first build G -1 R , the automaton that recognizes mirrors of suffixes of R. This can be easily done by setting as initial states the final states of R, and then reversing the transition relation. Then by adding a K-bounded counter to states of G -1 R , and setting as accepting states states of the form (q, K), we obtain an automaton B -1 that recognizes mirrors of suffixes of R of length K. Then, for every k ∈ [0..K], we can compute B k , the automaton that recognizes mirrors of words of length K that are at distance k from words in B -1 , by adding another counter to states that counts substitutions, and which final states are of the form (q, K, k). Then we can build (by sequential composition of automata for instance) the automaton C k that reads any word in Σ * and then recognizes a word in (B k ) -1 . Fig. 1 : 1 Fig. 1: An automaton G and the automaton G 3 Ham that recognizes words at Hamming distance ≤ 3 of L(G). Figure 1 .Fig. 2 : 12 Fig. 2: Evolution of suspicion wrt profile of Figure 1 when reading word w = a.a.a.c.b.b.a.c.b.a.a. distance d K (w [i.i+5] , P j ) at each letter of w (plain line), and the evolution of the suspicion function (dashed line).One can easily define a notion of passive opacity with respect to a suspicion threshold T . Again, verifying this property supposes finding a witness run of the system that leaks information without exceeding suspicion threshold, which can be done in PSPACE (assuming that T is smaller than 2 |Conf | ). As for profiled opacity, we can define Strat T the set of strategies of an user that never exceed suspicion level T . This immediately gives us the following definitions and results.Definition 16: Let K ∈ N be a suffix size, α, β ≤ K and T ∈ N be a suspicion threshold. S is opaque with suspicion threshold T iff ∀i = j, ∀w ∈ L(S i ) ∩ L(S), Ω j (w) < T implies π -1 Proof: The winning path is of the form ρ = n 0 .n 1 . . . n b .n b+1 . . . n w . Let d b be the number of discrepancies in n 0 .n 1 . . . n b and λ b = d b b . Player u j can choose any integer value B and enforce path ρ B = n 0 .n 1 . . . n b .β B . The mean number of discrepancies in ρ B is equal to d b +B.d β i+B.|β| , i.e. as B increases, this number tends towards M β . 
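The limit claimed at the end of this proof can be made explicit. Writing ℓ_b for the length of the prefix n_0.n_1...n_b (our notation for the denominator term in the ratio above), the mean number of discrepancies along ρ_B is

(d_b + B·d_β) / (ℓ_b + B·|β|)  →  d_β / |β| = M_β   as B → ∞,

so by choosing B large enough, player u_j can bring the mean number of discrepancies arbitrarily close to M_β.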
s, and ES' is the set of pairs (C m , s m ) such that there exists a pair (C p , s p ) in ES, and a sequence ρ of transitions from C p to C m , labeled by a word w such that Π j (w) = σ, and one can move in S i from s p to s m by reading w. Note that this set of sequences need not be finite, but one can find in O(|Conf|) the set of possible pairs that are accessible while reading σ. • (n, n') ∈ δ G if n = (1, C, s, ES), n' = (1, C', s', ES') and there exists σ ∈ Σ j , a transition C −σ→ C' in S, a transition (s, σ, s') ∈ −→ S i , and ES' is the set of pairs of the form (C' m , s' m ) such that there exists (C m , s m ) ∈ ES with (C m , σ, C' m ) ∈ −→ and (s m , σ, s' m ) • the current configuration C of S • the current state s of S i • an estimation ES of the system's configuration and of the secret's current state by u j , ES j = {(C 1 , s 1 ), ..., (C k , s k )} =⇒ C' iff there exists a sequence of transitions of S whose observation by u j is σ, and s' from s to s' in S i . Then we define moves among nodes as a relation δ. We write C =σ⇒ S i s' if there is such a sequence σ. This entails that we assume that queries are faster than the rest of the system, i.e. no event can occur between a query and its answer. Hence we have L(S Γ ) ⊆ L(S) (Σ Γ .{tt.ff})*. We could easily get rid of this hypothesis, by remembering in the states of S Γ which query (if any) was sent by a user, and returning the answer at any moment.
81,439
[ "830540", "418", "959111" ]
[ "491208", "491208", "57241" ]
01758006
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01758006/file/EUCOMES2016_Nayak_Nurahmi_Caro_Wenger_HAL.pdf
Abhilash Nayak email: [email protected] Latifah Nurahmi email: [email protected] Philippe Wenger email: [email protected] Stéphane Caro email: [email protected] Comparison of 3-RPS and 3-SPR Parallel Manipulators based on their Maximum Inscribed Singularity-free Circle Keywords: 3-RPS parallel manipulator, 3-SPR parallel manipulator, operation modes, singularity analysis, maximum inscribed circle radius 1 . Then, the parallel singularities of the 3-SPR and 3-RPS parallel manipulators are analyzed in order to trace their singularity loci in the orientation workspace. An index, named Maximum Inscribed Circle Radius (MICR), is defined to compare the two manipulators under study. It is based on their maximum singularity-free workspace and the ratio between their circum-radius of the movingplatform to that of the base. Introduction Zero torsion parallel mechanisms have proved to be interesting and versatile. In this regard, the three degree of freedom lower mobility 3-RPS parallel manipulator (PM) has many practical applications and has been analyzed by many researchers [START_REF] Schadlbauer | Husty : A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF]. Interchanging the free moving platform and the fixed base in 3-RPS manipulator results in the 3-SPR manipulator as shown in figure 1, retaining three degrees of freedom. The study of 3-SPR is limited in the literature. An optimization algorithm was used in [START_REF] Lukanin | Inverse Kinematics, Forward Kinematics and Working Space Determination of 3-DOF Parallel Manipulator with S-P-R Joint Structure[END_REF] to compute the forward and inverse kinematics of 3-SPR manipulator. After the workspace generation it is proved that the 3-SPR has a bigger working space volume compared to the 3-RPS manipulator. The orthogonality of rotation matrices is exploited in [START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF] to perform the forward and inverse kinematics along with the simulations of 3-SPR mechanism. Control of a hydraulic actuated 3-SPR PM is demonstrated in [START_REF] Mark | Kinematic Modeling of a Hydraulically Actuated 3-SPR-Parallel Manipulator for an Adaptive Shell Structure[END_REF] with an interesting application on adaptive shell structure. This paper focuses on the comparison of kinematics and singularities of the 3-RPS and 3-SPR parallel manipulators and is organized as follows: initially, the de-sign of 3-SPR PM is detailed and the design of the 3-RPS PM is recalled. The second section describes the derivation of the constraint equations of the 3-SPR manipulator based on the algebraic geometry approach [START_REF] Schadlbauer | Husty : A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements if P-joints[END_REF]. The primary decomposition is computed over these constraint equations and it shows that the 3-SPR has identical operation modes as the 3-RPS PM. Moreover, the actuation and constraint singularities are described with singularity loci plots in the orientation workspace. Finally, an index called the singularity-free maximum inscribed circle radius is introduced to compare the maximum singularity free regions of 3-RPS and 3-SPR manipulators from their home position. 
In [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF], maximum tilt angles for any azimuth of the 3-RPS PM are plotted for different ratios of platform to base circumradii. However, these plots correspond to only one operation mode, since the notion of operation modes was not considered in that paper. For this reason, the present paper offers a complete singularity analysis in terms of MICR for both manipulators. These plots are useful for choosing a manipulator design based on its platform to base circumradii ratio and its operation modes.

Manipulator architectures

Fig. 1: 3-SPR parallel manipulator. Fig. 2: 3-RPS parallel manipulator.

Figure 1 shows a general pose of the 3-SPR parallel manipulator with three identical legs, each comprising a spherical, a prismatic and a revolute joint. The triangular base and platform of the manipulator are equilateral. Σ 0 is the fixed coordinate frame attached to the base, with its origin O 0 coinciding with the circumcentre of the triangular base. The centres of the spherical joints, namely A 1 , A 2 and A 3 , bound the triangular base. The x 0 -axis of Σ 0 is taken along O 0 A 1 , which makes the y 0 -axis parallel to A 2 A 3 and the z 0 -axis normal to the triangular base plane. h 1 is the circumradius of the triangular base. The moving platform is bounded by three points B 1 , B 2 and B 3 that lie on the revolute joint axes s 1 , s 2 and s 3 . The moving coordinate frame Σ 1 is attached to the moving platform; its x 1 -axis points from the origin O 1 to B 1 , its y 1 -axis is orthogonal to the line segment B 2 B 3 and its z 1 -axis is normal to the triangular platform. The circumradius of the triangle with vertices B i (i = 1, 2, 3) is defined as h 2 . The prismatic joint of the i-th (i = 1, 2, 3) leg is always perpendicular to the revolute joint axis of the same leg. Hence the orthogonality of A i B i to s i (i = 1, 2, 3), whatever the motion of the platform, is a constraint of the manipulator. The distance between the points A i and B i (i = 1, 2, 3) is defined by the prismatic joint variable r i . The architecture of the 3-SPR PM is similar to that of the 3-RPS PM except that the order of the joints in each leg is reversed. The architecture of the 3-RPS PM is recalled in Figure 2, where the revolute joints are attached to the fixed triangular base of circumradius h 1 while the spherical joints are attached to the moving platform of circumradius h 2 .

Constraint equations of the 3-SPR parallel manipulator

The homogeneous coordinates of A i and B i in the frames Σ 0 and Σ 1 , respectively, are expressed as follows:

r 0 A1 = [1, h 1 , 0, 0] T , r 0 A2 = [1, -h 1 /2, -√3 h 1 /2, 0] T , r 0 A3 = [1, -h 1 /2, √3 h 1 /2, 0] T
r 1 B1 = [1, h 2 , 0, 0] T , r 1 B2 = [1, -h 2 /2, -√3 h 2 /2, 0] T , r 1 B3 = [1, -h 2 /2, √3 h 2 /2, 0] T    (1)

To express the coordinates of B i in the frame Σ 0 , a coordinate transformation matrix must be used.
In this context, the Study parametrization of a spatial Euclidean transformation matrix M ∈ SE(3) is utilized and is represented as: M = x 0 2 + x 1 2 + x 2 2 + x 3 2 0 T 3×1 M T M R , M T =     -2 x 0 y 1 + 2 x 1 y 0 -2 x 2 y 3 + 2 x 3 y 2 -2 x 0 y 2 + 2 x 1 y 3 + 2 x 2 y 0 -2 x 3 y 1 -2 x 0 y 3 -2 x 1 y 2 + 2 x 2 y 1 + 2 x 3 y 0     , M R =     x 0 2 + x 1 2 -x 2 2 -x 3 2 -2 x 0 x 3 + 2 x 1 x 2 2 x 0 x 2 + 2 x 1 x 3 2 x 0 x 3 + 2 x 1 x 2 x 0 2 -x 1 2 + x 2 2 -x 3 2 -2 x 0 x 1 + 2 x 3 x 2 -2 x 0 x 2 + 2 x 1 x 3 2 x 0 x 1 + 2 x 3 x 2 x 0 2 -x 1 2 -x 2 2 + x 3 2     (2) where M T and M R represent the translational and rotational parts of the transformation matrix M respectively. The parameters x i , y i , i ∈ {0, ..., 3} are called the Study parameters. Matrix M maps every displacement SE(3) to a point in a 7dimensional projective space P 7 and this mapping is known as Study s kinematic mapping. An Euclidean transformation will be represented by a point P∈ P 7 if and only if the following equation and inequality are satisfied: x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 ( 3 ) x 0 2 + x 1 2 + x 2 2 + x 3 2 = 0 (4) All the points that satisfy equation ( 3) belong to the 6-dimensional Study quadric. The points that do not satisfy the inequality (4) lie in the exceptional generator x 0 = x 1 = x 2 = x 3 = 0. To derive the constraint equations, we can express the direction of the vectors s 1 , s 2 and s 3 in homogeneous coordinates in frame Σ 1 as: s 1 1 = [1, 0, -1, 0] T , s 1 2 = [1, - 1 2 √ 3 , 1 2 , 0] T , s 1 3 = [1, 1 2 √ 3 , 1 2 , 0] T (5) In the fixed coordinate frame Σ 0 , B i and s i can be expressed using the transformation matrix M : r 0 B i = M r 1 B i ; s 0 i = M s 1 i i = 1, 2, 3 (6) As it is clear from the manipulator architecture, the vector along A i B i , namely r 0 B i -r 0 A i is orthogonal to the axis s i of the i-th revolute joint which after simplification yields the following three equations: (r 0 B i -r 0 A i ) T s i = 0 =⇒    g 1 := x 0 x 3 = 0 g 2 := h 1 x 1 2 -h 1 x 2 2 -2 x 0 y 1 + 2 x 1 y 0 + 2 x 2 -2 x 3 y 2 = 0 g 3 := 2 h 1 x 0 x 3 + h 1 x 1 x 2 + x 0 y 2 + x 1 y 3 -x 2 y 0 -x 3 y 1 = 0 (7) The actuation of prismatic joints leads to three additional constraint equations. The Euclidean distance between A i and B i must be equal to r i for the i-th leg of the manipulator. As a result, A i B i 2 = r 2 i leads to three additional equations g 4 = g 5 = g 6 = 0, which are quite lengthy and are not displayed in this paper due to space limitation. Two other equations are considered such that the solution represents a transformation in SE(3). The study-equation g 7 = 0 in Equation (3) constrains the solutions to lie on the Study quadric. g 8 = 0 is the normalization equation respecting the inequality [START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF]. Solving these eight constraint equations provides the direct kinematic solutions for the 3-SPR parallel manipulator. g 7 := x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 ; g 8 := x 0 2 + x 1 2 + x 2 2 + x 3 2 -1 = 0 (8) Operation modes Algebraic geometry offers an organized and an effective methodology to deal with the eight constraint equations. 
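To make the origin of the polynomials g_i concrete before forming the ideal, the derivation above can be reproduced symbolically. The following SymPy sketch is our own illustration (the helper names, and the use of only the rotational part for the axis directions, are our choices, not code from the paper): it assembles the Study transformation of Eq. (2), maps the platform points of Eq. (1) and the revolute-joint axes of Eq. (5) to the base frame, and evaluates the orthogonality conditions; the paper's Eq. (7) is obtained from these expressions after the simplification step mentioned above.

# Sketch: symbolic derivation of the 3-SPR geometric constraints (our code, not the authors').
import sympy as sp

x0, x1, x2, x3, y0, y1, y2, y3, h1, h2 = sp.symbols('x0 x1 x2 x3 y0 y1 y2 y3 h1 h2')

# Rotational and translational parts of the Study transformation, Eq. (2)
MR = sp.Matrix([
    [x0**2 + x1**2 - x2**2 - x3**2, -2*x0*x3 + 2*x1*x2,             2*x0*x2 + 2*x1*x3],
    [ 2*x0*x3 + 2*x1*x2,             x0**2 - x1**2 + x2**2 - x3**2, -2*x0*x1 + 2*x3*x2],
    [-2*x0*x2 + 2*x1*x3,             2*x0*x1 + 2*x3*x2,             x0**2 - x1**2 - x2**2 + x3**2]])
MT = sp.Matrix([-2*x0*y1 + 2*x1*y0 - 2*x2*y3 + 2*x3*y2,
                -2*x0*y2 + 2*x1*y3 + 2*x2*y0 - 2*x3*y1,
                -2*x0*y3 - 2*x1*y2 + 2*x2*y1 + 2*x3*y0])

# Base points A_i, platform points B_i (Eq. 1) and revolute axes s_i (Eq. 5), non-homogeneous parts
A = [sp.Matrix([h1, 0, 0]), sp.Matrix([-h1/2, -sp.sqrt(3)*h1/2, 0]), sp.Matrix([-h1/2, sp.sqrt(3)*h1/2, 0])]
B = [sp.Matrix([h2, 0, 0]), sp.Matrix([-h2/2, -sp.sqrt(3)*h2/2, 0]), sp.Matrix([-h2/2, sp.sqrt(3)*h2/2, 0])]
S = [sp.Matrix([0, -1, 0]),
     sp.Matrix([-sp.sqrt(3)/2, sp.Rational(1, 2), 0]),
     sp.Matrix([ sp.sqrt(3)/2, sp.Rational(1, 2), 0])]

Delta = x0**2 + x1**2 + x2**2 + x3**2   # homogenising factor, set to 1 by g8

for Ai, Bi, si in zip(A, B, S):
    Bi0 = MT + MR*Bi          # B_i expressed in the base frame (scaled by Delta)
    si0 = MR*si               # revolute axis in the base frame (rotation only, for a direction)
    gi = sp.expand((Bi0 - Delta*Ai).dot(si0))   # orthogonality of A_iB_i and s_i
    print(sp.factor(gi))

With these constraint polynomials (and g4–g8) in hand, the ideal-theoretic treatment proceeds as follows.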
A polynomial ideal consisting of equations g i (i = 1, ..., 8) is defined with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[h 1 , h 2 , r 1 , r 2 , r 3 ] as follows: I =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 > (9) The vanishing set or the variety V (I) of this ideal I consists of the solution to direct kinematics as points in P 7 . However, in this context, only the number of operation modes are of concern irrespective of the joint variable values. Hence, the sub-ideal independent of the prismatic joint length, r i is considered: J =< g 1 , g 2 , g 3 , g 7 > (10) The primary decomposition of ideal J is calculated to obtain three simpler ideals J i (i = 1, 2, 3). The intersection of the resulting primary ideals returns the ideal J . From a geometrical viewpoint, the variety V (J ) can be written as the union of the varieties of the primary ideals V (J i ), i = 1, 2, 3 [START_REF] Cox | Shea: Ideals, Varieties, and Algorithms (Series: An Introduction to Computational Algebraic Geometry and Commutative Algebra[END_REF]. J = 3 i=1 J i or V (J ) = 3 i=1 V (J i ) (11) Among the three primary ideals obtained as a result of primary decomposition, it is important to note that J 1 and J 2 contain x 0 and x 3 as their first elements, respectively. The third ideal, J 3 is obtained as J 3 =< x 0 , x 1 , x 2 , x 3 > and is discarded as the variety V (J 3 ∪ g 8 ) is null over the field of interest C. As a result, the 3-SPR PM has two operation modes, represented by x 0 = 0 and x 3 = 0. In fact, g 1 = 0 in Equation [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF] shows the presence of these two operation modes. It is noteworthy that the 3-RPS PM also has two operation modes as described in [START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF]. The analysis is completed by adding the remaining constraint equations to the primary ideals J 1 and J 2 . Accordingly, two ideals K 1 and K 2 are obtained. As a consequence, the ideals K i correspond to the two operation modes and can be studied separately. K i = J i ∪ < g 4 , g 5 , g 6 , g 8 > i = 1, 2 (12) The system of equations in the ideals K 1 and K 2 can be solved for a particular set of joint variables to obtain the Study parameters and hence the pose of the manipulator. These Study parameters can be substituted back in equation ( 2) to obtain the transformation matrix M. According to the theorem o f Chasles this matrix now rep-resents a discrete screw motion from the identity position (when the fixed frame Σ 0 and the moving frame Σ 1 coincide) to the moving-platform pose. The displacement about the corresponding discrete screw axis (DSA) defines the pose of the moving platform. 4.1 Ideal K 1 : Operation mode 1 : x 0 = 0 For operation mode 1, the moving platform is always found to be displaced about a DSA by 180 degrees [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using Euler parameter quaternions and algebraic geometry method[END_REF]. Substituting x 0 = 0 and solving for y 0 , y 1 , y 3 from the ideal K 1 shows that the translational motions can be parametrized by y 2 and the rotational motions by x 1 , x 2 and x 3 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. 4.2 Ideal K 2 : Operation mode 2 : x 3 = 0 For operation mode 2, the moving platform is displaced about a DSA with a rotation angle α calculated from cos( α 2 ) = x 0 . 
It is interesting to note that the DSA in this case is always parallel to the xy-plane [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using Euler parameter quaternions and algebraic geometry method[END_REF]. Substituting x 3 = 0 and solving for y 0 , y 2 , y 3 from the ideal K 2 shows that the translational motions can be parametrized by y 1 and the rotational motions by x 0 , x 1 and x 2 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. Singularity analysis The Jacobian of the 3-SPR manipulator in this context is defined as J i and the manipulator reaches a singular position when its determinant vanishes.: J i = ∂ g j ∂ x k , ∂ g j ∂ y k where i = 1, 2 ; j = 1, ..., 8 ; k = 0, ..., 3 (13) Actuation and constraint singularities Computing the determinant S i : det(J i ) results in a hyper-variety of degree 8 in both the operation modes: S 1 : x 3 • p 7 (x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ) = 0 and S 2 : x 0 • p 7 (x 0 , x 1 , x 2 , y 0 , y 1 , y 2 , y 3 ) = 0 (14) The 7 degree polynomials describe the actuation singularities when the prismatic joints are actuated and that exist within each operation mode whereas x 0 = x 3 = 0 describes the constraint singularity that exhibits the transition between K 1 and K 2 . Singularity Loci The actuation singularities can be expressed in the orientation workspace by parametrizing the orientation of the platform in terms of Euler angles. In particular, the Study parameters can be expressed in terms of the Euler angles azimuth (φ ), tilt (θ ) and torsion (ψ) [?]: x 0 = cos( θ 2 )cos( φ 2 + ψ 2 ) x 1 = sin( θ 2 )cos( φ 2 - ψ 2 ) x 2 = sin( θ 2 )sin( φ 2 - ψ 2 ) x 3 = cos( θ 2 )sin( φ 2 + ψ 2 ) (15) Since K 1 and K 2 are characterized by x 0 = 0 and x 3 = 0, substituting them in equation ( 15) makes the torsion angle (ψ) null, verifying the fact that, like its 3-RPS counterpart, the 3-SPR parallel manipulator is a zerotorsion manipulator. Accordingly, the x i parameters can be written in terms of tilt(θ ) and azimuth(φ ) only. The following method is used to calculate the determinant of J i in terms of θ , φ and Z, the altitude of the moving platform from the fixed base. The elements of the translational part M T of matrix M in equation ( 2) are considered as M T = [X,Y, Z] T that represent the translational displacement in the coordinate axes x, y and z respectively. Then, the constraint equations are derived in terms of X,Y, Z, x 0 , x 1 , x 2 , x 3 , r 1 , r 2 , r 3 . From these equations, the variables X,Y, r 1 , r 2 and r 3 are expressed as a function of Z and x i and are substituted in the determinant of the Jacobian. Finally, the corresponding x i are expressed in terms of Euler angles, which yields a single equation describing the actuation singularity of the 3-SPR PM in terms of Z, θ and φ . Fixing the value of Z and plotting the determinant of the Jacobian for φ ∈ [-180 0 , 180 0 ] and θ ∈ [0 0 , 180 0 ] depicts the singularity loci. The green curves in figure 3(a) and 3(b) show the singularity loci for operation mode 1 and operation mode 2 respectively with h 1 = 1, h 2 = 2 and Z = 1. Maximum Inscribed Circle Radius for 3-RPS and 3-PRS PMs From the home position of the manipulator (θ = φ = 0), a circle is drawn that has the maximum tilt value for any azimuth within the singularity-free region [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF]. The radius of this circle is called the Maximum Inscribed Circle Radius (MICR). 
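A pose-by-pose numerical search is one straightforward way to obtain such a radius. The sketch below is our own illustration of that workflow, not the authors' code: study_orientation implements Eq. (15) and would typically be used inside singularity_value, which is a placeholder for the determinant expression derived above and must be supplied by the user; the grid resolutions are arbitrary.

# Sketch: numerical estimate of the MICR for a given altitude Z (assumed workflow, not the authors' code)
import numpy as np

def study_orientation(phi, theta, psi=0.0):
    # Euler angles (azimuth, tilt, torsion), in radians, to (x0, x1, x2, x3) of Eq. (15)
    return np.array([np.cos(theta/2)*np.cos((phi + psi)/2),
                     np.sin(theta/2)*np.cos((phi - psi)/2),
                     np.sin(theta/2)*np.sin((phi - psi)/2),
                     np.cos(theta/2)*np.sin((phi + psi)/2)])

def micr(Z, singularity_value, d_theta=np.radians(0.5), n_phi=720):
    # Largest tilt such that no azimuth in [-pi, pi] meets the singularity locus,
    # sweeping outwards from the home position theta = phi = 0
    phis = np.linspace(-np.pi, np.pi, n_phi)
    ref_sign = np.sign(singularity_value(Z, 1e-6, 0.0))   # sign of the determinant near the home pose
    theta = 0.0
    while theta + d_theta < np.pi:
        theta += d_theta
        vals = np.array([singularity_value(Z, theta, phi) for phi in phis])
        if np.any(np.sign(vals) != ref_sign):
            return np.degrees(theta - d_theta)   # previous tilt ring was still singularity-free
    return np.degrees(theta)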
In Figure 3, the red circle denotes the maximum inscribed circle, where the value of MICR is expressed in degrees. The MICR is used as a basis to compare the 3-SPR and the 3-RPS parallel manipulators, as they are analogous to each other in aspects such as the number of operation modes and the direct kinematics. The 3-SPR PM has higher MICR values, and hence larger singularity-free regions, than the 3-RPS PM, in agreement with [START_REF] Lukanin | Inverse Kinematics, Forward Kinematics and Working Space Determination of 3-DOF Parallel Manipulator with S-P-R Joint Structure[END_REF][START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF]. For the 3-RPS parallel manipulator, there is hardly any difference between the MICR values of the two operation modes, whereas for the 3-SPR PM, the second operation mode has higher MICR values than operation mode 1. The MICR ranges from 0° to 130° in operation mode 1, but from 0° to 160° in operation mode 2, for the 3-SPR PM. In addition, for the 3-RPS PM, the ratio h 1 : h 2 influences operation mode 1 more than operation mode 2; the MICR values have a smaller range over the different ratios in operation mode 2. On the contrary, for the 3-SPR PM, high MICR values are observed in operation mode 2 for low ratios of h 1 : h 2 . Therefore, the MICR plots can be exploited to choose the ratio of the platform to the base in accordance with the required application.

Conclusions

In this paper, the 3-RPS and 3-SPR parallel manipulators were compared based on their operation modes and singularity-free workspace. First, the operation modes of the 3-SPR PM were enumerated. It turns out that the 3-SPR parallel manipulator has two operation modes, like the 3-RPS PM. The parallel singularities were computed for both manipulators and the singularity loci were plotted in their orientation workspace. Furthermore, an index called the singularity-free maximum inscribed circle radius was defined. The MICR was plotted as a function of the Z coordinate of the moving platform for different ratios of the platform circumradius to the base circumradius. It shows that, compared to the 3-RPS PM, the 3-SPR PM has higher MICR values and hence a larger singularity-free workspace at a given altitude. For given ratios of the platform to base size, higher MICR values are observed in operation mode 2 than in operation mode 1 for the 3-SPR mechanism, and vice versa for the 3-RPS mechanism. Indeed, the singularity-free MICR curves open up many design possibilities for both mechanisms suited to a particular application. It would also be interesting to plot the MICR curves for the constraint singularities and for other actuation modes of the 3-RPS and 3-SPR manipulators, and to consider the parasitic motions of the moving platform within the maximum inscribed circles. An investigation of the MICR not started from the identity orientation (θ = φ = 0 degrees) has to be considered too. Future work will deal with those issues.

Fig. 3: 3-SPR singularity loci and the maximum inscribed singularity-free circle. (a) Operation mode 1. (b) Operation mode 2.

MICR vs. Z/h 1 is plotted in Figures 4 and 5 for different ratios of h 2 : h 1 . The maximum value of MICR is limited to 160 degrees in all the figures, and Z/h 1 varies from 0 to 4 while eight ratios of h 2 : h 1 are considered. The data cursors in Figures 5(a) and 5(b) correspond to the red circles with MICR = 25.22 and 30.38 degrees in Figures 3(a) and 3(b), respectively.
The MICR plots give useful information on the design choice of 3-RPS or 3-SPR parallel manipulators.

Fig. 4: MICR vs. Z/h 1 for the 3-RPS manipulator. (a) Operation mode 1. (b) Operation mode 2.
Fig. 5: MICR vs. Z/h 1 for the 3-SPR manipulator. (a) Operation mode 1. (b) Operation mode 2.
20,967
[ "1307880", "16879", "10659" ]
[ "111023", "473973", "481388" ]
01758038
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758038/file/ARK2016_Gagliardini_Gouttefarde_Caro_DynamicFeasibleWorkspace.pdf
Lorenzo Gagliardini email: [email protected] Marc Gouttefarde email: [email protected] S Caro Determination of a Dynamic Feasible Workspace for Cable-Driven Parallel Robots Keywords: Cable-Driven Parallel Robots, Workspace Analysis, Dynamic Feasible Workspace come L'archive ouverte pluridisciplinaire Introduction Several industries, e.g. the naval and renewable energy industries, are facing the necessity to manufacture novel products of large dimensions and complex shapes. In order to ease the manufacturing of such products, the IRT Jules Verne promoted the investigation of new technologies. In this context, the CAROCA project aims at investigating the performance of Cable Driven Parallel Robots (CDPRs) to manufacture large products in cluttered industrial environments [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Gagliardini | A reconfigurable cable-driven parallel robot for sandblasting and painting of large structures[END_REF]. CDPRs are a particular class of parallel robots whose moving platform is connected to the robot fixed base frame by a number of cables as illustrated in Fig. 1. CDPRs have several advantages such as a high payload-to-weight ratio, a potentially very large workspace, and possibly reconfiguration capabilities. The equilibrium of the moving platform of a CDPR is classically investigated by analyzing the CDPR workspace. In serial and rigid-link parallel robots, the workspace is commonly defined as the set of end-effector poses where a number of kinematic constraints are satisfied. In CDPRs, the workspace is usually defined as the set of poses where the CDPR satisfies one or more conditions including the static or the dynamic equilibrium of the moving platform, with the additional constraint of non-negative cable tensions. Several workspaces and equilibrium conditions have been studied in the literature. The first investigations focused on the static equilibrium and the Wrench Closure Workspace (WCW) of the moving platform, e.g. [START_REF] Fattah | Workspace and design analysis of cable-suspended planar parallel robots[END_REF][START_REF] Gouttefarde | Analysis of the wrench-closure workspace of planar parallel cable-driven mechanisms[END_REF][START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF][START_REF] Stump | Workspaces of cable-actuated parallel manipulators[END_REF][START_REF] Verhoeven | Advances in Robot Kinematics, chap. Estimating the controllable workspace of tendon-based Stewart platforms[END_REF]. Since cables can only pull on the moving platform, a pose belongs to the WCW if and only if any wrench can be applied by means of non-negative cable tensions. Feasible equilibria of the moving platform can also be analyzed using the Wrench Feasible Workspace (WFW) [START_REF] Bosscher | Wrench-feasible workspace generation for cabledriven robots[END_REF][START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Interval-analysis-based determination of the wrenchfeasible workspaceof parallel cable-driven robots[END_REF]. By definition, the WFW is the set of wrench feasible platform poses where a pose is wrench feasible when the cables can balance a given set of external moving platform wrenches while maintaining the cable tensions in between given lower and upper bounds. 
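In practice, a direct way to test the wrench feasibility of a single pose is to solve one small linear feasibility program per vertex of the required wrench set; by convexity, feasibility at the vertices implies feasibility over the whole set. The sketch below is our own illustration of such a vertex-wise check (it is not the hyperplane-shifting method used later in this paper, nor the authors' code); W is the 6 × m wrench matrix at the pose under test, and the sign convention is that W τ must equal the wrench the cables have to produce.

# Sketch: vertex-wise wrench-feasibility test for one pose (illustrative, not the authors' code)
import numpy as np
from scipy.optimize import linprog

def pose_is_wrench_feasible(W, wrench_vertices, tau_min, tau_max):
    m = W.shape[1]
    for w in wrench_vertices:
        # Feasibility problem: find tau in [tau_min, tau_max]^m with W tau = w
        res = linprog(c=np.zeros(m), A_eq=W, b_eq=w,
                      bounds=[(tau_min, tau_max)] * m, method="highs")
        if not res.success:
            return False
    return True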
The Static Feasible Workspace (SFW) is a special case of the WFW, where the sole wrench induced by the moving platform weight has to be balanced [START_REF] Pusey | Design and workspace analysis of a 6-6 cablesuspended parallel robot[END_REF]. The lower cable tension bound, τ min , is defined in order to prevent the cables from becoming slack. The upper cable tension bound, τ max , is defined in order to prevent the CDPR from being damaged. The dynamic equilibrium of the moving platform can be investigated by means of the Dynamic Feasible Workspace (DFW). By definition, the DFW is the set of dynamic feasible moving platform poses. A pose is said to be dynamic feasible if a prescribed set of moving platform accelerations is feasible, with cable tensions lying in between given lower and upper bounds. The concept of dynamic workspace has already been investigated in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF] for planar CDPRs. Barrette et al. solved the dynamic equations of a planar CDPR analytically, providing the possibility to compute the boundary of the DFW. This strategy cannot be directly applied to spatial CDPRs due to the complexity of their dynamic model. In 2014, Kozlov studied in [START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF] the possibility to investigate the DFW by using a tool developed by Guay et al. for the analysis of the WFW [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. However, the dynamic model proposed by Kozlov considers the moving platform as a point mass, neglecting centrifugal and Coriolis forces. This paper deals with a more general definition of the DFW. With respect to the definitions proposed in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF][START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF], the DFW considered in the present paper takes into account: (i) The inertia of the moving platform; (ii) The external wrenches applied on the moving platform; (iii) The centrifugal and the Coriolis forces corresponding to a given moving platform twist. The Required Wrench Set (RWS), defined here as the set of wrenches that the cables have to apply on the moving platform in order to satisfy its dynamic equilibrium, is calculated as the sum of these three contributions to the dynamic equilibrium. Then, the corresponding DFW is computed by means of the algorithm presented in [START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF] to analyze the WFW. Dynamic Model The CDPR dynamic model considered in this paper consists of the dynamics of the moving platform. A dynamic model taking into account the dynamics of the winches could also be considered but is not used here due to space limitations. Additionally, assuming that the diameters of the cables and the pulleys are small, the dynamics of the pulleys and the cables is neglected. 
The dynamic equilibrium of the moving platform is described by the following equation Wτ - I p p -C ṗ + w e + w g = 0 ( 1 ) where W is the wrench matrix that maps the cable tension vector τ into a platform wrench, and ṗ = ṫ ω p = ẗ α , (2) where ṫ = [ṫ x , ṫy , ṫz ] T and ẗ = [ẗ x , ẗy , ẗz ] T are the vectors of the moving platform linear velocity and acceleration, respectively, while ω = [ω x , ω y , ω z ] T and α = [α x , α y , α z ] T are the vectors of the moving platform angular velocity and acceleration, respectively. The external wrench w e is a 6-dimensional vector expressed in the fixed reference frame F b and takes the form w e = f T e , m T e T = [ f x , f y , f z , m x , m y , m z ] T (3) f x , f y and f z are the x, y and z components of the external force vector f e . m x , m y and m z are the x, y and z components of the external moment vector m e , respectively. The components of the external wrench w e are assumed to be bounded as follows f min ≤ f x , f y , f z ≤ f max (4) m min ≤ m x , m y , m z ≤ m max (5) According to ( 4) and ( 5), the set [w e ] r , called the Required External Wrench Set (REWS), that the cables have to balance is a hyper-rectangle. The Center of Mass (CoM) of the moving platform, G, may not coincide with the origin of the frame F p attached to the platform. The mass of the platform being denoted by M, the wrench w g due to the gravity acceleration g is defined as follows w g = MI 3 M Ŝp g ( 6 ) where I 3 is the 3 × 3 identity matrix, MS p = R [Mx p , My p , Mz p ] T is the first momentum of the moving platform defined with respect to frame F b . The vector S p = [x p , y p , z p ] T defines the position of G in frame F p . M Ŝp is the skew-symmetric matrix associated to MS p . The matrix I p represents the spatial inertia of the platform I p = MI 3 -M Ŝp M Ŝp I p (7) where I p is the inertia tensor matrix of the moving platform, which can be computed by the Huygens-Steiner theorem from the moving platform inertia tensor, I g , defined with respect to the platform CoM I p = RI g R T - M Ŝp M Ŝp M (8) R is the rotation matrix defining the moving platform orientation and C is the matrix of the centrifugal and Coriolis wrenches, defined as C ṗ = ω ωMS p ωI p ω ( 9 ) where ω is the skew-symmetric matrix associated to ω. 3 Dynamic Feasible Workspace Standard Dynamic Feasible Workspace Studies on the DFW have been realised by Barrette et al. in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF]. The boundaries of the DFW have been computed for a generic planar CDPR developing the equations of its dynamic model. Since this method cannot be easily extended to spatial CDPRs, Kozlov proposed to use the method described in [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF] in order to compute the DFW of a fully constrained CDPR [START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF]. The proposed method takes into account the cable tension limits τ min and τ max in checking the feasibility of the dynamic equilibrium of the moving platform for the following bounded sets of accelerations ẗmin ≤ ẗ ≤ ẗmax (10) α min ≤ α ≤ α max (11) where ẗmin , ẗmax , α min , α max are the bounds on the moving platform linear and rotational accelerations. These required platform accelerations define the so-called Required Acceleration Set (RAS), [ p] r . 
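For a given pose and twist, the rigid-body terms entering Eq. (1) can be evaluated numerically before any feasibility test. The sketch below uses our own notation and helper names (it is not the authors' code) to assemble I_p, C ṗ and w_g from Eqs. (6)-(9):

# Sketch: rigid-body terms of the moving-platform dynamic equilibrium (our notation)
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def platform_terms(M, S_p, I_g, R, omega, g=np.array([0.0, 0.0, -9.81])):
    # M: platform mass, S_p: CoM position in the platform frame, I_g: inertia tensor about the CoM,
    # R: orientation matrix of the platform, omega: its angular velocity
    MSp = M * (R @ S_p)                                   # first momentum in the fixed frame
    MSp_hat = skew(MSp)
    Ip3 = R @ I_g @ R.T - MSp_hat @ MSp_hat / M           # Eq. (8)
    I_p = np.block([[M*np.eye(3), -MSp_hat],
                    [MSp_hat,      Ip3     ]])            # Eq. (7)
    C_pdot = np.concatenate([skew(omega) @ skew(omega) @ MSp,
                             skew(omega) @ Ip3 @ omega])  # Eq. (9)
    w_g = np.concatenate([M*g, MSp_hat @ g])              # Eq. (6)
    return I_p, C_pdot, w_g

# From Eq. (1), the wrench the cables must produce for a required acceleration pddot and an
# external wrench w_e is then:  w_required = I_p @ pddot + C_pdot - w_e - w_g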
The RAS can be projected into the wrench space by means of matrix I p , defined in [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF]. The set of wrenches [w d ] r generated by this linear mapping is defined as the Required Dynamic Wrench Set (RDWS). No external wrench is applied to the moving platform. Accordingly, the DFW is defined as follows Definition 1. A moving platform pose is said to be dynamic feasible when the moving platform of the CDPR can reach any acceleration included in [ p] r according to cable tension limits expressed by [τ] a . The Dynamic Feasible Workspace is then the set of dynamic feasible poses, [p] DFW . [ p] DFW = (t, R) ∈ R 3 × SO(3) : ∀ p ∈ [ p] r , ∃τ ∈ [τ] a s.t. Wτ -A p = 0 (12) In the definition above, the set of Admissible Cable Tensions (ACT) is defined as [τ] a = {τ | τ min ≤ τ i ≤ τ max , i = 1, . . . , m} (13) Improved Dynamic Feasible Workspace The DFW described in the previous section has several limitations. The main drawback is associated to the fact that the proposed DFW takes into account neither the external wrenches applied to the moving platform nor its weight. Furthermore, the model used to verify the dynamic equilibrium of the moving platform neglects the Coriolis and the centrifugal wrenches associated to the CDPR dynamic model. At a given moving platform pose, the cable tensions should compensate both the contribution associated to the REWS, [w e ] r , and the RDWS, [w d ] r . The components of the REWS are bounded according to (4) and ( 5) while the components of the RDWS are bounded according to [START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF] and [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The dynamic equilibrium of the moving platform is described by [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF], where C is related to the Coriolis and centrifugal forces of the moving platform and w g to its weight. These terms depend only on the pose and the twist of the moving platform. For given moving-platform pose and twist, these terms are constant. Therefore, the DFW definition can be modified as follows. Definition 2. A moving platform pose is said to be dynamic feasible when, for a given twist ṗ, the CDPR can balance any external wrench w e included in [w e ] r , while the moving platform can assume any acceleration p included in [ p] r . The Dynamic Feasible Workspace is the set of dynamic feasible poses, [p] DFW . [p] DFW : ∀w e ∈ [w e ] r , ∀ p ∈ [ p] r , ∃τ ∈ [τ] a s.t. Wτ -I p p-C ṗ+w e +w g = 0 (14) In this definition, we may note that the feasibility conditions are expressed according to three wrench space sets. The first set, [w d ] r , can be computed by projecting the vertices of [ p] r into the wrench space. For a 3-dimensional case study (6 DoF case), [ p] r consists of 64 vertices. The second component, [w e ] r , consists of 64 vertices as well. Considering a constant moving platform twist, the last component of the dynamic equilibrium, w c = {C ṗ + w g }, is a constant wrench. The composition of these sets generates a polytope, [w] r , defined as the Required Wrench Set (RWS). [w] r can be computed as the convex hull of the Minkowski sum over [w e ] r , [w d ] r and w c , as illustrated in Fig. 2: [w] r = [w e ] r ⊕ [w d ] r ⊕ w c (15) Thus, Def. 2 can be rewritten as a function of [w] r . Definition 3. 
A moving platform pose is said to be dynamic feasible when the CDPR can balance any wrench w included in [w] r . The Dynamic Feasible Workspace is the set of dynamic feasible poses, [p] DFW . [p] DFW : ∀w ∈ [w] r , ∃τ ∈ [τ] a s.t. Wτ -I p p + w e + w c = 0 (16) The mathematical representation in ( 16) is similar to the one describing the WFW. As a matter of fact, from a geometrical point of view, a moving platform pose will be dynamic feasible if [w] r is fully included in [w] a [w] r ⊆ [w] a (17) Consequently, the dynamic feasibility of a pose can be verified by means of the hyperplane shifting method [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF][START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The distances between the facets of the avail- -100 N ≤ f x , f y , f z ≤ 100 N (19) -1 Nm ≤m x , m y , m z ≤ 1 Nm (20) Similarly, the range of accelerations of the moving platform is limited according to the following inequalities: -2 m/s 2 ≤ ẗx , ẗy , ẗz ≤ 2 m/s 2 (21) -0.1 rad/s 2 ≤α x , α y , α z ≤ 0.1 rad/s 2 (22) For the foregoing conditions, the improved DFW of the CDPR covers the 47.96% of its volume. Figure 4(a) illustrates the improved DFW of the CDPR under study. The results have been compared with respect to the dynamic feasibility conditions described by Def. 1. By considering only the weight and the inertia of the moving platform, the DFW covers the 63.27% of the volume occupied by the DFW, as shown in Fig. 4(b). Neglecting the effects of the external wrenches and the Coriolis forces, the volume of the DFW is 32% larger than the the volume of the improved DFW. Similarly, by neglecting the inertia of the CDPR and taking into account only the external wrenches w e , the WFW occupies the 79.25% of the CDPR volume. By taking into account only the weight of the moving platform, the SFW covers 99.32% of the CDPR volume. These results are summarized in Tab. 1. Conclusion This paper introduced an improved dynamic feasible workspace for cable-driven parallel robots. This novel workspace takes into account: (i) The inertia of the moving platform; (ii) The external wrenches applied on the moving platform and (iii) The centrifugal and the Coriolis forces induced by a constant moving platform twist. As an illustrative example, the static, wrench-feasible, dynamic and improved dynamic workspaces of a spatial suspended cable-driven parallel robot, with the dimensions of a prototype developed in the framework of the IRT JV CAROCA project, are traced. It turns out that the IDFW of the CDPR under study is respectively 1.32 times, 1.65 times and 2.07 times smaller than its DFW, WFW and SFW. Fig. 1 1 Fig. 1 Example of a CDPR design created in the framework of the IRT JV CAROCA project. Fig. 2 2 Fig. 2 Computation of the RWS [w] r . Example of a planar CDPR with 3 actuators and 2 translational DoF. Fig. 3 3 Fig.3Layout of the CoGiRo cable-suspended parallel robot[START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] with the size of the IRT JV CAROCA prototype. Fig. 4 4 Fig. 4 (a) Improved DFW and (b) DFW of the CDPR under study covering 47.96% and 63.27% of its volume, respectively. Table 1 1 Comparison of SFW , W FW , DFW and IDFW of the CDPR under study. 
Workspace type               SFW       WFW       DFW       IDFW
Covered volume of the CDPR   99.32%    79.25%    63.27%    47.95%

Acknowledgements This research work is part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures). The authors wish to associate the industrial and academic partners of this project, namely, STX, DCNS, AIRBUS and CNRS.
17,565
[ "923232", "170861", "10659" ]
[ "235335", "388165", "481388" ]
01758077
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01758077/file/ARK2016_Platis_Rasheed_Cardou_Caro.pdf
Angelos Platis Tahir Rasheed Philippe Cardou Stéphane Caro Isotropic Design of the Spherical Wrist of a Cable-Driven Parallel Robot Keywords: Parallel mechanism, cable-driven parallel robot, parallel spherical wrist, wrenches, dexterity Because of their mechanical properties, parallel mechanisms are most appropriate for large payload to weight ratio or high-speed tasks. Cable driven parallel robots (CDPRs) are designed to offer a large translation workspace, and can retain the other advantages of parallel mechanisms. One of the main drawbacks of CD-PRs is their inability to reach wide ranges of end-effector orientations. In order to overcome this problem, we introduce a parallel spherical wrist (PSW) end-effector actuated by cable-driven omni-wheels. In this paper we mainly focus on the description of the proposed design and on the appropriate placement of the omni-wheels on the wrist to maximize the robot dexterity. Introduction Several applications could benefit from CDPRs endowed with large orientation workspaces, such as entertainment and manipulation and storage of large and heavy parts. This component of the workspace is relatively small in existing CDPR designs.To resolve this problem, a parallel spherical wrist (PSW) end-effector is introduced and connected in series with the translational 3-DOF CDPR to provide an unbounded singularity-free orientation workspace. IRCCyN, École Centrale de Nantes, 1 rue de la Noë, 44321, Nantes, France, e-mail: {Angelos.Platis, Tahir.Rasheed}@eleves.ec-nantes.fr Laboratoire de robotique, Département de génie mécanique, Université Laval, Quebec City, QC, Canada. e-mail: [email protected] CNRS-IRCCyN, 1 rue de la Noë, 44321, Nantes, France, e-mail: [email protected] 1 This paper focuses on the kinematic design and analyis of a PSW actuated by the cables of a CDPR providing the robot independent translation and orientation workspaces. CDPRs are generally capable of providing a large 3-dofs translation workspace, normally needed four cables, which enable the user to control the point where all of them are concentrated [START_REF] Bahrami | Optimal design of a spatial four cable driven parallel manipulator[END_REF], [START_REF] Hadian | Kinematic isotropic configuration of spatial cable-driven parallel robots[END_REF]. Robots that can provide large orientation workspace have been developed using spherical wrist in the past few years that allows the end-effector to rotate with unlimited rolling, in addition to a limited pitch and yaw movements [START_REF] Bai | Modelling of a spherical robotic wrist with euler parameters[END_REF], [START_REF] Wu | Dynamic modeling and design optimization of a 3-dof spherical parallel manipulator[END_REF]. Eclipse II [START_REF] Kim | Eclipse-ii: a new parallel mechanism enabling continuous 360-degree spinning plus three-axis translational motions[END_REF] is an interesting robot that can provide unbounded 3-dofs translational motions, however its orientation workspace is constrained by structural interference and rotation limits of the spherical joints. Several robots have been developed in the past having decoupled translation and rotational motions. One interesting concept of such a robot is that of the Atlas Motion Platform [START_REF] Hayes | Atlas motion platform: Full-scale prototype[END_REF] developed for simulation applications. Another robot with translation motions decoupled from orientation motions can be found in [START_REF] Yime | A novel 6-dof parallel robot with decoupled translation and rotation[END_REF]. 
The decoupled kinematics are obtained using a triple spherical joint in conjunction with a 3-UPS parallel robot. In order to design a CDPR with a large orientation workspace, we introduce a parallel spherical wrist (PSW) end-effector actuated by cable-driven omni-wheels. In this paper we mainly focus on the description of the proposed design and on the appropriate placement of the omni-wheels on the wrist to maximize the robot dexterity. Manipulator Architecture The end-effector is a sphere supported by actuated omni-wheels as shown in Fig. 1. The wrist contians three passive ball joints at the bottom and three active omniwheels being driven through drums. Each cable makes several loops around each drum. Both ends are connected to two servo-actuated winches, which are fixed to the base. When two servo-actuated winches connected to the same cable turn in the same direction, the cable circulates and drives the drum and its associated omniwheel. When both servo-actuated winches turn in opposite directions, the length of the cable loop changes, and the sphere centre moves. To increase the translation workspace of the CDPR, another cable is attached, which has no participation in the omni-wheels rotation. The overall design of the manipulator is shown in Fig. 2. We have in total three frames. First, the CDPR base frame (F 0 ), which is described by its center O 0 having coordinates x 0 , y 0 , z 0 . Second, the PSW base frame (F 1 ), which has its center O 1 at the geometric center of the sphere and has coordinates x 1 , y 1 , z 1 . Third, the spherical end-effector frame (F 2 ) is attached to the end-effector. Its centre O 2 coincides with that of the PSW base frame (O 2 ≡ O 1 ) and its coordinates are x 2 , y 2 , z 2 . Exit points A i are the cable attachment points that link the cables to the base. All exit points are fixed and expressed in the CDPR reference frame F 0 . Anchor points B i are the platform attachment points. These points are not fixed as they depend to winch #1 to winch #2 actuated omni-wheel passive ball joint drum to winch #7 Fig. 1: Isotropic design of the parallel spherical wrist on the vector P, which is the vector that contains the pose of the moving platform expressed in the CDPR reference frame F 0 . The remaining part of the paper aims at finding the appropriate placement of the omni-wheels on the wrist to maximise the robot dexterity. Kinematic Analysis of the Parallel Spherical Wrist Parameterization To simplify the parameterization of the parallel spherical wrist, some assumptions are made. First, all the omni-wheels are supposed to be normal the sphere. Second, the contact points of the omni-wheels with the sphere lie in the base of an inverted cone where its end is the geometrical center of the sphere parametrized by angle α. x 0 y 0 z 0 O 0 F 0 x 1 y 1 z 1 z 2 x 2 y 2 O 1,2 F 1 F 2 A 1 A 2 A 3 A 4 B 1 B 2 B 3 B 4 1 2 3 4 5 6 7 Fig. 2: Concept idea of the manipulator Third, the three contact points form an equilateral triangle as shown in [START_REF] Hayes | Atlas motion platform: Full-scale prototype[END_REF][START_REF] Hayes | Atlas motion platform generalized kinematic model[END_REF]. Fourth, the angle between the tangent to the sphere and the actuation force produced by the ith actuated omni-wheel is named β i , i = 1, 2, 3, and β 1 = β 2 = β 3 = β . Figure 3 illustrates the sphere, one actuated omni-wheel and the main design variables of the parallel spherical wrist. 
Π i is the plane tangent to the sphere and passing through the contact point G i between the actuated omni-wheel and the sphere. ω i denotes the angular velocity vector of the ith actuated omni-wheel. s i is a unit vector along the tangent line T that is tangent to the base of the cone and coplanar to plane Π i . w i is a unit vector normal to s i . f ai depicts the transmission force lying in plane Π i due to the actuated omni-wheel. α is the angle defining the altitude of contact points G i (α ∈ [0, π]). β is the angle between the unit vectors s and v i (β ∈ [-Π 2 , Π 2 ]). As the contact points G i are the corners of an equilateral triangle, the angle between the contact point G 1 and the contact points G 2 and G 3 is equal to γ. R is the radius of the sphere. r i is radius of the i th actuated omni-wheel. φi is the angular velocity of the omni-wheel. u i , v i , n i are unit vectors at point G i and i, j, k are unit vectors along x 2 , y 2 , z 2 respectively. In order to analyze the kinematic performance of the parallel spherical wrist, an equivalent parallel robot (Fig. 4) having six virtual legs is presented, each leg having a spherical, a prismatic and another spherical joints connected in series. Three legs have an actuated prismatic joint (green), whereas the other three legs have a locked prismatic joints (red). Here, the kinematics of the spherical wrist is analyzed with screw theory and an equivalent parallel robot represented in Fig. 4. Kinematic Modeling Fig. 4(a) represents the three actuation forces f ai , i = 1, 2, 3 and the three constraint forces f ci , i = 1, 2, 3 exerted by the actuated omni-wheels on the sphere. The three constraint forces intersect at the geometric center of the sphere and prevent the latter from translating. The three actuation forces generated by the three actuated omniwheels allow us to control the three-dof rotational motions of the sphere. Fig. 4(b) depicts a virtual leg corresponding to the effect of the ith actuated omni-wheel on the sphere. The kinematic model of the PSW is obtained by using the theory of reciprocal screws [START_REF] Ball | A treatise on the theory of screws[END_REF][START_REF] Hunt | Kinematic geometry of mechanisms[END_REF] as follows: A t = B φ ( 1 ) where t is the sphere twist, φ = φ1 φ2 φ3 T is the actuated omni-wheel angular velocity vector. A and B are respectively the forward and inverse kinematic Jacobian matrices of the PSW and take the form: A = A rω A rp 0 3×3 I 3 (2) B = I 3 0 3×3 (3) 0 G 2 G 3 G 1 B 2 B 3 B 1 A 2 A 3 A 1 Locked Prismatic Joint Actuated Prismatic Joint f a1 f a2 f a3 f c1 f c2 f c3 G A rω =   R(n 1 × v 1 ) T R(n 2 × v 2 ) T R(n 3 × v 3 ) T   and A rp =   v T 1 v T 2 v T 3   (4) As the contact points on the sphere form an equilateral triangle, γ = 2π/3. As a consequence, matrices A rω and A rp are expressed as functions of the design parameters α and β : A rω = R 2    -2CαCβ -2Sβ 2SαCβ CαCβ + √ 3Sβ Sβ - √ 3CαCβ 2SαCβ CαCβ - √ 3Sβ Sβ + √ 3CαCβ 2SαCβ    (5) A rp = 1 2    -2CαSβ 2Cβ 2SαSβ CαSβ - √ 3Cβ -( √ 3CαSβ +Cβ ) 2SαSβ CαSβ + √ 3Cβ √ 3CαSβ -Cβ 2SαSβ    (6) where C and S denote the cosine and sine functions, respectively. Singularity Analysis As matrix B cannot be rank deficient, the parallel spherical wrist meets singularities if and only if (iff) matrix A is singular. From Eqs. 
( 5) and ( 6), matrix A is singular Ι 0 G 2 G 3 G 1 f a1 f a2 f a3 (a) β = ±π/2 0 G 2 G 3 G 1 f a1 f a2 f a3 (b) α = π/2 and β = 0 det(A) = 3 √ 3 2 R 3 SαCβ (1 -S 2 αC 2 β ) = 0 (7) namely, if α = 0 or π; if β = ±π/2; if α = π/2 and β = 0 or ±π. Figs. 5a and 5b represent two singular configurations of the parallel spherical wrist under study. The three actuation forces f a1 , f a2 and f a3 intersect at point I in Fig. 5a. The PSW reaches a parallel singularity and gains an infinitesimal rotation (uncontrolled motion) about an axis passing through points O and I in such a configuration. The three actuation forces f a1 , f a2 and f a3 are coplanar with plane (X 1 OY 1 ) in Fig. 5b. The PSW reaches a parallel singularity and gains two-dof infinitesimal rotations (uncontrolled motions) about an axes that are coplanar with plane (X 1 OY 1 ) in such a configuration. Kinematically Isotropic Wheel Configurations This section aims at finding a good placement of the actuated omni-wheels on the sphere with regard to the manipulator dexterity. The latter is evaluated by the condition number of reduced Jacobian matrix J ω = rA -1 rω which maps angular velocities of the omni-wheels φ to the required angular velocity of the end-effector ω. From Eqs. ( 5) and ( 6), the condition number κ F (α, β ) of J ω based on the Frobenius norm [START_REF] Angeles | Fundamentals of Robotic Mechanical Systems: Theory, Methods and Algorithms[END_REF] is expressed as follows: Figure 6 depicts the inverse condition number of matrix A based on the Frobenius norm as a function of angles α and β . κ F (α, β ) is a minimum when its partial derivatives with respect to α and β vanish, namely, κ F (α, β ) = 1 3 3S 2 αC 2 β + 1 S 2 αC 2 β (1 -S 2 αC 2 β ) (8) κα (α, β ) = ∂ κ ∂ α = Cα(3S 2 αC 2 β -1)(S 2 αC 2 β + 1) 18S 3 αC 2 β (S 2 αC 2 β -1) 2 κ = 0 (9) κβ (α, β ) = ∂ κ ∂ β = - Sβ (3S 2 αC 2 β -1)(S 2 αC 2 β + 1) 18S 2 αC 3 β (S 2 αC 2 β -1) 2 κ = 0 ( 10 ) and its Hessian matrix is semi-positive definite. As a result, κ F (α, β ) is a minimum and equal to 1 along the hippopede curve, which is shown in Fig. 6 and defined by the following equation: 3S 2 αC 2 β -1 = 0 [START_REF] Yime | A novel 6-dof parallel robot with decoupled translation and rotation[END_REF] This hippopede curve amounts to the isotropic loci of the parallel spherical wrist. Figure 7 illustrates some placements of the actuated omni-wheels on the sphere leading to kinematically isotropic wheel configurations in the parallel spherical wrist. It should be noted that the three singular values of matrix A rω are equal to the ratio between the sphere radius R and the actuated omni-wheel radius r along the hippopede curve, namely, the velocity amplification factors of the PSW are the same and constant along the hippopede curve. If the rotating sphere were to carry a camera, a laser or a jet of some sort, then the reachable orientations would be limited by interferences with the omni-wheels. α = 35.26 • , β = 0 • α = 65 • , β = 50.43 • α = 50 • , β = 41.1 • α = 80 • , β = 54.11 • Fig. 7: Kinematically isotropic wheel configurations in the parallel spherical wrist Therefore, a designer would be interested in choosing a small value of alpha, so as to maximize the field of view of the PSW. As a result, the following values have been assigned to the design parameters α and β : α = 35.26 • (12) β = 0 • (13) in order to come up with a kinematically isotropic wheel configuration in the parallel spherical wrist and a large field of view. 
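This choice can be checked numerically. The short sketch below (our own code, with R = 1 for convenience) builds A_rω from Eq. (5) and confirms that its condition number equals 1 at α = 35.26°, β = 0°, i.e. on the hippopede of Eq. (11):

# Sketch: numerical check of the isotropy condition (Eq. 11) using A_r_omega of Eq. (5)
import numpy as np

def A_r_omega(alpha, beta, R=1.0):
    Ca, Sa, Cb, Sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    s3 = np.sqrt(3.0)
    return (R/2.0) * np.array([
        [-2*Ca*Cb,        -2*Sb,           2*Sa*Cb],
        [ Ca*Cb + s3*Sb,   Sb - s3*Ca*Cb,  2*Sa*Cb],
        [ Ca*Cb - s3*Sb,   Sb + s3*Ca*Cb,  2*Sa*Cb]])

alpha, beta = np.radians(35.26), 0.0
print(np.linalg.cond(A_r_omega(alpha, beta)))      # ~1.0: isotropic configuration
print(3*np.sin(alpha)**2*np.cos(beta)**2)          # ~1.0: hippopede condition of Eq. (11)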
The actuated omni-wheels are mounted in pairs in order to ensure a good contact between them and the sphere. A CAD model of the final solution is represented in Fig. 1.

Conclusion

This paper presents the novel concept of mounting a parallel spherical wrist in series with a CDPR, while preserving a fully-parallel actuation scheme. As a result, the actuators always remain fixed to the base, thus avoiding the need to carry electric power to the end-effector and minimizing its size, weight and inertia. Another original contribution of this article is the determination of the kinematically isotropic wheel configurations in the parallel spherical wrist. These configurations give the designer a good initial picture of the design choices. To our knowledge, these isotropic configurations were never reported before, although several researchers have studied and used omni-wheel-actuated spheres. Future work includes the development of a control scheme to drive the end-effector rotations while accounting for the displacements of its centre, and also making a small-scale prototype of the robot.

Fig. 3: Parameterization of the parallel spherical wrist
Fig. 4: (a) Actuation and constraint wrenches applied on the end-effector of the spherical wrist; (b) Virtual i-th leg with actuated prismatic joint
Fig. 5: Singular configurations of the parallel spherical wrist
Fig. 6: Inverse condition number of the forward Jacobian matrix A based on the Frobenius norm as a function of design parameters α and β
15,554
[ "10659" ]
[ "111023", "111023", "473973", "109505", "481388" ]
01688104
en
[ "spi" ]
2024/03/05 22:32:10
2013
https://hal.science/hal-01688104/file/tagutchou_2013.pdf
J P Tagutchou Dr L Van De Steene F J Escudero Sanz S Salvador Gasification of Wood Char in Single and Mixed Atmospheres of H 2 O and CO 2 Keywords: biomass, gasification, kinetics, mixed atmosphere, reactivity In gasification processes, char-H 2 O and char-CO 2 are the main heterogenous reactions that are responsible for carbon conversion into H 2 and CO. These two reactions are generally looked at independently without considering interactions between them. The objective of this work was to compare kinetics of each reaction alone to kinetics of each reaction in a mixed atmosphere of H 2 O and CO 2 . A char particle was gasified in a macro thermo gravimetry reactor at 900 ı C successively in H 2 O/N 2 , CO 2 /N 2 , and H 2 O/CO 2 /N 2 atmospheres. INTRODUCTION The process of biomass conversion to syngas (H 2 C CO) involves a number of reactions. The first step is drying and devolatilization of the biomass, which leads to the formation of gas (noncondensable species), tar (gaseous condensable species), and a solid residue called char. Gas and tar are generally oxidized to produce H 2 O and CO 2 . The solid residue (the subject of this work) is converted to produce syngas (H 2 C CO) thanks to the following heterogeneous reactions: C C H 2 O ! CO C H 2 ; (1) C C CO 2 ! 2CO; (2) C C O 2 ! CO=CO 2 : (3) Many studies have been conducted on char gasification in reactive H 2 O, CO 2 , or O 2 atmospheres. The reactivity of char during gasification processes depends on the reaction temperature and on the concentration of the reactive gas. Additionally, these heterogeneous reactions are known to be surface reactions, involving a so-called "reactive surface." While the role of temperature and reactive gas partial pressure are relatively well understood, clearly defining and quantifying the reactive surface remains a challenge. The surface consists of active sites located at the surface of pores where the adsorption/desorption of gaseous molecules takes place. The difficulty involved in determining this surface can be explained by a number of physical and chemical phenomena that play an important role in the gasification process: (i) The whole porous surface of the char may not be accessible to the reactive gas, and may itself not be reactive. The pore size distribution directly influences the access of reactive gas molecules to active sites [START_REF] Roberts | A kinetic analysis of coal char gasification reactions at high pressures[END_REF]. It has been a common practice to use the total specific surface area measured using the standard BET test as the reactive surface. However, it has been established that a better indicator is the surface of only pores that are larger than several nm or tens of nm [START_REF] Commandré | The high temperature reaction of carbon with nitric oxide[END_REF]. (ii) As the char is heated to high temperatures, a reorganization of the structure occurs. The concentration of available active sites of carbon decreases and this has a negative impact on the reactivity of the char. This phenomenon is called thermal deactivation. (iii) The minerals present in the char have a catalytic effect on the reaction and help increase the reactivity of the char. Throughout the gasification process, there is a marked increase in the mass fraction of catalytic elements contained in the char with a decrease in the mass of the carbon. 
Due to the complexity of the phenomena and the difficulty to distinguish the influence of each phenomenon on reactivity, a surface function (referred to as SF in this article) is usually introduced in models to describe the gasification of carbon and to globally account for all of the physical phenomenon [START_REF] Sorensen | Determination of reactivity parameters of model carbons, cokes and flame-chars[END_REF][START_REF] Gobel | Dynamic modelling of char gasification in a fixed-bed[END_REF]. While single H 2 O and CO 2 atmospheres have been extensively studied, only a few authors have studied the gasification of a charcoal biomass in mixed atmospheres. Kinetic model classically proposed for the gasification of carbon residues is as follows: d m.t/ dt D R.t/:m.t/: (4) The reactivity of charcoal with a reactant j is often split into intrinsic reactivity r j , which only depends on temperature T and partial pressure p of the reactive gas, and the surface function F: R.t/ D F .X.t//:r j .T:p/: (5) As discussed above, the surface function F depends on many phenomena. In a simplifying approach, many authors express it as a function of the conversion X. METHODOLOGY Using a thermogravimetry (macro-TG) apparatus, gasification of char particles was characterized in three different reactive atmospheres: single H 2 O atmosphere, single CO 2 atmosphere, and a mixed atmosphere containing both CO 2 and H 2 O. Experimental Set-up The macro-TG reactor used in this work is described in detail in [START_REF] Mermoud | Influence of the pyrolysis heating rate on the steam gasification rate of large wood char particles[END_REF] N 2 -at a controlled temperature. The particles are continuously weighed to monitor conversion of the charcoal. The particles were left in the hot furnace swept by nitrogen and maintained until their weight stabilized, attesting to the removal of possible residual volatile matter or re-adsorbed species. The atmosphere then turned into a gasifying atmosphere, marking the beginning of the experiment. Preparation and Characterization of the Samples The material used in this study was charcoal from maritime pine wood chips. Charcoal was produced using a pilot scale screw pyrolysis reactor. The pyrolysis operating conditions were chosen to produce a char with high fixed carbon content, i.e., a temperature of 750 ı C, a 1 h residence time, and 15 kg/h of flow rate in a 200-mm internal diameter electrically heated screw. Based on previous studies, the heating rate in the reactor was estimated to be 50 ı C/min [START_REF] Fassinou | Pyrolysis of Pinus pinaster in a two-stage gasifier: Influence of processing parameters and thermal cracking of tar[END_REF]. After pyrolysis, samples with a controlled particle size were prepared by sieving, and the thickness of particles was subsequently measured using an electronic calliper. Particles with a thickness of 1.5 and 5.5 mm were selected for all the experiments. Table 1 lists the results of proximate and ultimate analysis of the charcoal particles. The amount of fixed carbon was close to 90%, attesting to the high quality of the charcoal. The amount of ash, a potential catalyzer, was 1.4%. GASIFICATION OF CHARCOAL IN SINGLE ATMOSPHERES Operating Conditions All experiments were carried out at a temperature of 900 ı C and at atmospheric total pressure. 
For each gasifying atmosphere, the mole fraction was chosen to cover values encountered in industrial reactors; experiments were performed at respectively 10, 20, and 40% mole fraction, respectively, for both H 2 O and CO 2 . In order to deal with the variability of the composition of biomass chips, each experiment was carried out with three to five particles in the grid basket. Care was taken to ensure there was no interaction between the particles. Each experiment was repeated at least three times. Results and Interpretations From the mass m(t) at any time, the conversion progress X was calculated according to Eq. ( 6): where m 0 and m ash represent, respectively, the initial mass of the char and the mass of ash at the end of the process. Figure 2 shows the conversion progress versus time for all the experiments. For char-H 2 O experiments, good repeatability was observed. Before 50% conversion, dispersion was small (<5%), while after 50% conversion, it could reach 10%. An average gasification rate was calculated for each experiment at X D 0:5 as 0:5=t (in s 1 ). It was 2.5 times larger in 40% steam than in 10% steam. X.t/ D m 0 m.t/ m 0 m ash ; (6) For char-CO 2 experiments, much larger dispersion was observed. It is difficult to give an explanation for this result. The gasification rate was 2.4 times higher in 40% CO 2 than in 10% CO 2 . Moreover, the results revealed a strange evolution in 20% CO 2 : the reaction was considerably slowed down after 60% conversion. This was also observed by [START_REF] Standish | Gasification of single wood charcoal particles in CO 2[END_REF] during their experiments on gasification of charcoal particles in CO 2 at a concentration of 20% CO 2 . At a given concentration (for instance 40%) steam gasification was on average three times faster than CO 2 gasification. Determination of Surface Functions (SF) In practice, the SF can be derived without using a model by plotting R=R 50 (where R 50 is the reactivity for X D 50%). The reactivity R was obtained by derivation of the X curves. It was not possible to plot the values of SF when X tends towards 1 because by the end of the experiment, the decrease in mass was very small leading to a too small signal/noise ratio to enable correct derivation of the signal and calculation of R. At the beginning of the experiments, the derivative was also too noisy for accurate determination. Thus, for small values of X ranging from zero to 0.15, F .X/ was assumed to be constant and equal to F .X D 0:15/. In addition, from a theoretical point of view, F .X/ should be determined using intrinsic values of R, i.e., from experiments in which no limitation by heat or mass transfer occurs. In practice, it has been shown in the literature that experiments with larger particles can be used [START_REF] Sorensen | Determination of reactivity parameters of model carbons, cokes and flame-chars[END_REF]. It is shown in Figure 3 that the results obtained for small particles (1.5 mm thickness) were similar to those for larger particles (5.5 mm thickness). All results are plotted as F .X/ versus X in Figure 4 for the two reactant gases. For the atmospheres with 10 and 40% CO 2 , it is interesting to note that good repeatability was obtained for the SF when the evolution of X over time showed bad repeatability. While the reactivity of the three samples differed, the SF remained the same. 
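The extraction of the surface function from a macro-TG run can be summarised in a few lines of code. The Python sketch below is illustrative only: it uses a synthetic mass-loss signal rather than the experimental data, assumes the reactivity is evaluated on an ash-free basis as R = (dX/dt)/(1 − X) (one possible reading of Eqs. (4)–(6)), and reproduces the convention of holding F(X) constant below X = 0.15.

```python
import numpy as np

def conversion(t, m, m_ash):
    """Conversion progress X(t) of Eq. (6)."""
    m0 = m[0]
    return (m0 - m) / (m0 - m_ash)

def surface_function(t, m, m_ash, x_hold=0.15):
    """Normalized surface function F(X) = R / R_50 obtained by differentiating X(t).
    R is taken on an ash-free basis, R = (dX/dt) / (1 - X); below x_hold the derivative
    is too noisy, so F is held constant, as done in the article."""
    X = conversion(t, m, m_ash)
    R = np.gradient(X, t) / (1.0 - X)
    R50 = np.interp(0.5, X, R)               # reactivity at X = 0.5
    F = R / R50
    F[X < x_hold] = np.interp(x_hold, X, F)  # plateau for X < 0.15
    return X, F

# Synthetic mass-loss signal (illustrative only, not experimental data): first-order decay
# of the convertible mass, initial mass normalised to 1, ash fraction 1.4% as in Table 1.
t = np.linspace(0.0, 3600.0, 400)            # s
m_ash = 0.014
m = m_ash + (1.0 - m_ash) * np.exp(-8.0e-4 * t)
X, F = surface_function(t, m, m_ash)
print(round(float(np.interp(0.5, X, F)), 3))  # ~1.0: F is normalised at X = 0.5
```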
Conversely, in 20% CO 2 , the repeatability of the test appeared to be good in the X D f .t/ plot (Figure 2), but results led to quite different shapes for the SF after 60% conversion. An average value for repeatability experiments was then determined and is plotted in Figure 5. From these results, polynomials were derived for F .X/, as shown in Table 2. It was clearly observed that the 5th order was the most suitable to fit simultaneously all the experimental results of F .X/ in the different atmospheres with the best correlation coefficients. The results show that except in 20% CO 2 , the SF are monotonically increasing functions. For this representation, where the SF are normalized to 1 at X D 0:5, the plots indicate a small increase (from 0.6 to 1) when X increases from 0.1 to 0.5, and a very strong increase (to 4 or 5) when X tends towards 0.9. In experiments with 10, 20, and 40% H 2 O, the SF appeared not to be influenced by the concentration of steam. When CO 2 was the gasifying agent, a strong influence of the concentration was observed, confirming the strange behavior observed in Figure 2 in 20% CO 2 . The function for 10% CO 2 was similar to that of H 2 O (whatever the concentration). A decreasing SF was found with 20% CO 2 for X between 0.6 and 0.75. This evolution has never previously been reported in the literature. Referring to the discussion about the phenomena that are taken into account in the SF, it is not possible to attribute this irregular shape to a physical phenomenon. Figure 6 plots several SF from the literature, normalized at X D 0:5 to enable comparison. Expressions, such as ˛-order of .1 X/, and polynomial forms commonly used for biomass were retained. The SF obtained in 10% H 2 O, which is similar to that obtained in 40% CO 2 , has been added in the figure. It can be observed that up to 50% conversion, most of the SF published in the literature are similar. At higher conversions, all SF follow an exponential type function, but differ significantly in their rate of increase. The results of the authors' experiments (10% H 2 O) are within the range of values reported in the literature. GASIFICATION OF CHARCOAL IN H 2 O C CO 2 ATMOSPHERES To investigate mixed atmospheres, experiments were conducted using 20% H 2 O with the addition of alternatively 10, 20, and 40% CO 2 . The results of conversion versus time are plotted in Figure 7. For each mixed atmosphere, the average results obtained in the single atmospheres are given as references. Rather good repeatability was observed. It can be seen that adding CO 2 to H 2 O accelerated steam gasification. Indeed, mixing, respectively, 10, 20, and 40% of CO 2 with 20% of H 2 O increased the rate of gasification by 20, 33, and 57%, respectively, compared to the rate of gasification in 20% H 2 O alone. This is a new result, since in the literature, studies on biomass gasification concluded on that steam gasification was inhibited by CO 2 [START_REF] Ollero | The CO 2 gasification kinetics of olive residue[END_REF]. In the 20% H 2 O C 10% CO 2 atmosphere, the average gasification rate was 0.745 10 3 s 1 , which is approximately equal to the sum of the gasification rates obtained in the two separate atmospheres: 0.740 10 3 s 1 . This was also the case for the mixed atmosphere 20% H 2 O C 20% CO 2 . In the 20% H 2 O C 40% CO 2 atmosphere, the average gasification rate was 1.19 10 3 s 1 , i.e., 20% higher than the sum of the gasification rates obtained in the two single atmospheres. 
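To illustrate how such fitted surface functions are reused in practice, the following sketch integrates the kinetic model of Eqs. (4)–(5), written on an ash-free basis as dX/dt = r_j F(X)(1 − X), with the average mixed-atmosphere polynomial given later in Eq. (7). The intrinsic reactivity value r_j used here is an arbitrary illustrative number, not a parameter fitted in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Average mixed-atmosphere surface function, Eq. (7) (highest-degree coefficient first).
SF_COEFFS = [130.14, -264.67, 192.38, -57.90, 7.28, 0.25]

def F(X):
    """Fifth-order polynomial surface function F(X), normalised so that F(0.5) ~ 1."""
    return np.polyval(SF_COEFFS, X)

def dX_dt(t, y, r_j):
    """Kinetic model of Eqs. (4)-(5) on an ash-free basis: dX/dt = r_j F(X) (1 - X)."""
    X = y[0]
    return [r_j * F(X) * (1.0 - X)]

r_j = 2.0e-4   # intrinsic reactivity in 1/s -- illustrative value, not a fitted parameter
sol = solve_ivp(dX_dt, (0.0, 7200.0), [0.0], args=(r_j,), max_step=10.0,
                t_eval=np.linspace(0.0, 7200.0, 300))
t50 = float(np.interp(0.5, sol.y[0], sol.t))
print(f"time to 50% conversion: {t50:.0f} s, average rate 0.5/t50 = {0.5 / t50:.2e} 1/s")
```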
In other words, cooperation between CO 2 and H 2 O led to unexpected behaviors. A number of considerations can help interpret this result. First, the geometrical structure of the two molecules-polar and non-linear for H 2 O and linear and apolar for CO 2 -predestines them to different adsorption mechanisms on potentially different active carbon sites [START_REF] Slasli | Modelling of water adsorption by activated carbons: Effects of microporous structure and oxygen content[END_REF]. The presence of hydrophilic oxygen, such as [-O], at the surface of char leads to the formation of hydrogen bonds, which could hinder H 2 O adsorption and favor that of CO 2 [START_REF] Stoeckli | The characterization of microporosity in carbons with molecular sieve effects[END_REF]. In the same way, as it is a non-organic molecule, H 2 O can only access hydrophobic sites while CO 2 , which is an organic molecule, can access both hydrophilic and hydrophobic sites. According to [START_REF] Stoeckli | The characterization of microporosity in carbons with molecular sieve effects[END_REF], due to constriction or molecular sieve effects, CO 2 molecules have access to micropores of materials while those of H 2 O, which are assumed to be bigger, do not. For one of the previous reasons or for any other reason, CO 2 molecules can access internal micropores more easily than H 2 O molecules, and can therefore open certain pores, making them accessible to H 2 O molecules. The assumption that H 2 O and CO 2 molecules reacted with different sites and that no competition occurred is not sufficient to explain the increase of 20% in the gasification rate under mixed atmospheres. Point 0 can be proposed as an explanation, but a more precise explanation requires further research work. [START_REF] Roberts | Char gasification in mixtures of CO 2 and H 2 O: Competition and inhibition[END_REF] recently concluded that CO 2 has an inhibitory effect on H 2 O gasification, in contradiction to the authors' results. It is believed that the conclusions of [START_REF] Roberts | Char gasification in mixtures of CO 2 and H 2 O: Competition and inhibition[END_REF] are valid in their experimental conditions only, and with a little hindsight may be called into question. Figure 8 gives the plots of SF obtained with the three mixed atmospheres and for all repeatability tests. Again, the repeatability of experiments was excellent until X D 0:6; this attests to the good quality of experiments and confirms that the variations in SF after 60% conversion are due to specific phenomena. Figure 9 compares all the average SF obtained in mixed atmosphere. From these curves, it can be seen that the curve is similar when the amount of CO 2 was modified from 10 to 40%. Thus, an average 5th-order polynomial expression for mixed atmosphere is given in Eq. ( 7): F .X/ D 130:14X 5 264:67X 4 C 192:38X 3 57:90X 2 C 7:28X C 0:25: (7) CONCLUSION The gasification of wood char particles during gasification in three atmospheres, i.e., H 2 O, CO 2 , and H 2 O/CO 2 , was experimentally investigated. The formulation adopted enables to split the reactivity R.t/ into kinetic parameters, r j , and all physical aspects, i.e., reactive surface evolution, thermal annealing, catalytic effects, into a surface function SF, F .X/, as follows: The repeatability of the derived SF was always very good until X D 0:6, which attests to the good quality of the experiments. For higher values of X, significant dispersion was observed, despite the use of several particles for each experiment. 
The SF depends on the nature of the reactant gas and, in the case of CO 2, on the concentration of the gas. An SF that surprisingly decreased with increasing X in the range 0.6-0.75 was obtained in the CO 2 atmosphere in this work. An important result of this article is that the addition of CO 2 to a H 2 O atmosphere led to an acceleration of the gasification kinetics. In a mixture of 20% H 2 O and 40% CO 2, the gasification rate was 20% higher than the sum of the gasification rates in the two single atmospheres.

FIGURE 2 Conversion progress versus time during gasification at 900 °C in single atmospheres (10, 20, and 40% H 2 O and 10, 20, and 40% CO 2).
FIGURE 3 SF for the two cases of 1.5 mm and 5.5 mm particles in steam atmosphere.
FIGURE 4 SF for each experimental result obtained in a single atmosphere.
FIGURE 5 Average SF obtained in each single atmosphere.
FIGURE 7 Experimental results obtained in mixed atmospheres (A: 10% H 2 O and 20% CO 2; B: 20% H 2 O and 20% CO 2; and C: 20% H 2 O and 40% CO 2). For each mixed atmosphere, the corresponding average experimental results for single atmospheres are shown in thick solid line (20% H 2 O single atmosphere) and in thick dashed lines (CO 2 single atmospheres).
FIGURE 8 SF obtained in different mixed atmospheres for all experimental repeatability tests.
FIGURE 9 Average SF obtained in the different mixed atmospheres.

The macro-TG apparatus is presented in Figure 1. It consists of positioning several charcoal particles in a grid basket inside the reactor at atmospheric pressure. The reactor is swept by the oxidizing agent (H 2 O or CO 2 in N 2) at a controlled temperature.

FIGURE 1 Macro thermogravimetry experimental apparatus. (1) Electric furnace; (2) Quartz tube; (3) Extractor; (4) Preheater; (5) Evaporator; (6) Water feeding system; (7) Water flow rate; (8) Leakage compensation; (9) Suspension basket; (10) Weighing system; (T_i) Regulation thermocouples; (M_i) Mass flow meter.

TABLE 1 Proximate and Ultimate Analysis of Charcoal from Maritime Pine Wood Chips
Proximate analysis (mass %): M = 1.8; VM (dry) = 4.9; FC (dry) = 93.7; Ash (dry) = 1.4
Ultimate analysis (mass %): C = 89.8 (±0.3); H = 2.2 (±0.3); O = 6.1 (±0.3); N = 0.1 (±0.1); S = 0.01 (±0.005)
M: Moisture content; VM: Volatile matter; FC: Fixed carbon.
19,792
[ "996971", "19516", "17552" ]
[ "11574", "11574", "242220", "242220" ]
01758141
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758141/file/DesignRCDPRs_Gagliardini_Gouttefarde_Caro_Final_HAL.pdf
Lorenzo Gagliardini email: [email protected] Marc Gouttefarde email: [email protected] Stéphane Caro email: [email protected] Design of Reconfigurable Cable-Driven Parallel Robots This chapter is dedicated to the design of Reconfigurable Cable-Driven Parallel Robots (RCDPRs) where the locations of the cable exit points on the base frame can be selected from a finite set of possible values. A task-based design strategy for discrete RCDPRs is formulated. By taking into account the working environment, the designer divides the prescribed workspace or trajectory into parts. Each part shall be covered by one configuration of the RCDPR. Placing the cable exit points on a grid of possible locations, numerous CDPR configurations can be generated. All the possible configurations are analysed with respect to a set of constraints in order to determine the parts of the prescribed workspace or trajectory that can be covered. The considered constraints account for cable interferences, cable collisions, and wrench feasibility. The configurations satisfying the constraints are then compared in order to find the combinations of configurations that accomplish the required task while optimising one or several objective function(s). A case study comprising the design of a RCDPR for sandblasting and painting of a three-dimensional tubular structure is finally presented. Cable exit points are reconfigured, switching from one side of the tubular structure to another, until three external sides of the structure are covered. The optimisation includes the minimisation of the number of cable attachment/detachment operations required to switch from one configuration to another one, minimisation of the size of the RCDPR, and the maximisation of the RCDPR stiffness. Introduction Cable-Driven Parallel Robots (CDPRs) form a particular class of parallel robots whose moving platform is connected to a fixed base frame by cables. Hereafter, the connection points between the cables and the base frame will be referred to as exit points. The cables are coiled on motorised winches. Passive pulleys may guide the cables from the winches to the exit points. A central control system coordinates the motors actuating the winches. Thereby, the pose and the motion of the moving platform are controlled by modifying the cable lengths. An example of CDPR is shown in Fig. 1. CDPRs have several advantages such as a relatively low mass of moving parts, a potentially very large workspace due to size scalibility, and reconfiguration capabilities. Therefore, they can be used in several applications, e.g. heavy payload handling and airplane painting [START_REF] Albus | The NIST spider, a robot crane[END_REF], cargo handling [START_REF] Holland | Cable array robot for material handling[END_REF], warehouse applications [START_REF] Hassan | Analysis of large-workspace cable-actuated manipulator for warehousing applications[END_REF], large-scale assembly and handling operations [START_REF] Pott | Large-scale assembly of solar power plants with parallel cable robots[END_REF][START_REF] Williams | Contour-crafting-cartesian-cable robot system concepts: Workspace and stiffness comparisons[END_REF], and fast pick-and-place operations [START_REF] Kamawura | High-speed manipulation by using parallel wire-driven robots[END_REF][START_REF] Maeda | On design of a redundant wire-driven parallel robot WARP manipulator[END_REF][START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. 
Other possible applications include the broadcasting of sporting events, haptic devices [START_REF] Fortin-Coté | An admittance control scheme for haptic interfaces based on cable-driven parallel mechanisms[END_REF][START_REF] Gallina | 3-DOF wire driven planar haptic interface[END_REF][START_REF] Rosati | Design, implementation and clinical test of a wire-based robot for neurorehabilitation[END_REF], support structures for giant telescopes [START_REF] Yao | Dimensional optimization design for the four-cable driven parallel manipulator in FAST[END_REF][START_REF] Yao | A modeling method of the cable driven parallel manipulator for FAST[END_REF], and search and rescue deployable platforms [START_REF] Merlet | Kinematics of the wire-driven parallel robot MARIONET using linear actuators[END_REF][START_REF] Merlet | A portable, modular parallel wire crane for rescue operations[END_REF]. Recent studies have been performed within the framework of an ANR Project CoGiRo [2] where an efficient cable layout has been proposed [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] and used on a large CDPR prototype called CoGiRo. CDPRs can be used successfully if the tasks to be fulfilled are simple and the working environment is not cluttered. When these conditions are not satisfied, Reconfigurable Cable-Driven Parallel Robots (RCDPRs) may be required to achieve the prescribed goal. In general, several parameters can be reconfigured, as described in Section 2. Moreover, these reconfiguration parameters can be selected in a discrete or a continuous set of possible values. Preliminary studies on RCDPRs were performed in the context of the NIST RoboCrane project [START_REF] Bostelman | Cable-based reconfigurable machines for large scale manufacturing[END_REF]. Izard et al. [START_REF] Izard | A reconfigurable robot for cable-driven parallel robotic research and industrial scenario proofing[END_REF] also studied a family of RCDPRs for industrial applications. Rosati et al. [START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF][START_REF] Zanotto | Sophia-3: A semiadaptive cable-driven rehabilitation device with a tilting working plane[END_REF] and Zhou et al. [START_REF] Zhou | Tension distribution shaping via reconfigurable attachment in planar mobile cable robots[END_REF][START_REF] Zhou | Stiffness modulation exploiting configuration redundancy in mobile cable robots[END_REF] focused their work on planar RCDPRs. Recently, Nguyen et al. [START_REF] Nguyen | On the analysis of large-dimension reconfigurable suspended cable-driven parallel robots[END_REF][START_REF] Nguyen | Study of reconfigurable suspended cable-driven parallel robots for airplane maintenance[END_REF] proposed reconfiguration strategies for large-dimension suspended CDPRs mounted on overhead bridge cranes. Contrary to these antecedent studies, this chapter considers discrete reconfigurations where the locations of the cable exit points are selected from a finite set (grid) of possible values. Hereafter, reconfigurations are limited to the cable exit point locations and the class of RCDPRs whose exit points can be placed on a grid of positions is defined as discrete RCDPRs. Figure 2 shows the prototype of a reconfigurable cable-driven parallel robot developed at IRT Jules Verne within the framework of CAROCA project. This prototype is reconfigurable for the purpose of being used for industrial operations in a cluttered environment. 
Indeed, its pulleys can be displaced onto the robot frame faces such that the collisions between the cables and the environment can be avoided during operation. The prototype has eight cables, can work in both suspended and fully constrained configurations and can carry up to 400 kg payloads. It contains eight motor-geardhead-winch sets. The nominal torque and velocity of each motor are equal to 15.34 Nm and 2200 rpm, respectively. The ratio of the twp-stage gearheads is equal to 40. The diameter of the Huchez TM industrial winches is equal to 120 mm. The CAROCA prototype is also equipped with 6 mm non-rotating steel cables and a B&R control board using Ethernet Powerlink TM communication. To the best of our knowledge, no design strategy has been formulated in the literature for discrete RCDPRs. Hence, Section 4 presents a novel task-based design strategy for discrete RCDPRs. By taking into account the working environment, the designer divides the prescribed workspace or trajectory into n t parts. Each part will be covered by one and only one configuration of the RCDPR. Then, for each configuration, the designer selects a cable layout, parametrising the position of the cable exit points. The grid of locations where the cable exit points can be located is defined by the designer as well. Placing the exit points on the provided set of possible locations, it is possible to generate many CDPR configurations. All the possible configurations are analysed with respect to a set of constraints in order to verify which parts of the prescribed workspace or trajectory can be covered. The configurations satisfying the constraints are compared in order to find the combinations of n t configurations that accomplish the required task and optimise at the same time one or several objective function(s). A set of objective functions, dedicated to RCD-PRs, is provided in Section 4.2. These objective functions aim at maximising the productivity (production cycle time) and reducing the reconfiguration time of the cable exit points. Let us note that if the design strategy introduced in Section 4 does not produce satisfactory results, the more advanced but complex method recently introduced by the authors in [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF] can be considered. In order to analyse the advantages and limitations of the proposed design strategy, a case study is presented in Section 5. It involves the design of an RCDPR for sandblasting and painting of a three-dimensional tubular structure. The tools performing these operations are embarked on the RCDPR moving platform, which follows the profile of the tubular structure. Each side of the tubular structure is associated to a single configuration. Cable exit points are reconfigured switching from one side of the tubular structure to another, until three external sides of the structure are sandblasted and painted. The cable exit point locations of the three configurations to be designed are optimised so that the number of cable attachment/detachment operations required to switch from a configuration to another is minimised. The size of the RCDPR is also minimised while its stiffness is maximised along the trajectories to be followed. 
Classes of RCDPRs CDPRs usually consist of several standard components: A fixed base, a moving platform, a set of m cables connecting the moving platform to the fixed base through a set of pulleys, a set of m winches, gearboxes and actuators, and a set of internal and external sensors. These components are usually dimensioned in such a way that the geometry of the CDPR does not vary during the task. However, by modifying the CDPR geometry, the capabilities of CDPRs can be improved. RCDPRs are then defined as CDPRs whose geometry can be adapted by reconfiguring part of their components. RCDPRs can then be classified according to the components, which are reconfigured and the nature of the reconfigurations. Fig. 3: CableBot designs with cable exit points fixed to a grid (left) and with cable exit points sliding on rails (right). Courtesy of the European FP7 Project CableBot. Reconfigurable Elements and Technological Solutions Part of the components of an RCDPR may be reconfigured in order to improve its performances. The geometry of the RCDPRs is mostly dependent on the locations of the cable exit points, the locations of the cable attachment points on the moving platform, and the number of cables. The locations of the cable exit points A i , i = 1, . . . , m have to be reconfigured to avoid cable collisions when the environment is strongly cluttered. Indeed, modifying the cable exit point locations can increase the RCDPR workspace size. Furthermore, the reconfiguration of cable exit points provides the possibility to modify the layout of the cables and improve the performance of the RCDPR (such as its stiffness). From a technological point of view, the cable exit points A i are displaced by moving the pulleys orienting the cables and guiding them to the moving platform. Pulleys are connected on the base of the RCDPR. They can be displaced by sliding them on linear guides or fixing them on a grid of locations, as proposed in the concepts of Fig. 3. These concepts have been developed in the framework of the European FP7 Project CableBot [7, [START_REF] Nguyen | On the study of large-dimension reconfigurable cable-driven parallel robots[END_REF][START_REF] Blanchet | Contribution à la modélisation de robots à câbles pour leur commande et leur conception[END_REF]. Alternatively, pulleys can be connected to several terrestrial or aerial unmanned vehicles, as proposed in [START_REF] Jiang | The inverse kinematics of cooperative transport with multiple aerial robots[END_REF][START_REF] Manubens | Motion planning for 6D manipulation with aerial towed-cable systems[END_REF][START_REF] Zhou | Analysis framework for cooperating mobile cable robots[END_REF]. The geometry of the RCDPR and the cable layout can be modified as well by displacing the cable anchor points on the moving platform, B i , i = 1, . . . , m. Changing the locations of points B i allows the stiffness of the RCPDR as well as its wrench (forces and moments) capabilities to be improved. A modification of the cable anchor points may also result in an increase of the workspace dimensions. The reconfiguration of points B i can be performed by attaching and detaching the cables at different locations on the moving platform. The number m of cables has a major influence on performance of the RCDPR. 
Using more cables than DOFs can enlarge the workspace of suspended CDPRs [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] or yields fully constrained CDPRs where internal forces can reduce vibrations, e.g. [START_REF] Kamawura | High-speed manipulation by using parallel wire-driven robots[END_REF]. However, the larger the number of cables, the higher the risk of collisions. In this case, the reconfiguration can be performed by attaching or detaching one or several cable(s) to/from the moving platform and possibly to/from a new set of exit points. Furthermore, by attaching and detaching one or several cable(s), the Discrete and Continuous Reconfigurations According to the reconfigured components and the associated technology, reconfiguration parameters can be selected over a continuous or discrete domain of values, as summarised in Table 1. Reconfigurations performed over a discrete domain consist of selecting the reconfigurable parameters within a finite set of values. Modifying the number of cables is a typical example of a discrete reconfiguration. Discrete reconfigurations also apply to cable anchor points, when the cables can be installed on the moving platform at a (discrete) number of specific locations, e.g. its corners. Another example of discrete RCDPR is represented in Fig. 3 (left). In this concept, developed in the framework of the European FP7 Project CableBot, cable exit points are installed on a predefined grid of locations on the ceiling. Discrete reconfigurations are performed off-line, interrupting the task the RCDPR is executing. For this reason, the set up time for these RCDPRs can be relative long. On the contrary, RCDPRs with discrete reconfigurations can use the typical control schemes already developed for CDPRs. Furthermore, they do not require to motorise the cable exit points, thereby avoiding a large increase of the CDPR cost. Reconfigurations performed over a continuous domain provide the possibility of selecting the geometric parameters over a continuous set of values delimited by upper and lower bounds. A typical example of continuous RCDPR is represented in Fig. 3 (right), which illustrates another concept developed in the framework of the European FP7 Project CableBot. In this example, the cable exit points slide on rails fixed on the ceiling. Reconfigurations can be performed on-line, by continuously modifying the reconfigurable parameters during the task execution. The main advantages of continuous reconfigurations are the reduced set-up time and the local optimisation of the RCDPR properties. However, modifying the locations of the exit points in real time may require the design of a complex control scheme. Furthermore, the cost of RCDPRs with continuous reconfigurations is significantly higher than the cost of discrete RCDPRs when the movable pulleys are actuated. Nomenclature for RCDPRs Similarly to CDPRs, an RCDPR is mainly composed of a moving platform connected to the base through a set of cables, as illustrated in Fig. 4. The moving platform is driven by m cables, which are actuated by winches fixed on the base frame of the robot. The cables are routed by means of pulleys to exit points from which they extend toward the moving platform. The main difference between this chapter and previous works on CDPRs is the possibility to displace the cable exit points on a grid of possible locations. As illustrated in Fig. 
4, F b , of origin O b and axes x b , y b , z b , denotes a fixed reference frame while F p of origin O p and axes x p , y p and z p , is fixed to the moving platform and thus called the moving platform frame. The anchor points of the ith cable on the platform are denoted as B i,c , where c represents the configuration number. For the c-th configuration, the exit point of the i-th cable is denoted as A i,c , i = 1, . . . , m. The Cartesian coordinates of each point A i,c , with respect to F b , are given by the vector a b i,c while b b i,c is the position vector of point B i,c expressed in F b . Neglecting the cable mass, the vector l b i,c directed along the i-th cable from point B i,c to point A i,c can be written as: l b i,c = a b i,c -t -Rb p i,c i = 1, . . . , m ( 1 ) where t is the moving platform position, i.e. the position vector of O p in F b , and R is the rotation matrix defining the orientation of the moving platform, i.e. the orientation of F p with respect to F b . The length of the i-th cable is then defined by the 2-norm of the cable vector l b i,c , namely, l i,c = l b i,c 2 , i = 1, . . . , m. In order to balance an external wrench (combination of a force and a moment), each cable generates on the moving platform a wrench proportional to its tension τ i = 1, . . . , m. The cables balance the external wrench w e , according to the following equation [START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF]: Wτ + w e = 0 (2) The cable tensions are collected into the vector τ = [τ 1 , . . . , τ m ] and multiplied by the wrench matrix W whose columns are composed of the unit wrenches w i exerted by the cables on the platform: W = d b 1,c d b 2,c . . . d b m,c Rb p 1,c × d b 1,c Rb p 2,c × d b 2,c . . . Rb p m,c × d b m,c (3) where d b i,c , i = 1, . . . , m are the unit cable vectors associated with the c-th configuration: d b i,c = l b i,c l i,c , i = 1, . . . , m (4) Design Strategy for RCDPRs Similarly to CDPRs, the design of RCDPRs requires the dimensioning of all its components. In this chapter, the design of RCDPRs focuses on the selection of the cable exit point locations. The other components of the RCDPR are required to be chosen in advance. Design Problem Formulation The RCDPR design strategy proposed in this section consists of ten steps. The design can be formulated as a mono-objective or hierarchical multi-objective optimisation problem. The designer defines a prescribed workspace or moving platform trajectory and divides it into n t parts. Each part should be covered by one and only one configuration. The design variables are the locations of the cable exit points for the n t configurations covering the n t parts of the prescribed workspace or trajectory. The global objective functions investigated in this chapter (Section 4.2) aim to reduce the overall complexity of the RCDPR and the reconfiguration time. The optimisation is performed while verifying a set of user-defined constraints such as those presented in Section 4.3. Step I. Task and Environment. The designer describes the task to be performed. He/She specifies the nature of the problem, defining if the motion of the moving platform is static, quasi-static or dynamic. According to the nature of the problem, the designer defines the external wrenches applied to the moving platform and, possibly, the required moving platform twist and accelerations. The prescribed workspace or trajectory of the moving platform is given. 
A description of the environment is provided as well, including the possible obstacles encountered during the task execution. Step II. Division of the Prescribed Trajectory. Given the prescribed workspace or moving platform trajectory, the designer divides it into n t parts, assuming that each of them is accessible by one and only one configuration of the RCDPR. The division may be performed by trying to predict the possible collisions of the cables and the working environment. Step III. Constant Design Parameters. The designer defines a set of constant design parameters and their values. The parameters are collected in the constant design parameter vector q. Step IV. Design Variables and Layout Parametrisation. For each part of the prescribed workspace or moving platform trajectory, the designer defines the cable layout of the associated configuration. The cable layout associated with the t-th part of the prescribed workspace or trajectory defines the locations of the cable exit points, parametrised with respect to a set of n t,v design variables, u t,v , v = 1, . . . , n t,v . The design variables are defined as a discrete set of ε t,v values, [u] t,v , v = 1, . . . , n t,v . Step V. RCDPR Configuration Set. For each part of the prescribed trajectory, the possible configurations, which can be generated combining the values They analyse the properties of the combination of n t configurations comprising the RCDPR. If several global objective functions are to be solved simultaneously, the optimisation problem can be classically reduced to a mono-objective optimisation according to: [u] t,v , v = 1, . . . , V = n V ∑ t=1 µ t V t , µ t ∈ [0, 1], n V ∑ t=1 µ t = 1 (5) The weighting factors µ t ,t = 1, . . . , n V , are defined according to the prior- ity assigned to each objective function V t , the latter lying between 0 and 1. If several global optimisation functions have to be solved hierarchically, the designer will define those functions according to their order of priority, t = 1, . . . , n V , where V 1 has the highest priority and V n V the lowest one. Step X. Discrete Optimisation Algorithm. The design problem is formulated as an optimisation problem and solved by analysing all the n C set of feasible configurations. The analysis is performed with respect to the global objective functions defined at Step IX. The sets of n t configurations with the best global objective function value are determined. If a hierarchical multi-objective optimisation is required, the following procedure is applied: a. The algorithm analyses the n C sets of feasible configurations with respect to the global objective function which currently has the highest priority, V t (the procedure is initialised with t = 1). b. If only one set of configuration optimises V t , this solution is considered as the optimum. On the contrary, if n C ,t multiple solutions optimise V t , the algorithm proceeds to the following step. c. The algorithm analyses the n C ,t sets of optimal solutions with respect to the global objective function with lower priority, V t+1 . Then, t = t + 1 and the procedure moves back to Step b. Global Objective Functions The design strategy proposed in the previous section aims to optimise the characteristics of the RCDPR. The optimisation may be performed with respect to one or several global objective functions. The objective functions used in this chapter are described hereafter. 
RCDPR Size The design optimisation problem may aim to minimise the size of the robot, defined as the convex hull of the cable exit points. The Cartesian coordinates of exit point A i,c are defined as a b i,c = [a x i,c , a y i,c , a z i,c ] T . The variables s x , s y and s z denote the lower bounds on the Cartesian coordinates of the cable exit points along the axes x b , y b and z b , respectively: s x = min a x i,c , ∀i = 1, ..., m, c = 1, ..., n t (6) s y = min a y i,c , ∀i = 1, ..., m, c = 1, ..., n t (7) s z = min a z i,c , ∀i = 1, ..., m, c = 1, ..., n t (8) The upper bounds on the Cartesian coordinates of the RCDPR cable exit points, along the axes x b , y b , z b , are denoted by sx , sy and sz , respectively. sx = max a x i,c , ∀i = 1, ..., m, c = 1, ..., n t (9) sy = max a y i,c , ∀i = 1, ..., m, c = 1, ..., n t (10) sz = max a z i,c , ∀i = 1, ..., m, c = 1, ..., n t (11) Hence, the objective function related to the size of the robot is expressed as follows: V = ( sx -s x )( sy -s y )( sz -s z ) (12) Number of Cable Reconfigurations According to the reconfiguration strategy proposed in this chapter, reconfiguration operations require the displacement of the cable exit points, and consequently attaching/detaching operations of the cables. These operations are time consuming. Hence, an objective can be to minimise the number of reconfigurations, n r , defined as the number of exit point changes to be performed in order to switch from configuration C i to configuration C j . By reducing the number of cable attaching/detaching operations, the RCDPR set up time could be significantly reduced. Number of Configuration Changes During the reconfiguration of the exit points, the task executed by the RCDPR has to be interrupted. These interruptions impact the task execution time. Therefore, it may be necessary to minimise the number of interruptions, n i , in order to improve the effectiveness of the RCDPR. The objective function V = n i associated with this goal measures the number of configuration changes, n i , to be performed during a prescribed task. RCDPR Complexity The higher the number of configuration sets n C allowing to cover the prescribed workspace or trajectory, the more complex the RCDPR. When the RCDPR requires a large number of configurations, the base frame of the CDPR may become complex. In order to minimise the complexity of the RCDPR, an objective can be to minimise the overall number of exit point locations, V = n e , required by the n C configuration sets. Therefore, the optimisation aims to maximise the number of exit point locations shared among two or more configurations. Constraint Functions Any CDPR optimisation problem has to take into account some constraints. Those constraints represent the technical limits or requirements that need to be satisfied. The constraints used in this chapter are described hereafter. Wrench Feasibility Since cables can only pull on the platform, the tensions in the cables must always be non-negative. Moreover, cable tensions must be lower than an upper bound, τ max , which corresponds either to the maximum tension τ max1 the cables (or other me- chanical parts) can bear, or to the maximum tension τ max2 the motors can provide. The cable tension bounds can thus be written as: 0 ≤ τ i ≤ τ max , ∀i = 1, . . . , m (13) where τ max = min {τ max1 , τ max2 }. Due to the cable tension bounds, RCDPRs can balance only a bounded set of external wrenches. 
In this chapter, the set of external wrenches applied to the platform and that the cables have to balance is called the required external wrench set and is denoted [w e ] r . Moreover, the set of of admissible cable tensions is defined as: [τ] = {τ i | 0 ≤ τ i ≤ τ max , i = 1, . . . , m} (14) A pose (position and orientation) of the moving platform is then said to be wrench feasible if the following constraint holds: ∀w e ∈ [w e ] r , ∃τ ∈ [τ] such that Wτ + w e = 0 (15) Eq. ( 15) can be rewritten as follows: Cw e ≤ d, ∀w e ∈ [w e ] r ( 16 ) Methods to compute matrix C and vector d are presented in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. Cable Lengths Due to technological reasons, cable lengths are bounded between a minimum cable length, l min , and a maximum cable length, l max : l min ≤ l i,c ≤ l max , ∀i = 1, . . . , m (17) The minimum cable lengths are defined so that the RCDPR moving platform is not too close to the base frame. The maximum cable lengths depend on the properties of the winch drums that store the cables, in particular their lengths and their diameters. Cable Interferences A second constraint is related to the possible collisions between cables. If two or more cables collide, the geometric and static models of the CDPR are not valid anymore and the cables can be damaged or their lifetime severely reduced. In order to verify that cables do not interfere, it is sufficient to determine the distances between them. Modeling the cables as linear segments, the distance d cc i, j between the i-th cable and the j-th cable can be computed, e.g. by means of the method presented in [START_REF] Lumelsky | On fast computation of distance between line segments[END_REF]. There is no interference if the distance is larger than the diameter of the cables, φ c : d cc i, j ≥ φ c ∀i, j = 1, . . . , m, i = j ( 18 ) The number of possible cable interferences to be verified is equal to C m 2 = m! 2!(m-2)! . Note that, depending on the way the cables are routed from the winches to the moving platform, possible interferences of the cable segments between the winches and the pulleys may have to be considered. Collisions between the Cables and the Environment Industrial environments may be cluttered. Collisions between the environment and the cables of the CDPR should be avoided. In general, for fast collision detection, the environment objects (obstacles) are enclosed in bounding volumes such as spheres and cylinders. When more complex shapes have to be considered, their surfaces are approximated with polygonal meshes. Thus, collision analysis can be performed by computing the distances between the edges of those polygons and the cables, e.g. by using [START_REF] Lumelsky | On fast computation of distance between line segments[END_REF]. Many other methods may be used, e.g., those described in [START_REF] Blanchet | Contribution à la modélisation de robots à câbles pour leur commande et leur conception[END_REF]. In the case study presented in Section 5, a tubular structure is considered. The ith cable and the k-th structure tube will not collide if the distance between the cable and the axis (straight line segment) of the structure tube is larger than the sum of the cable radius φ c /2 and the tube radius φ s /2, i.e.: d cs i,k ≥ (φ c + φ s ) 2 ∀i = 1, . . . , m, ∀k = 1, . . . 
, n st ( 19 ) where n st denotes the number of tubes composing the structure. Pose Infinitesimal Displacement Due to the Cable Elasticity Cables are not perfectly rigid body. Under load, they are notably subjected to elongations that may induce some moving platform displacements. In order to quantify the stiffness of the CDPR, an elasto-static model may be used: δ w e = Kδ p = K δ t δ r ( 20 ) where δ w e is the infinitesimal change in the external wrench applied to the platform, δ p is the infinitesimal displacement screw of the moving platform and K is the stiffness matrix whose computation is explained in [START_REF] Behzadipour | Stiffness of cable-based parallel manipulators with application to stability analysis[END_REF]. δ t = [δt x , δt y , δt z ] T is the variation in the moving platform position and δ r = [δ r x , δ r y , δ r z ] T is the vector of the infinitesimal (sufficiently small) rotations of the moving platform around the axes x b , y b and z b . The pose variation should be bounded by the positioning error threshold vector, δ t = [δt x,c , δt y,c , δt z,c ], where δt x,c , δt y,c and δt z,c are the bounds on the positioning errors along the axes x b , y b and x b , and the orientation error threshold vector, δ φ = [δ γ c , δ β c , δ α c ], where δ γ c , δ β c and δ α c are the bounds on the platform orientation errors about the axes x b , y b and z b , i.e.: -[δt x,c , δt y,c , δt z,c ] ≤ [δt x , δt y , δt z ] ≤ [δt x,c , δt y,c , δt z,c ] (21) -[δ γ c , δ β c , δ α c ] ≤ [δ γ, δ β , δ α] ≤ [δ γ c , δ β c , δ α c ] (22) 5 Case Study: Design of a RCDPRs for Sandblasting and Painting of a Large Tubular Structure Problem Description The necessity to improve the production rate of large tubular structures has incited companies to investigate new technologies. These technologies should be able to reduce manufacturing time associated with the assembly of the structure parts or the treatment of their surfaces. Painting and sandblasting operations over wide tubular structures can be realised by means of RCDPRs, as illustrated in the present case study. Task and Environment The tubular structure selected for the given case study is 20 m long, with a cross section of 10 m x 10 m. The number of tubes to be painted is equal to twenty. Their diameter, φ s , is equal to 0.8 m. The sandblasting and painting operations are realised indoor. The structure lies horizontally in order to reduce the dimensions of the painting workshop. The whole system can be described with respect to a fixed reference frame, F b , of origin O b and axes x b , y b , z b , as illustrated in Fig. 6. Sandblasting and painting tools are embarked on the RCDPR moving platform. The Center of Mass (CoM) of the platform follows the profile of the structure tubes and the tools perform the required operations. The paths to be followed, P 1 , P 2 and P 3 , are represented in Fig. 6. Note that each path P i , i = 1, . . . , 3 is discretised into 38 points P j,i , j = 1, . . . , 38 i = 1, . . . , 3 and that n p denotes the corresponding total number of points. The offset between paths P i , i = 1, . . . , 3 and the structure tubes is equal to 2 m. No path will be assigned to the lower external side of the structure, since it is sandblasted and painted from the ground. Division of the Prescribed Workspace In order to avoid collisions between the cables and structure, reconfigurations of the cable exit points are necessary. Each external side of the structure should be painted by only one robot configuration. 
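The interference and collision constraints of Eqs. (18)–(19), which drive this division of the workspace, reduce to minimum-distance computations between straight-line segments (cables and tube axes). The Python sketch below uses a standard clamped closest-point computation rather than the specific routine cited in the chapter; the function names, array layouts and tolerance are ours.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments [p1, q1] and [p2, q2] (straight-line cable/tube model).
    Standard clamped closest-point computation; a sketch, not the cited routine."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e if e > 1e-12 else 0.0
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0) if a > 1e-12 else 0.0
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0) if a > 1e-12 else 0.0
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def interference_free(exit_pts, anchor_pts, tubes, phi_c, phi_s):
    """Constraints (18)-(19) for one platform pose. exit_pts, anchor_pts: (m, 3) arrays of
    points A_i and B_i in the base frame; tubes: list of (P, Q) tube-axis endpoints."""
    m = len(exit_pts)
    for i in range(m):
        for j in range(i + 1, m):   # cable/cable interference, Eq. (18)
            if segment_distance(exit_pts[i], anchor_pts[i], exit_pts[j], anchor_pts[j]) < phi_c:
                return False
    for i in range(m):              # cable/structure collision, Eq. (19)
        for (P, Q) in tubes:
            if segment_distance(exit_pts[i], anchor_pts[i], P, Q) < 0.5 * (phi_c + phi_s):
                return False
    return True
```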
Three configurations are necessary to work on the outer part of the structure, configuration C i being associated to path P i , i = 1, two and three, in order not to interrupt the painting and sandblasting operations during their execution. Passing from one configuration to another, one or more cables are disconnected from their exit points and connected to other exit points located elsewhere. For each configuration, the locations of the cable exit points are defined as variables of the design problem. In the present case study, the dimensions of the platform as well as the position of the cable anchor points on the platform are fixed. Constant Design parameters The number of cables, m = 8, the cable properties, and the dimensions of the platform are given. Those parameters are the same for the three configurations. The moving platform of the RCDPR analysed in this case study is driven by steel cables. The maximum allowed tension in the cables, τ max , is equal to 34 950 N and we have: 0 < τ i ≤ τ max , ∀i = 1, . . . , 8 (23) Moreover, l p , w p and h p denote the length, width and height of the platform, respectively: l p = 30 cm, w p = 30 cm and h p = 60 cm. The mass of the moving platform is m MP = 60 kg. The design (constant) parameter vector q is expressed as: q = [m, φ c , k s , τ max , l p , w p , h p , m MP ] T (24) Constraint Functions and Configuration Analysis The design problem aims to identify the locations of points A i,c for the configurations C 1 , C 2 and C 3 . At first, in order to identify the set of feasible locations for the exit points A i,c , the three robot configurations are parameterised and analysed separately in the following paragraphs. A set of exit points is feasible if the design constraints are satisfied along the whole path to be followed by the moving platform CoM. The analysed constraints are: wrench feasibility, cable interferences, cable collisions with the structure, and the maximum moving platform infinitesimal displacement due to the cable elasticity. Both suspended and fully constrained eight-cable CDPR architectures are used. In the suspended architecture, gravity plays the role of an additional cable pulling the moving platform downward, thereby keeping the cables under tension. The suspended architecture considered in this work is inspired by the CoGiRo CDPR prototype [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF][START_REF] Lamaury | Dual-space adaptive control of redundantly actuated cable-driven parallel robots[END_REF]. For the non-suspended configuration, note that eight cables is the smallest possible even number of cables that can be used for the platform to be fully constrained by the cables. Collisions between the cables as well as collisions between the cables and structure tubes should be avoided. Since sandblasting and painting operations are performed at low speed, the motion of the CDPR platform can be considered quasistatic. Hence, only the static equilibrium of the robot moving platform will be considered. The wrench feasibility constraints presented in Section 4.3 are considered such that the required external wrench set [w e ] r is an hyperrectangle defined as: -50 N ≤ f x , f y , f z ≤ 50 N (25) -7.5 Nm ≤m x , m y , m z ≤ 7.5 Nm (26) where w e = [ f x , f y , f z , m x , m y , m z ] T , f x , f y and f z being the force components of w e and m x , m y , and m z being its moment components. 
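For a given pose and exit-point set, wrench feasibility in the sense of Eq. (15) can be checked by solving one small linear feasibility problem per vertex of the required wrench set of Eqs. (25)–(26); checking the vertices is sufficient because both the required set and the set of wrenches the cables can balance are convex. The sketch below uses scipy.optimize.linprog and illustrative function names; the platform weight should be added to w_e beforehand if it is not already accounted for in the required set.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def wrench_matrix(a, b, t, Rmat):
    """Wrench matrix W of Eq. (3) for exit points a (m, 3) and platform anchor points b (m, 3)
    expressed in F_p, with platform position t and rotation matrix Rmat."""
    W = np.zeros((6, len(a)))
    for i in range(len(a)):
        bi = Rmat @ b[i]
        li = a[i] - t - bi                  # cable vector, Eq. (1)
        di = li / np.linalg.norm(li)        # unit cable vector, Eq. (4)
        W[:3, i] = di
        W[3:, i] = np.cross(bi, di)
    return W

def pose_is_wrench_feasible(W, f_max=50.0, m_max=7.5, tau_max=34950.0):
    """Eq. (15) checked on the vertices of the hyperrectangle of Eqs. (25)-(26):
    for every vertex w_e there must exist 0 <= tau <= tau_max with W tau + w_e = 0."""
    bounds = [(0.0, tau_max)] * W.shape[1]
    for signs in itertools.product((-1.0, 1.0), repeat=6):
        w_e = np.array(signs) * np.array([f_max] * 3 + [m_max] * 3)
        res = linprog(c=np.zeros(W.shape[1]), A_eq=W, b_eq=-w_e,
                      bounds=bounds, method="highs")
        if not res.success:
            return False
    return True
```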
Besides, the moving-platform infinitesimal displacements, due to the elasticity of the cables, are constrained by:

-5 cm ≤ δt x , δt y , δt z ≤ 5 cm (27)
-0.1 rad ≤ δ r x , δ r y , δ r z ≤ 0.1 rad (28)

Configuration C 2

An advantage of this configuration is a large workspace to footprint ratio. The exit points A i,2 have been arranged in a parallelepiped layout. The Cartesian coordinates a i,2 are defined as follows:

a b 1,2 = a b 2,2 = [v 1 -v 4 , v 2 -v 5 , v 3 ] T (38)
a b 3,2 = a b 4,2 = [v 1 -v 4 , v 2 + v 5 , v 3 ] T (39)
a b 5,2 = a b 6,2 = [v 1 + v 4 , v 2 + v 5 , v 3 ] T (40)
a b 7,2 = a b 8,2 = [v 1 + v 4 , v 2 -v 5 , v 3 ] T (41)

Variables v i , i = 1, . . . , 5, are equivalent for configuration C 2 to the variables u i , i = 1, . . . , 5, describing configuration C 1 . The layout of this configuration is illustrated in Fig. 8. The design variables of configuration C 2 are collected into the vector x 2 :

x 2 = [v 1 , v 2 , v 3 , v 4 , v 5 ] T (42)

Note that this configuration is composed of couples of exit points theoretically connected to the same locations: {A 1,2 , A 2,2 }, {A 3,2 , A 4,2 }, {A 5,2 , A 6,2 }, and {A 7,2 , A 8,2 }. From a technical point of view, in order to avoid any cable interference, the coupled exit points should be separated by a certain distance. For the design problem at hand, this distance has been fixed to v 0 = 5 mm. The separated exit point coordinates become:

a b 1,2 = [v 1 -v ′ 4 , v 2 -v 5 , v 3 ] T (43)
a b 2,2 = [v 1 -v 4 , v 2 -v ′ 5 , v 3 ] T (44)
a b 3,2 = [v 1 -v 4 , v 2 + v ′ 5 , v 3 ] T (45)
a b 4,2 = [v 1 -v ′ 4 , v 2 + v 5 , v 3 ] T (46)
a b 5,2 = [v 1 + v ′ 4 , v 2 + v 5 , v 3 ] T (47)
a b 6,2 = [v 1 + v 4 , v 2 + v ′ 5 , v 3 ] T (48)
a b 7,2 = [v 1 + v 4 , v 2 -v ′ 5 , v 3 ] T (49)
a b 8,2 = [v 1 + v ′ 4 , v 2 -v 5 , v 3 ] T (50)

where v ′ 4 = v 4 -v 0 and v ′ 5 = v 5 -v 0 . The Cartesian coordinates of the points B i,2 are defined as:

b b 1,2 = 1/2 [l p , -w p , h p ] T , b b 2,2 = 1/2 [-l p , w p , -h p ] T (51)
b b 3,2 = 1/2 [-l p , -w p , h p ] T , b b 4,2 = 1/2 [l p , w p , -h p ] T (52)
b b 5,2 = 1/2 [-l p , w p , h p ] T , b b 6,2 = 1/2 [l p , -w p , -h p ] T (53)
b b 7,2 = 1/2 [l p , w p , h p ] T , b b 8,2 = 1/2 [-l p , -w p , -h p ] T (54)

Table 2 describes the lower and upper bounds as well as the number of values considered for the configuration C 2 . Combining these values, 22275 configurations have been generated. Among these configurations, only 5579 configurations are feasible.

Configuration C 3

The configuration C 3 follows the path P 3 . This path is symmetric to the path P 1 with respect to the plane y b O b z b . Considering the symmetry of the tubular structure, configuration C 3 is thus selected as being the same as configuration C 1 . The discretised set of design variables chosen for the configuration C 3 is described in Table 2. The design variables for the configuration C 3 are collected into the vector x 3 :

x 3 = [w 1 , w 2 , w 3 , w 4 , w 5 ] T (55)

where the variables w i , i = 1, . . . , 5, amount to the variables u i , i = 1, . . . , 5, describing configuration C 1 . Therefore, the Cartesian coordinates of the exit points A i,3 are expressed as follows:

a b 1,3 = [w 1 + w 4 , w 2 + w 5 , -w 3 ] T   a b 2,3 = [w 1 + w 4 , w 2 + w 5 , w 3 ] T (56)

Objective Functions and Design Problem Formulation

The RCDPR should be as simple as possible, so the minimisation of the total number of cable exit point locations, V 1 = n e , is required. Consequently, the number of exit point locations shared by two or more configurations should be maximised.
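Going back to the parameterisation (43)-(50) of configuration C 2 , those expressions translate directly into code. The following sketch simply transcribes them; the numerical values in the example call are mid-range values of Table 2, chosen here only for illustration.

```python
import numpy as np

def exit_points_C2(v, v0=0.005):
    """Cartesian coordinates of the exit points A_{i,2} of the suspended
    configuration C2, following Eqs. (43)-(50).

    v  : design variable vector [v1, v2, v3, v4, v5] (m)
    v0 : offset separating the coupled exit points (5 mm in the case study)
    """
    v1, v2, v3, v4, v5 = v
    v4p, v5p = v4 - v0, v5 - v0
    return np.array([
        [v1 - v4p, v2 - v5,  v3],   # A_1,2
        [v1 - v4,  v2 - v5p, v3],   # A_2,2
        [v1 - v4,  v2 + v5p, v3],   # A_3,2
        [v1 - v4p, v2 + v5,  v3],   # A_4,2
        [v1 + v4p, v2 + v5,  v3],   # A_5,2
        [v1 + v4,  v2 + v5p, v3],   # A_6,2
        [v1 + v4,  v2 - v5p, v3],   # A_7,2
        [v1 + v4p, v2 - v5,  v3],   # A_8,2
    ])

# Example with mid-range values of Table 2 (illustrative only)
print(exit_points_C2([0.0, 10.0, 9.0, 6.0, 12.0]))
```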
The size of the robot is also minimised to reduce the size of the sandblasting and painting workshop. Finally, the mean of the moving platform infinitesimal displacement due to cable deformations is minimised. The optimisations are performed hierarchically, by means of the procedure described in Section 4.1 and the objective functions collected in Section 4.2. Hence, the design problem of the CDPR is formulated as follows: minimise          V 1 = n e V 2 = ( sx -s x )( sy -s y )( sz -s z ) V 3 = δ t 2 n p over x 1 , x 2 , x 3 subject to: ∀P m,n , m = 1, . . . , 38 n = 1, . . . , 3                  Cw ≤ d, ∀w ∈ [w e ] r d cc i, j ≥ φ c ∀i, j = 1, . . . , 8, i = j d cs i,k ≥ (φ c + φ s ) 2 ∀i = 1, . . . , 8, ∀k = 1, . . . , 20 -5 cm ≤ δt x , δt y , δt z ≤ 5 cm -0.1 rad ≤ δ r x , δ r y , δ r z ≤ 0.1 rad (60) Once the set of feasible solutions have been obtained for each path P i , a list of RCDPRs with a minimum number of exit points, n c , is extracted from the list of feasible RCDPRs. Finally, the most compact and stiff RCDPRs from the list of RCDPRs with a minimum number of exit points are the desired optimal solutions. Optimisation Results The feasible robot configurations associated with paths P 1 , P 2 and P 3 have been identified. For each path, a configuration is selected, aiming to minimise the total number of exit points required by the RCDPR to complete the task. These optimal solutions have been computed in two phases. At first, the 4576 feasible robot configurations for path P 1 are compared with the 5579 feasible robot configurations for path P 2 looking for the couple of configurations having the minimum total number of exit points. The resulting couple of configurations is then compared to the feasible robot configurations for path P 3 , and the sets of robot configurations that minimise the overall number n e of exit points along the three paths are retained. According to the discrete optimisation analysis, 16516 triplets of configurations minimise this overall number of exit points. A generic CDPR composed of eight cables requires eight exit points A i = 1, . . . , 8 on the base. It is the case for the fully constrained configurations C 1 and C 3 . The suspended CDPR presents four coincident couples of exit points. Hence, in the present case study, the maximum overall number of exit points of the RCDPR is equal to 20. The best results provide a reduction of four points. Regarding the configurations C 1 and C 2 , points A 5,2 and A 7,2 can be coincident with points A 3,1 and A 5,1 , respectively. Alternatively, points A 5,2 and A 7,2 can be coincident with points A 1,1 and A 7,1 . As far as configurations C 2 and C 3 are concerned, points A 1,2 and A 3,2 can be coincident with points A 8,3 and A 2,3 , respectively. Likewise, points A 1,2 and A 3,2 can be coincident with points A 4,3 and A 6,3 , respectively. The total volume of the robot has been computed for the 16516 triplets of configurations minimising the overall number of exit points. Ninety six RCDPRs amongst the 16516 triplets of configurations have the smallest size, this minimum size being equal to 5104 m 3 . Selection of the best solutions has been promoted through the third optimisation criterion based on the robot stiffness. Twenty solutions provided a minimum mean of the moving platform displacement equal to 1.392 mm. An optimal solution is illustrated in Fig. 9. The corresponding optimal design parameters are given in Table 3. 
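The hierarchical optimisation described above can be sketched as a lexicographic minimisation over triplets of feasible configurations. The data layout (dictionary keys) and the brute-force enumeration below are assumptions made for illustration; the chapter itself proceeds in two phases (P 1 against P 2 , then P 3 ) to keep the combinatorics tractable.

```python
import itertools

def select_optimal_triplets(feas_C1, feas_C2, feas_C3):
    """Hierarchical selection sketch for the design problem (60).

    Each feas_Ci is assumed to be a list of dicts with (illustrative) keys:
      'exit_points' : set of (x, y, z) tuples, rounded so that shared points coincide,
      'mean_disp'   : mean moving-platform displacement along the path (V3).
    """
    scored = []
    for c1, c2, c3 in itertools.product(feas_C1, feas_C2, feas_C3):
        pts = c1['exit_points'] | c2['exit_points'] | c3['exit_points']
        V1 = len(pts)                          # overall number of exit points
        xs, ys, zs = zip(*pts)                 # robot size approximated by the
        V2 = ((max(xs) - min(xs)) *            # bounding box of all exit points
              (max(ys) - min(ys)) *
              (max(zs) - min(zs)))
        V3 = sum(c['mean_disp'] for c in (c1, c2, c3)) / 3.0
        scored.append(((V1, V2, V3), (c1, c2, c3)))
    best = min(s for s, _ in scored)           # lexicographic: V1, then V2, then V3
    return [t for s, t in scored if s == best]
```

Minimising the tuple (V 1 , V 2 , V 3 ) lexicographically reproduces the hierarchy: among the triplets with the minimum number of exit points, keep the smallest ones, and among those the stiffest ones.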
Figure 10 illustrates the minimum degree of constraint satisfaction s introduced in [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF] and computed along the paths P 1 , P 2 , and P 3 , which were discretised into 388 points. It turns out that the moving platform is in a feasible static equilibrium along all the paths because the minimum degree of constraint satisfaction remains negative. Referring to [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF], the minimum degree of constraint satisfaction can be used to test wrench feasibility since it is negative when a platform pose is wrench feasible. Configurations C 1 and C 3 maintain their degree of satisfaction lower than -400 N. On the contrary, configuration C 2 is often close to 0. The poses where s vanishes are such that two cables of the suspended CDPR of configuration C 2 are slack. The proposed RCDPR design strategy yielded good solutions, but it is time consuming. The whole procedure, performed on an Intel Core TM i7-3630QM 2.40 GHz, required 19 h of computations on Matlab 2013a. Therefore, the development of more efficient strategies for the design of RCDPRs will be part of our future work. Moreover, the mass of the cables may have to be taken into account.

Conclusions

When the task to be accomplished is complicated and the working environment is extremely cluttered, CDPRs may not succeed in the task execution. The problem can be solved by means of RCDPRs. This chapter focused on RCDPRs whose cable exit points on the base frame can be located on a predefined grid of possible positions. A design strategy for such discrete RCDPRs was introduced. This design strategy assumes that the number of configurations needed to complete the task is defined by the designer according to his or her experience. The designer divides the prescribed trajectory or workspace into a set of partitions. Each partition has to be entirely covered by one configuration. The position of the cable exit points, for all the configurations, is computed by means of an optimisation algorithm. The algorithm optimises one or more global objective function(s) while satisfying a set of user-defined constraints. Examples of possible global objective functions include the RCDPR size, the overall number of exit points, and the number of cable reconfigurations. A case study was presented in order to validate the RCDPR design strategy. The RCDPR has to paint and sandblast three of the four external sides of a tubular structure. Each of these three sides is covered by one configuration. The design strategy provided several optimal solutions to the case study, minimising hierarchically the overall number of cable exit points, the size of the RCDPR, and the moving-platform displacements due to the elasticity of the cables. The computation of the optimal solution required nineteen hours of computation. More complicated tasks may thus require higher computation times. An improvement of the proposed RCDPR design strategy should be investigated in order to reduce this computational effort.

Fig. 9: Optimal Reconfigurable Cable-Driven Parallel Robot.

Fig. 10: Minimum degree of constraint satisfaction [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The analysis has been performed by discretising the paths P 1 , P 2 , and P 3 into 388 points.

Fig. 1: Architecture of a CDPR developed in the framework of the IRT Jules Verne CAROCA project.
Fig. 2 : 2 Fig. 2: CAROCA prototype: a reconfigurable cable-driven parallel robot working in a cluttered environment (Courtesy of IRT Jules Verne and STX France). Fig. 4 : 4 Fig. 4: Schematic of a RCDPR. The red points represent the possible locations of the cable exit points, where the pulleys can be fixed. Fig. 5 : 5 Fig. 5: Design strategy for RCDPRs. Fig. 6 : 6 Fig. 6: Case study model and prescribed paths P 1 , P 2 and P 3 of the moving platform CoM. Fig. 7 : 7 Fig. 7: Design variables parametrising the configuration C 1 . Fig. 8 : 8 Fig. 8: Design variables parametrising the configuration C 2 . a b 3 , 3 = [w 1 - 331 w 4 , w 2 + w 5 , -w 3 ] T a b 4,3 = [w 1w 4 , w 2 + w 5 , w 3 ] T (57) a b 5,3 = [w 1w 4 , w 2w 5 , -w 3 ] T a b 6,3 = [w 1w 4 , w 2w 5 , w 3 ] T (58) a b 7,3 = [w 1 + w 4 , w 2w 5 , -w 3 ] T a b 8,3 = [w 1 + w 4 , w 2w 5 , w 3 ] T (59) Table 1 : 1 CDPR reconfigurable parameter classification. Reconfigurable Parameter Discrete Domain Continuous Domain Exit Point Locations Yes Yes Platform Anchor Point Locations Yes Yes Cable Number Yes No architecture of the RCDPRs can be modified, permitting both suspended and fully constrained CDPR configurations. n t,v of the n t,v design variables, are computed. Therefore, n t,C = ∏ n t,v v=1 ε t,v possible configurations are generated for the t-th part of the prescribed workspace or trajectory.Step VI. Constraint Functions. The user defines a set of n φ constraint functions, φ k , k =, 1, . . . , n φ . These functions are applied to all possible configurations associated to the n t parts of the prescribed workspace or trajectory.Step VII. Configuration Analysis. For each portion of the prescribed workspace or trajectory, all the possible configurations generated at Step V with respect to the n φ user-defined constraint functions are tested. The n f ,t configurations satisfying the constraints all over the t-th part of the prescribed workspace or trajectory are defined hereafter as feasible configurations. Step VIII. Feasible Configuration Combination. The set of n t configurations that lead to the achievement of the prescribed task are computed. Each set is composed by selecting one of the n f ,t feasible configurations for each part of the prescribed workspace or trajectory. The number of feasible configuration sets generated during this step is equal to n C . Step IX. Objective Functions. The designer defines one or more global objective function(s), V t ,t =, 1, . . . , n V , where n V is equal to the number of global objective functions taken into account. The global objective functions associated with RCDPRs do not focus solely on a single configuration. Table 2 : 2 Design variables associated with configurations C 1 , C 2 and C 3 . Variables Lower Bounds Upper Bounds Number of values u 1 5.5 7.5 9 u 2 8.0 12.0 9 C 1 u 3 6 10 5 u 4 0.5 2.5 9 u 5 10 14 5 v 1 -1 1 9 v 2 8.0 12.0 5 C 2 v 3 7 11 9 v 4 5 7.5 11 v 5 10 14 5 w 1 -7.5 -5.5 9 w 2 8.0 12.0 9 C 3 w 3 6 10 5 w 4 0.5 2.5 9 w 5 10 14 5 Table 3 : 3 Design parameters of the selected optimum RCDPR. Conf. var.1 var.2 var.3 var.4 var.5 x 1 6.25 10.0 8.0 1.0 11.0 x 3 0 10.0 8.0 5.25 11.0 x 3 -6.25 10.0 8.0 1.0 11.0 Acknowledgements This research work is part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures). The authors wish to associate the industrial and academic partners of this project, namely, STX, Naval Group, AIRBUS and CNRS. 
Configuration C 1 A fully-constrained configuration has been assigned to configuration C 1 . The exit points A i,1 have been arranged in a parallelepiped layout. The edges of the parallelepiped are aligned with the axes of frame F b . This layout can be fully described by means of five variables: u 1 , u 2 and u 3 define the Cartesian coordinates of the parallelepiped center, while u 4 and u 5 denote the half-lengths of the parallelepiped along the axes x b and y b , respectively. Therefore, the Cartesian coordinates of the exit points A i,1 are expressed as follows: The layout of the first robot configuration is described in Fig. 7. The corresponding design variables are collected into the vector x 1 : The Cartesian coordinates of the anchor points B i,1 of the cables on the platform are expressed as: A discretised set of design variables have been considered. The lower and upper bounds as well as the number of values for each variable are given in Table 2. 18225 robot configurations have been generated with those values. It turns out that 4576 configurations satisfy the design constraints along the 38 discretised points of path P 1 . Configuration C 2 A suspended redundantly actuated eight-cable CDPR architecture has been attributed to the configuration C 2 in order to avoid collisions between the cables and the tubular structure. The selected configuration is based on CoGiRo, a suspended CDPR designed and built in the framework of the ANR CoGiRo project [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF][START_REF] Lamaury | Dual-space adaptive control of redundantly actuated cable-driven parallel robots[END_REF].
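Returning to the discretised design variables of configuration C 1 (Table 2), a possible way to generate the 18225 candidate vectors x 1 and keep the feasible ones is sketched below. The feasibility predicate standing for the constraint functions of Section 4.3 is left as a user-supplied assumption, as are the variable names.

```python
import itertools
import numpy as np

# Discretised design variables of configuration C1 (Table 2):
# (lower bound, upper bound, number of values)
C1_GRID = {
    'u1': (5.5,  7.5, 9),
    'u2': (8.0, 12.0, 9),
    'u3': (6.0, 10.0, 5),
    'u4': (0.5,  2.5, 9),
    'u5': (10.0, 14.0, 5),
}

def enumerate_C1(is_feasible):
    """Generate the 9*9*5*9*5 = 18225 candidate vectors x1 = [u1,...,u5] and
    keep those satisfying the design constraints along the 38 points of P1.
    `is_feasible` is assumed to implement the constraint functions
    (wrench feasibility, cable interferences, collisions, elasticity)."""
    axes = [np.linspace(lo, hi, n) for lo, hi, n in C1_GRID.values()]
    return [np.array(x1) for x1 in itertools.product(*axes) if is_feasible(np.array(x1))]

# Sanity check on the number of candidates (should print 18225)
print(np.prod([n for _, _, n in C1_GRID.values()]))
```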
54,140
[ "170861", "10659" ]
[ "235335", "388165", "441569", "481388", "473973", "441569" ]
01758178
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758178/file/Sensitivity%20Analysis%20of%20the%20Elasto-Geometrical%20Model%20of%20Cable-Driven%20Parallel%20Robots%20-%20Cablecon2017.pdf
Sana Baklouti Stéphane Caro Eric Courteille Sensitivity Analysis of the Elasto-Geometrical Model of Cable-Driven Parallel Robots This paper deals with the sensitivity analysis of the elasto-geometrical model of Cable-Driven Parallel Robots (CDPRs) to their geometric and mechanical uncertainties. This sensitivity analysis is crucial in order to come up with a robust model-based control of CDPRs. Here, 62 geometrical and mechanical error sources are considered to investigate their effect onto the static deflection of the movingplatform (MP) under an external load. A reconfigurable CDPR, named ``CAROCA´´, is analyzed as a case of study to highlight the main uncertainties affecting the static deflection of its MP. Introduction In recent years, there has been an increasing number of research works on the subject of Cable-Driven Parallel Robots (CDPRs). The latter are very promising for engineering applications due to peculiar characteristics such as large workspace, simple structure and large payload capacity. For instance, CDPRs have been used in many applications like rehabilitation [START_REF] Merlet | MARIONET, a family of modular wire-driven parallel robots[END_REF], pick-and-place [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], sandblasting and painting [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF] operations. Many spatial prototypes are equipped with eight cables for six Degrees of Freedom (DOF) such as the CAROCA prototype, which is the subject of this paper. Sana Baklouti Université Bretagne-Loire, INSA-LGCGM-EA 3913, 20, avenue des Buttes de Cöesmes, 35043 Rennes, France, e-mail: [email protected] Stéphane Caro CNRS, Laboratoire des Sciences du Numérique de Nantes, UMR CNRS n6004, 1, rue de la Noë, 44321 Nantes, France, e-mail: [email protected] Eric Courteille Université Bretagne-Loire, INSA-LGCGM-EA 3913, 20, avenue des Buttes de Cöesmes, 35043 Rennes, France, e-mail: [email protected] 1 To customize CDPRs to their applications and enhance their performances, it is necessary to model, identify and compensate all the sources of errors that affect their accuracy. Improving accuracy is still possible once the robot is operational through a suitable control scheme. Numerous control schemes were proposed to enhance the CDPRs precision on static tasks or on trajectory tracking [START_REF] Jamshidifar | Adaptive Vibration Control of a Flexible Cable Driven Parallel Robot[END_REF][START_REF] Fang | Motion control of a tendonbased parallel manipulator using optimal tension distribution[END_REF][START_REF] Zi | Dynamic modeling and active control of a cable-suspended parallel robot[END_REF]. The control can be either off-line through external sensing in the feedback signal [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], or on-line control based on a reference model [START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. This paper focuses on the sensitivity analysis of the CDPR MP static deflection to uncertain geometrical and mechanical parameters. As an illustrative example, Fig. 1: CAROCA prototype: a reconfigurable CDPR (Courtesy of IRT Jules Verne, Nantes) a suspended configuration of the reconfigurable CAROCA prototype, shown in Fig. 1, is studied. 
First, the manipulator under study is described. Then, its elastogeometrical model is written while considering cable mass and elasticity in order to express the static deflection of the MP subjected to an external load. An exhaustive list of geometrical and mechanical uncertainties is given. Finally, the sensitivity of the MP static deflection to these uncertainties is analyzed. Parametrization of the CAROCA prototype The reconfigurable CAROCA prototype illustrated in Fig. 1 was developed at IRT Jules Verne for industrial operations in cluttered environment such as painting and sandblasting large structures [START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF][START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF]. This prototype is reconfigurable because its pulleys can be displaced in a discrete manner on its frame. The size of the latter is 7 m long, 4 m wide and 3 m high. The rotation-resistant steel cables Carl Stahl Technocables Ref 1692 of the CAROCA prototype are 4 mm diameter. Each cable consists of 18 strands twisted around a steel core. Each strand is made up of 7 steel wires. The cable breaking force is 10.29 kN. ρ denotes the cable linear mass and E the cable modulus of elasticity. In this section, both sag-introduced and axial stiffness of cables are considered in the elasto-geometrical modeling of CDPR. The inverse elasto-geometrical model and the direct elasto-geometrical model of CDPR are presented. Then, the variations in static deflection due to external loading is defined as a sensitivity index. Inverse Elasto-Geometric Modeling (IEGM) The IEGM of a CDPR aims at calculating the unstrained cable length for a given pose of its MP. If both cable mass and elasticity are considered, the inverse kinematics of the CDPR and its static equilibrium equations should be solved simultaneously. The IEGM is based on geometric closed loop equations, cable sagging relationships and static equilibrium equations. The geometric closed-loop equations take the form: b p = b b i + b l i -b R p p a i , (1) where b R p is the rotation matrix from F b to F p and l i is the cable length vector. The cable sagging relationships between the forces i f i = [ i f xi , 0, i f zi ] applied at the end point A i of the ith cable and the coordinates vector i a i = [ i x Ai , 0, i z Ai ] of the same point resulting from the sagging cable model [START_REF] Irvine | Cable structures[END_REF] are expressed in F i as follows: i x Ai = i f xi L usi ES + | i f xi | ρg [sinh -1 ( i f zi f C i xi ) -sinh -1 ( i f zi -ρgL usi i f xi )], (2a) i z Ai = i f xi L usi ES - ρgL 2 usi 2ES + 1 ρg [ i f xi 2 + i f zi 2 -i f xi 2 + ( i f zi -ρgL usi ) 2 ], (2b) where L usi is the unstrained length of ith cable, g is the acceleration due to gravity, S is the cross sectional area of the cables. The static equilibrium equations of the MP are expressed as: Wt + w ex = 0, (3) where W is the wrench matrix, w ex is the external wrench vector and t is the 8dimensional cable tension vector. Those tensions are computed based on the tension distribution algorithm described in [START_REF] Mikelsons | A real-time capable force calculation algorithm for redundant tendon-based parallel manipulators[END_REF]. Direct elasto-geometrical model (DEGM) The direct elasto-geometrical model (DEGM) aims to determine the pose of the mobile platform for a given set of unstrained cable lengths. 
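As a small numerical illustration of the sagging relationships (2a)-(2b), the function below evaluates the end-point coordinates of one cable in its frame F i . It follows the standard Irvine elastic-catenary form, with the first term of (2b) taken as f zi L usi /ES; the cross-sectional area and the force values used in the example call are assumptions, not measured CAROCA data.

```python
import numpy as np

def irvine_endpoint(fx, fz, L_us, rho, E, S, g=9.81):
    """Coordinates (x_A, z_A) of the cable end point A_i in the cable frame F_i,
    following the sagging relationships (2a)-(2b).

    fx, fz : horizontal and vertical force components applied at A_i (N)
    L_us   : unstrained cable length (m)
    rho    : cable linear mass (kg/m); E : modulus of elasticity (Pa)
    S      : metallic cross-sectional area (m^2)
    """
    ES = E * S
    x_A = (fx * L_us / ES
           + abs(fx) / (rho * g) * (np.arcsinh(fz / fx)
                                    - np.arcsinh((fz - rho * g * L_us) / fx)))
    z_A = (fz * L_us / ES
           - rho * g * L_us**2 / (2.0 * ES)
           + (np.sqrt(fx**2 + fz**2)
              - np.sqrt(fx**2 + (fz - rho * g * L_us)**2)) / (rho * g))
    return x_A, z_A

# Illustrative values close to the CAROCA cables: rho = 0.1015 kg/m,
# E = 102 GPa; the 8.5 mm^2 metallic area and the forces are assumed here.
print(irvine_endpoint(fx=600.0, fz=-150.0, L_us=7.0,
                      rho=0.1015, E=102e9, S=8.5e-6))
```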
The constraints of the DEGM are the same as the IEGM, i.e, Eq. ( 1) to Eq. ( 3). If the effect of cable weight on the static cable profile is non-negligible, the direct kinematic model of CDPRs will be coupled with the static equilibrium of the MP. For a 6 DOFs CDPR with 8 driving cables, there are 22 equations and 22 unknowns. In this paper, the non-linear Matlab function ``lsqnonlin´´is used to solve the DEGM. Static deflection If the compliant displacement of the MP under the external load is small, the static deflection of the MP can be calculated by its static Cartesian stiffness matrix [START_REF] Carbone | Stiffness analysis and experimental validation of robotic systems[END_REF]. However, once the cable mass is considered, the sag-introduced stiffness should be taken into account. Here, the small compliant displacement assumption is no longer valid, mainly for heavy or/and long cables with light mobile platform. Consequently, the static deflection can not be calculated through the Cartesian stiffness matrix. In this paper, the IEGM and DEGM are used to define and calculate the static deflection of the MP under an external load. The CDPR stiffness is characterized by the static deflection of the MP. Note that only the positioning static deflection of the MP is considered in order to avoid the homogenization problem [START_REF] Nguyen | Stiffness Matrix of 6-DOF Cable-Driven Parallel Robots and Its Homogenization[END_REF]. As this paper deals with the sensitivity of the CDPR accuracy to all geometrical and mechanical errors, the elastic deformations of the CDPR is involved. This problem is solved by deriving the static deflection of the CDPR obtained by the subtraction of the poses calculated with and without an external payload. For a desired pose of the MP, the IEGM gives a set of unstrained cable lengths L us . This set is used by the DEGM to calculate first, the pose of the MP under its own weight. Then, the pose of the MP is calculated when an external load (mass addition) is applied. Therefore, the static deflection of the MP is expressed as: dp j,k = p j,k -p j,1 , (4) where p j,1 is the pose of the MP considering only its own weight for the j th pose configuration and p j,k is the pose of the MP for the set of the j th pose and k th load configuration. Error modeling This section aims to define the error model of the elasto-geometrical CDPR model. Two types of errors are considered: geometrical errors and mechanical errors. Geometrical errors The geometrical errors of the CDPR are described by δ b i , the variation in vector b i , δ a i , the variation in vector a i , and δ g, the uncertainty vector of the gravity center position; So, 51 uncertainties. The geometric errors can be divided into base frame geometrical errors and MP geometrical errors and mainly due to manufacturing errors. Base frame geometrical errors The base frame geometrical errors are described by vectors δ b i , (i=1..8). As the point B i is considered as part of its correspondent pulley, it is influenced by the elasticity of the pulley mounting and its assembly tolerance. b i is particularly influenced by pulleys tolerances and reconfigurability impact. Moving-platform geometrical errors The MP geometrical errors are described by vectors δ a i , (i=1..8), and δ g. The gravity center of the MP is often supposed to coincide with its geometrical center P. This hypothesis means that the moments generated by an inaccurate knowledge of the gravity center position or by its potential displacement are neglected. 
The Cartesian coordinate vector of the geometric center G does not change in frame F p , but strongly depends on the real coordinates of exit points A i that are related to uncertainties in mechanical welding of the hooks and in MP assembly. Mechanical errors The mechanical errors of the CDPR are described by the uncertainty in the MP mass (δ m) and the uncertainty on the cables mechanical parameters (δ ρ and δ E). Besides, uncertainties in the cables tension δ t affect the error model. As a result, 11 mechanical error sources are taken into account. End-effector mass As the MP is a mechanically welded structure, there may be some differences between the MP mass and inertia matrix given by the CAD software and the real ones. The MP mass and inertia may also vary in operation In this paper, MP mass uncertainty δ m is about ± 10% the nominal mass. Cables parameters Linear mass: The linear mass ρ of CAROCA cables is equal to 0.1015 kg/m. The uncertainty of this parameter can be calculated from the measurement procedure as: δ ρ = m c δ L + L δ m c L 2 , where m c is the measured cable mass for a cable length L. δ L and δ m c are respectively the measurement errors of the cable length and mass. Modulus of elasticity: This paper uses experimental hysteresis loop to discuss the modulus of elasticity uncertainty. Figure 3 shows the measured hysteresis loop of the 4 mm cable where the unloading path does not correspond to the loading path. The area in the center of the hysteresis loop is the energy dissipated due to internal friction in the cable. It depicts a non-linear correlation in the lower area between load and elongation. Based on experimental data presented in Fig. 3, Table 2 presents the modulus of elasticity of a steel wire cable for different operating margins, when the cable is in loading or unloading phase. This modulus is calculated as follows: E p-q = L c F q% -F p% S(x q -x p ) , ( 5 ) where S is the metallic cross-sectional area, i.e. the value obtained from the sum of the metallic cross-sectional areas of the individual wires in the rope based on their nominal diameters. x p and x q are the elongations at forces equivalent to p% and q% (F p% and F q% ), respectively, of the nominal breaking force of the cable measured during the loading path (Fig. 3). L c is the measured initial cable length. For a given range of loads (Tab. 2), the uncertainty on the modulus of elasticity depends only on the corresponding elongations and tensions measurements. In this case, the absolute uncertainty associated with applied force and resulting elongation measurements from the test bench outputs is estimated to be ± 1 N and ± 0.03 mm, respectively; so, an uncertainty of ± 2 GPa can be applied to the calculation of the modulus of elasticity. According to the International Standard ISO 12076, the modulus of elasticity of a steel wire cable is E 10-30 . However, the CDPR cables do not work always between F 10% and F 30% in real life and the cables can be in loading or unloading phase. The mechanical behavior of cables depends on MP dynamics, which affects the variations in cable elongations and tensions. From Table 2, it is apparent that the elasticity moduli of cables change with the operating point changes. For the same applied force, the modulus of elasticity for loaded and unloaded cables are not the same. While the range of the MP loading is unknown, a large range of uncertainties on the modulus of elasticity should be defined as a function of the cable tensions. 
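Equation (5) and the expression of δρ lend themselves to a direct implementation. The sketch below computes a secant modulus from two points of the loading path and propagates the linear-mass uncertainty; all numerical values in the example calls are illustrative assumptions rather than the measured data of Fig. 3.

```python
def modulus_of_elasticity(L_c, S, F_p, F_q, x_p, x_q):
    """Secant modulus E_{p-q} between two operating points of the loading
    path, following Eq. (5): E = L_c (F_q% - F_p%) / (S (x_q - x_p)).

    L_c : measured initial cable length (m); S : metallic cross-section (m^2)
    F_p, F_q : forces at p% and q% of the cable breaking force (N)
    x_p, x_q : corresponding elongations (m)
    """
    return L_c * (F_q - F_p) / (S * (x_q - x_p))

def linear_mass_uncertainty(m_c, L, d_mc, d_L):
    """Uncertainty on the cable linear mass rho = m_c / L:
    d_rho = (m_c dL + L dm_c) / L^2."""
    return (m_c * d_L + L * d_mc) / L**2

# Illustrative numbers only: a 2 m specimen, an assumed 8.5 mm^2 metallic area,
# loads at 10% and 30% of the 10.29 kN breaking force.
E_10_30 = modulus_of_elasticity(L_c=2.0, S=8.5e-6,
                                F_p=1029.0, F_q=3087.0,
                                x_p=2.00e-3, x_q=6.75e-3)
print(E_10_30 / 1e9, "GPa")
print(linear_mass_uncertainty(m_c=0.203, L=2.0, d_mc=1e-3, d_L=1e-3))
```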
Tension distribution

Two cases of uncertainty in the force determination can be defined depending on the control scheme: The first case is when the control scheme gives a tension set-point to the actuators resulting from the force distribution algorithm. If there is no feedback on the tension measurements, the range of uncertainty is relatively high. Generally, the compensation effort does not consider dry and viscous friction in the cable drums and pulleys. This non-compensation leads to static errors and delay [START_REF] De Wit | Robust adaptive friction compensation[END_REF] that degrade the CDPR control performance. That leads to a large range of uncertainties in tensions. As the benefit of the tension distribution algorithm is less important for a suspended CDPR configuration than for a fully-constrained one [START_REF] Lamaury | Contribution a la commande des robots parallles a cbles redondance d'actionnement[END_REF], a range of ± 15 N is defined. The second case is when the tensions are measured. If measurement signals are very noisy, amplitude peaks of the correction signal may lead to a failure of the force distribution. Such a failure may also occur due to variations in the MP and pulley parameters. Here, the deviation is defined based on the measurement tool precision. However, it remains lower than the deviation of the first case by at least 50%.

Sensitivity Analysis

Due to the non-linearities of the elasto-geometrical model, an explicit sensitivity matrix and explicit sensitivity coefficients [START_REF] Zi | Error modeling and sensitivity analysis of a hybrid-driven based cable parallel manipulator[END_REF][START_REF] Miermeister | An elastic cable model for cable-driven parallel robots including hysteresis effects[END_REF] cannot be computed. Therefore, the sensitivity of the elasto-geometrical model of the CDPR to geometrical and mechanical errors is evaluated statistically. Here, MATLAB has been coupled with modeFRONTIER, a process integration and optimization software platform [17], for the analysis. The RMS (Root Mean Square) of the static deflection of the CAROCA MP is studied. The nominal mass of the MP and the additional mass are equal to 180 kg and 50 kg, respectively.

Influence of mechanical errors

In this section, all the uncertain parameters of the elasto-geometrical CAROCA model are defined with uniformly distributed deviations. The uncertainty range and discretization step are given in Tab. 3. On this basis, 2000 SOBOL quasi-random observations are created.

Parameter: m (kg) | ρ (kg/m) | E (GPa) | a i (m) | b i (m) | δt i (N)
Uncertainty range: ± 18 | ± 0.01015 | ± 18 | ± 0.015 | ± 0.03 | ± 15
Step: 0.05 | 3*10 -5 | 0.05 | 0.0006 | 0.0012 | 0.1

In this configuration, the operating point of the MP is supposed to be unknown. A large variation range of the modulus of elasticity is considered. The additional mass corresponds to a variation in cable tensions from 574 N to 730 N, which corresponds to a modulus of elasticity of 84.64 GPa. Thus, while the operating point of the MP is unknown, an uncertainty of ± 18 GPa is defined with regard to the measured modulus of elasticity E = 102 GPa. Figure 4a displays the distribution fitting of the static deflection RMS. It shows that the RMS distribution follows a quasi-uniform law whose mean µ 1 is equal to 1.34 mm. The RMS of the static deflection of the MP is bounded between a minimum value RMS min equal to 1.12 mm and a maximum value RMS max equal to 1.63 mm; a variation of 0.51 mm under all uncertainties, which represents 38% of the nominal value of the static deflection.
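The statistical exploration described above can be sketched with a quasi-random Sobol design over the six uncertain parameters of Tab. 3. In the sketch below, the IEGM/DEGM computation chain is abstracted as a user-supplied callable, and a toy closed-form stand-in is used only so that the example runs; the discretization steps of Tab. 3 are ignored and the sketch does not reproduce the modeFRONTIER study.

```python
import numpy as np
from scipy.stats import qmc

def sobol_sensitivity(deflection_rms, n_samples=2048, seed=0):
    """Quasi-random (Sobol) exploration of the uncertain parameters of Tab. 3.

    `deflection_rms` is assumed to be a callable returning the RMS of the
    moving-platform static deflection for one realisation of
    [m, rho, E, da, db, dt]; it stands for the IEGM/DEGM computation chain.
    n_samples is a power of two close to the 2000 observations of the study.
    """
    nominal = np.array([180.0, 0.1015, 102e9, 0.0, 0.0, 0.0])
    half_range = np.array([18.0, 0.01015, 18e9, 0.015, 0.03, 15.0])
    sampler = qmc.Sobol(d=6, scramble=True, seed=seed)
    u = sampler.random(n_samples)                      # samples in [0, 1]^6
    params = nominal + (2.0 * u - 1.0) * half_range    # uniform deviations
    rms = np.array([deflection_rms(p) for p in params])
    return rms.min(), rms.max(), rms.mean()

# Toy stand-in model, only so that the sketch executes
toy = lambda p: 1.34e-3 * (102e9 / p[2]) * (p[1] / 0.1015)
print(sobol_sensitivity(toy))
```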
Figure 4b depicts the RMS of the MP static deflection as a function of variations in E and ρ simultaneously, whose values vary respectively from 0.09135 to 0.11165 kg/m and from 84.2 to 120.2 GPa. The static deflection is very sensitive to cables mechanical behavior. The RMS varies from 0.42 mm to 0.67 mm due to the uncertainties of these two parameters only. As a matter of fact, the higher the cable modulus of elasticity, the smaller the RMS of the MP static deflection. Conversely, the smaller the linear mass of the cable, the smaller the RMS of the MP static deflection. Accordingly, the higher the sag-introduced stiffness, the higher the MP static deflection. Besides, the higher the axial stiffness of the cable, the lower the MP static deflection. Figure 4c illustrates the RMS of the MP static deflection as a function of variations in ρ and m, whose value varies from 162 kg to 198 kg. The RMS varies from 0.52 mm to 0.53 mm due to the uncertainties of these two parameters only. The MP mass affects the mechanical behavior of cables: the heavier the MP, the larger the axial stiffness, the smaller the MP static deflection. Therefore, a fine identification of m and ρ is very important to establish a good CDPR model. Comparing to the results plotted in Fig. 4b, it is clear that E affects the RMS of the MP static deflection more than m and ρ. As a conclusion, the integration of cables hysteresis effects on the error model is necessary and improves force algorithms and the identification of the robot geometrical parameters [START_REF] Miermeister | An elastic cable model for cable-driven parallel robots including hysteresis effects[END_REF]. Influence of geometrical errors In this section, the cable tension set-points during MP operation are supposed to be known; so, the modulus of elasticity can be calculated around the operating point and the confidence interval is reduced to ± 2 GPa. The uncertainty range and the discretization step are provided in Tab. 4. Figure 5a displays the distribution fitting of the MP static deflection RMS. It shows that the RMS distribution follows a normal law whose mean µ 2 is equal to 1.32 mm and its standard deviation σ 2 is equal to 0.01 mm. This deviation is relatively small, which allows to say that the calibration through static deflection is not obvious. The RMS of the static deflection of the MP is bounded between a minimum value RMS min equal to 1.28 mm and a maximum value RMS max equal to 1.39 mm; a variation of 0.11 mm under all uncertainties. The modulus of elasticity affects the static compliant of the MP, which imposes to always consider E error while designing a CDPR model. The bar charts plotted in Fig. 5b and Fig. 5c present, respectively, the effects of the uncertainties in a i and b i , (i=1..8), to the static deflection of the CAROCA for symmetric (0 m, 0 m, 1.75 m) and non-symmetric (3.2 m, 1.7 m, 3 m) robot configurations. These effects are determined based on t-student index of each uncertain parameter. This index is a statistical tool that can estimate the relationships between outputs and uncertain inputs. The t-Student test compares the difference between the means of two samples of designs taken randomly in the design space: • M + is the mean of the n + values for an objective S in the upper part of domain of the input variable, • M -is the mean of the n -values for an objective S in the lower part of domain of the input variable. 
The t-Student is defined as t = |M -- M + | V 2 g n - + V 2 g n + , where V g is the general variance [START_REF] Courteille | Design optimization of a deltalike parallel robot through global stiffness performance evaluation[END_REF]. When the MP is in a symmetric configuration, all attachment points have nearly the same effect size. However, when it is located close to points B 2 and B 4 , the effect size of their uncertainties becomes high. Moreover, the effect of the corresponding mobile points (A 2 and A 4 ) increases. It means that the closer the MP to a given point, the higher the effect of the variations in the Cartesian coordinates of the corresponding exit point of the MP onto its static deflection. That can be explained by the fact that when some cables are longer than others and become slack for a non-symmetric position, the sag effect increases. Consequently, a good identification of geometrical parameters is highly required. In order to minimize these uncertainties, a good calibration leads to a better error model. Conclusion This paper dealt with the sensitivity analysis of the elasto-geometrical model of CDPRs to mechanical and geometrical uncertainties. The CAROCA prototype was used as a case of study. The validity and identifiability of the proposed model are verified for the purpose of CDPR model-based control. That revealed the importance of integrating cables hysteresis effect into the error modeling to enhance the knowledge about cables mechanical behavior, especially when there is no feedback about tension measurement. It appears that the effect of geometrical errors onto the static deflection of the moving-platform is significant too. Some calibration [START_REF] Dit Sandretto | Certified calibration of a cable-driven robot using interval contractor programming[END_REF][START_REF] Joshi | Calibration of a 6-DOF cable robot using two inclinometers[END_REF] and self-calibration [START_REF] Miermeister | Auto-calibration method for overconstrained cable-driven parallel robots[END_REF][START_REF] Borgstrom | Nims-pl: A cable-driven robot with self-calibration capabilities[END_REF] approaches were proposed to enhance the CDPR performances. More efficient strategies for CDPR calibration will be performed while considering more sources of errors in a future work. Fig. 2 : 2 Fig. 2: The ith closed-loop of a CDPR 5 A 5 6 0.2 0.15 -0.125 B 7 -3.5 -2 3.5 A 7 0.2 -0.15 -0.125 B 8 3.5 -2 3.5 A 8 -0.2 -0.15 0.125 3 Elasto-geometric modeling Fig. 3 : 3 Fig. 3: Load-elongation diagram of a steel wire cable measured in steady state conditions at the rate of 0.05 mm/s Fig. 4 : 4 Fig. 4: (a) Distribution of the RMS of the MP static deflection (b) Evolution of the RMS under a simultaneous variations of E and ρ (c) Evolution of the RMS under a simultaneous variations of m and ρ Fig. 5 : 5 Fig. 5: (a) Distribution of the RMS of the MP static deflection (b) Effect of uncertainties in a i (c) Effect of uncertainties in b i Table 1 : 1 Cartesian coordinates of anchor points A i (exit points B i , resp.) expressed in F p (in F b , resp.) 
Table 2 : 2 Modulus of elasticity while loading or unloading phase Modulus of elasticity (GPa) E 1-5 E 5-10 E 5-20 E 5-30 E 10-15 E 10-20 E 10-30 E 20-30 Loading 72.5 83.2 92.7 97.2 94.8 98.3 102.2 104.9 Unloading 59.1 82.3 96.2 106.5 100.1 105.1 115 126.8 Table 3 : 3 Uncertainties and steps used to design the error model Parameter Table 4 : 4 Uncertainties and steps used to design the error model Parameter m (kg) ρ (kg/m) E (GPa) a i (m) b i (m) δt i (N) Uncertainty range ± 18 ± 0.01015 ± 2 ± 0.015 ± 0.03 ± 15 Step 0.05 3*10 -5 0.05 0.0006 0.0012 0.1 number of observations 1.26 1.28 1.3 1.32 1.34 1.36 1.38 1.4 Static deflection RMS (mm)
24,960
[ "173154", "10659", "173925" ]
[ "25157", "481388", "25157" ]
01758205
en
[ "info" ]
2024/03/05 22:32:10
2018
https://inria.hal.science/hal-01758205/file/2018_tro_mani.pdf
A Direct Dense Visual Servoing Approach using Photometric Moments Manikandan Bakthavatchalam, Omar Tahri and Franc ¸ois Chaumette Abstract-In this paper, visual servoing based on photometric moments is advocated. A direct approach is chosen by which the extraction of geometric primitives, visual tracking and image matching steps of a conventional visual servoing pipeline can be bypassed. A vital challenge in photometric methods is the change in the image resulting from the appearance and disappearance of portions of the scene from the camera field of view during the servo. To tackle this issue, a general model for the photometric moments enhanced with spatial weighting is proposed. The interaction matrix for these spatially weighted photometric moments is derived in analytical form. The correctness of the modelling, effectiveness of the proposed strategy in handling the exogenous regions and improved convergence domain are demonstrated with a combination of simulation and experimental results. Index Terms-image moments, photometric moments, dense visual servoing, intensity-based visual servoing I. INTRODUCTION Visual servoing (VS) refers to a wide spectrum of closedloop techniques for the control of actuated systems with visual feedback [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. A task function is defined from a set of selected visual features, based on the currently acquired image I(t) and the reference image I ⇤ learnt from the desired robot pose. In a typical VS pipeline, the image stream is subjected to an ensemble of measurement processes, including one or more image processing, image matching and visual tracking steps, from which the visual features are determined. Based on the nature of the visual features used in the control law, VS methods can be broadly classified into geometric and photometric approaches. The earliest geometric approaches employ as visual features parameters observed in the image of geometric primitives (points, straight lines, ellipses, cylinders) [START_REF] Espiau | A new approach to visual servoing in robotics[END_REF]. These approaches are termed Image-based Visual Servoing (IBVS). In Pose-based Visual Servoing [START_REF] Wilson | Relative end-effector control using cartesian position based visual servoing[END_REF], geometric primitives are used to reconstruct the camera pose which is then used as input for visual servoing. These approaches are thus dependent on the reliable detection, extraction and subsequent tracking of the aforesaid primitives. While PBVS may be affected by instabilities in pose estimation, IBVS designed from image points may be subject to local minima, singularity, inadequate robot trajectory and limited convergence domain, when the six degrees of freedom are controlled and when the image error is large and/or when the robot has a large displacement to achieve to reach the desired pose [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. This is due to the Parts of this work have been presented in [START_REF] Bakthavatchalam | Photometric moments: New promising candidates for visual servoing[END_REF] and [START_REF] Bakthavatchalam | An improved modelling scheme for photometric moments with inclusion of spatial weights for visual servoing with partial appearance/disappearance[END_REF]. Manikandan Bakthavatchalam and Franc ¸ois Chaumette are with Inria, Univ Rennes, CNRS, IRISA, Rennes, France. 
e-mail: [email protected], [email protected] Omar Tahri is with INSA Centre Val de Loire, Université d'Orléans, PRISME EA 2249, Bourges, France. email: [email protected] strong non linearities and coupling in the interaction matrix of image points. To handle these issues, geometric moments were introduced for VS in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]- [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], which allowed obtaining a large convergence domain and adequate robot trajectories, thanks to the reduction of the non linearities and coupling in the interaction matrix of adequate combinations of moments. However, these methods are afflicted by a serious restriction: their dependency on the availability of well-segmented regions or a set of tracked and matched points in the image. Breaking this traditional dependency, the approach proposed in this paper embraces a more general class, known as dense VS, in which the extraction, tracking and matching of set of points or well-segmented regions is not necessary. In another suite of geometric methods, an homography and a projective homography are respectively used as visual features in [START_REF] Benhimane | Homography-based 2d visual tracking and servoing[END_REF] and [START_REF] Silveira | Direct visual servoing: Vision-based estimation and control using only nonmetric information[END_REF], [START_REF] Silveira | On intensity-based nonmetric visual servoing[END_REF]. These quantities are estimated by solving a geometric or photo-geometric image registration problem, carried out with non-linear iterative methods. However, these methods require a perfect matching of the template considered in the initial and desired images, which strongly limits their practical relevance. The second type of methods adopted the photometric approach by avoiding explicit geometric extraction and resorting instead to use the image intensities. A learning-based approach was proposed in [START_REF] Nayar | Subspace methods for robot vision[END_REF], where the intensities were transformed using Principal Component Analysis to a reduced dimensional subspace. But it is prohibitive to scale this approach to multiple degrees of freedom [START_REF] Deguchi | A direct interpretation of dynamic images with camera and object motions for vision guided robot control[END_REF]. The set of intensities in the image were directly used as visual features in [START_REF] Collewet | Photometric visual servoing[END_REF] but the high nonlinearity between the feature space and the state space limits the convergence domain of this method and does not allow obtaining adequate robot trajectories. This direct approach was later extended to omnidirectional cameras [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF] and to depth map [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF]. In this work, instead of using directly the raw luminance of all the pixels, we investigate the usage of visual features based on photometric moments. 
As it has been shown in [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] that considering geometric moments (built from a set of image points) provides a better behavior than considering directly a set of image points, we will show that considering photometric moments (built from the luminance of the pixels in the image) provides a better behavior than considering directly the luminance of the pixels. These moments are a specific case of the Kernel-based formulation in [START_REF] Kallem | Kernelbased visual servoing[END_REF] which synthesized controllers only for 3D translations and rotation around the optic axis. Furthermore, the analytical form of the interaction matrix of the features proposed in [START_REF] Kallem | Kernelbased visual servoing[END_REF] has not been determined, which makes impossible the theoretical sta-bility analysis of the corresponding control scheme. Different from [START_REF] Kallem | Kernelbased visual servoing[END_REF], the interaction matrix is developed in closed-form in this paper, and most importantly taking into account all the six degrees of freedom, which is the first main contribution of this work. It is shown that this is more general as well as consistent with the current state-of-the-art. Furthermore, an important practical (and theoretical) issue that affects photometric methods stem from the changes in the image due to the appearance of new portions of the scene or the disappearance of previously viewed portions from the camera field-of-view (FOV). This means that the set of measurements varies along the robot trajectory, with a potential large discrepancy between the initial and desired images, leading to an inconsistency between the set of luminances I(t) in the current image and the set I ⇤ in the desired image, and thus also for the photometric moments computed in the current and desired images. In practice, such unmodelled disturbances influence the system behaviour and may result in failure of the control law. Another original contribution of this work is an effective solution proposed to this challenging problem by means of a spatial weighting scheme. In particular, we determine a weighting function so that a closed-form expression of the interaction matrix can be determined. The main contributions of this paper lie in the modelling issues related to considering photometric moments as inputs of visual servoing and in the study of the improvements it brings with respect to the pure luminance method. The control scheme we have used to validate these contributions is a classical and basic kinematic controller [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. Let us note that more advanced control schemes, such as dynamic controllers [START_REF] Mahony | A port-Hamiltonian approach to imagebased visual servo control for dynamic systems[END_REF]- [START_REF] Wang | Adaptive visual tracking for robotic systems without imagespace velocity measurement[END_REF], could be designed from these new visual features. The sequel of the paper is organized as follows: in Section II, the modelling aspects of photometric moments and the associated weighting strategy are discussed in depth. In Section III, the visual features adopted and the control aspects are discussed. Sections IV and V are devoted to simulations and experimental results. Finally, the conclusions drawn are presented in Section VI. II. 
MODELLING Generalizing the classical definition of image moments, we define a weighted photometric moment of order (p + q) as: m pq = Z Z ⇡ x p y q w (x) I (x, t) dx dy (1) where x = (x, y) is a spatial point on the image plane ⇡ where the intensity I(x, t) is measured at time t and w(x) is a weight attributed to that measurement. By linking the variations of these moments to the camera velocity v c , the interaction matrix of the photometric moments can be obtained. ṁpq = L mpq v c (2) where L mpq = ⇥ L vx mpq L vy mpq L vz mpq L !x mpq L !y mpq L !z mpq ⇤ . Each L v/! mpq 2 R is a scalar with the superscripted v denoting translational velocity and ! the rotational velocity along or around the axis x, y or z axis of the camera frame. Taking the derivative of the photometric moments in (1), we have ṁpq = Z Z ⇡ x p y q w(x) İ(x, y) dx dy (3) The first step is thus to model the variations in the intensity İ(x, y) that appear in (3). In [START_REF] Collewet | Photometric visual servoing[END_REF] which aimed to use raw luminance directly as visual feature, the intensity variations were modelled using the Phong illumination model [START_REF] Phong | Illumination for computer generated pictures[END_REF] resulting in an interaction matrix with parts corresponding to the ambient and diffuse terms. In practice, use of light reflection models requires cumbersome measurements for correct instantiation of the models. Besides, a perfect model should take into account the type of light source, attenuation model and different possible configurations between the light sources, the vision sensor and the target object used in the scene. Since VS is robust to modelling errors, adding such premature complexity to the models can be avoided. Instead, this paper adopts a simpler and more practical approach by using the classical brightness constancy assumption [START_REF] Horn | Determining optical flow[END_REF] to model the intensity variations, as done in [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. This assumption considers that the intensity of a moving point x = (x, y) remains unchanged between successively acquired images. This is encapsulated in the following well-known equation I(x + x, t + t) = I(x, t) (4) where x is the infinitesimal displacement undergone by the image point after an infinitesimal increment in time t. A first order Taylor expansion of (4) around x leads to rI > ẋ + İ = 0 (5) known as the classical optic flow constraint equation (OFCE), where rI > = h @I @x @I @y i = ⇥ I x I y ⇤ is the spatial gradient at the image point x. Further, the relationship linking the variations in the coordinates of a point in the image with the spatial motions of a camera is well established [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]: ẋ = L x v c where L x =  1 Z 0 x Z xy (1 + x 2 ) y 0 1 Z y Z 1 + y 2 xy x (6) In general, the depth of the scene points can be considered as a polynomial surface expressed as a function of the image point coordinates [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. 1 Z = X p 0,q 0,p+qn A pq x p y q (7) where n is the degree of the polynomial with n = 1 for a planar scene. Equation ( 7) is a general form with the only assumption that the depth is continuous. In this work however, for simplifying the analytical forms presented, only planar scenes have been considered in the modelling 1 . 
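On a discrete pixel grid, the weighted photometric moment (1) reduces to a weighted sum over the image. The sketch below makes this explicit; the conversion from pixel indices to metric image-plane coordinates through the camera intrinsic parameters is only crudely emulated here and is an assumption of the example.

```python
import numpy as np

def photometric_moment(I, p, q, w=None, x=None, y=None):
    """Discrete approximation of the weighted photometric moment (1):
    m_pq = sum over the image of x^p y^q w(x, y) I(x, y).

    I    : grey-level image as a 2D array
    w    : weighting image of the same shape (None means w = 1 everywhere)
    x, y : meshgrids of image-plane coordinates; if omitted, centred pixel
           indices are used as a crude stand-in for metric coordinates.
    """
    h, wdt = I.shape
    if x is None or y is None:
        ys, xs = np.mgrid[0:h, 0:wdt]
        x = xs - wdt / 2.0          # in practice: metres, via the camera
        y = ys - h / 2.0            # intrinsic parameters
    if w is None:
        w = np.ones_like(I, dtype=float)
    return np.sum((x ** p) * (y ** q) * w * I.astype(float))

# Example: zeroth and first-order moments of a synthetic image
img = np.zeros((240, 320)); img[100:140, 150:210] = 255.0
m00 = photometric_moment(img, 0, 0)
xg = photometric_moment(img, 1, 0) / m00   # centre of gravity along x
yg = photometric_moment(img, 0, 1) / m00
print(m00, xg, yg)
```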
We will see in Section V-D that this simplification is not crucial by considering non planar environments. Therefore, with n = 1, (7) becomes 1 Z = Ax + By + C (8) where A(= A 10 ), B(= A 01 ), C(= A 00 ) are scalar parameters that describe the configuration of the plane in the camera frame. From (5), we can write: İ(x, y) = rI > ẋ (9) By plugging ( 8) and ( 6) in [START_REF] Benhimane | Homography-based 2d visual tracking and servoing[END_REF], we obtain İ(x, y) = L I v c = rI > L x v c ( 10 ) where L I = rI > L x is given by: L > I = 2 Substituting ( 10) into (3), we see that ṁpq = Z Z ⇡ x p y q w(x) L I v c dx dy (12) By comparing with (2), we can then identify and write down the interaction matrix of the photometric moments as L mpq = Z Z ⇡ x p y q w(x) L I dx dy (13) Direct substitution of (11) into the above equation gives us L vx mpq = Z Z ⇡ x p y q w(x)I x (Ax + By + C) dx dy L vy mpq = Z Z ⇡ x p y q w(x)I y (Ax + By + C) dx dy L vz mpq = Z Z ⇡ x p y q w(x)( xI x yI y )(Ax + By + C) dx dy L !x mpq = Z Z ⇡ x p y q w(x)( xyI x (1 + y 2 )I y ) dx dy L !y mpq = Z Z ⇡ x p y q w(x)((1 + x 2 )I x + xyI y ) dx dy L !z mpq = Z Z ⇡ x p y q w(x)(xI y yI x ) dx dy We see that the interaction matrix consists of a set of integrodifferential equations. For convenience and fluidity in the ensuing developments, the following compact notation is introduced. m rx pq = Z Z ⇡ x p y q w(x) I x dx dy (14a) m ry pq = Z Z ⇡ x p y q w(x) I y dx dy (14b) Each component of the interaction matrix in ( 13) can be easily re-arranged and expressed in terms of the above compact notation as follows: L vx mpq = A m rx p+1,q + B m rx p,q+1 + C m rx p,q L vy mpq = A m ry p+1,q + B m ry p,q+1 + C m ry p,q L vz mpq = A m rx p+2,q B m rx p+1,q+1 C m rx p+1,q A m ry p+1,q+1 B m ry p,q+2 C m ry p,q+1 L !x mpq = m rx p+1,q+1 m ry p,q m ry p,q+2 L !y mpq = m rx p,q + m rx p+2,q + m ry p+1,q+1 L !z mpq = m rx p,q+1 + m ry p+1,q (15) The terms m rx pq and m ry pq have to be evaluated to arrive at the interaction matrix. This in turn would require the computation of the image gradient terms I x and I y , an image processing step performed using derivative filters, which might introduce an imprecision in the computed values. In the following, it is shown that a clever application of the Green's theorem can help subvert the image gradients computation. The Green's theorem helps to compute the integral of a function defined over a subdomain ⇡ of R 2 by transforming it into a line (curve/contour) integral over the boundary of ⇡, denoted here as @⇡: Z Z ⇡ ( @Q @x @P @y )dx dy = I @⇡ P dx + I @⇡ Qdy (16) With suitable choices of functions P and Q, we aim to transform the terms m rx pq and m ry pq . To compute m rx pq , we let Q = x p y q w(x) I(x) and P = 0. 
We have @P @y = 0 and @Q @x = px p 1 y q w(x)I(x)+x p y q @w @x I(x)+x p y q w(x)I x [START_REF] Kallem | Kernelbased visual servoing[END_REF] Substituting this back into [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF], we can write Z Z ⇡ h p x p 1 y q w(x)I(x) + x p y q @w @x I(x) + x p y q w(x) I x i dxdy = I @⇡ x p y q w(x) I(x)dy (18) Recalling our compact notation in (14a) and rearranging [START_REF] Mahony | A port-Hamiltonian approach to imagebased visual servo control for dynamic systems[END_REF], we obtain m rx pq = Z Z ⇡ ⇣ p x p 1 y q w(x)I(x) + x p y q @w @x I(x) ⌘ dx dy + I @⇡ x p y q w(x)I(x)dy Applying (1) to the first term in the RHS, we have m rx pq = p m p 1,q Z Z ⇡ x p y q @w @x I(x) dx dy + I @⇡ x p y q w(x) I(x) dy In the same manner, the computation of the term m ry pq is again simplified by employing the Green's theorem with P = x p y q w(x) I(x) and Q = 0. m ry pq = q m p,q 1 Z Z ⇡ x p y q @w(x, y) @y I(x) dx dy I @⇡ x p y q w(x) I(x) dx (20) The results ( 19) and ( 20) are generic, meaning there are no explicit conditions on the weighting except that the function is differentiable. Clearly, depending on the nature of the weighting chosen for the measured intensities in (1), different analytical results can be obtained. In the following, two variants of the interaction matrix are developed corresponding to two different choices for the spatial weighting. A. Uniformly Weighted Photometric Moments (UWPM) First, the interaction matrix is established by attributing the same importance to all the measured intensities on the image plane. These moments are obtained by simply fixing w(x, t) = 1, 8 x 2 ⇡ leading to @w @x = @w @y = 0. Subsequently, ( 19) and ( 20) get reduced to 8 < : m rx pq = p m p 1,q + H @⇡ x p y q I(x, y) dy m ry pq = q m p,q 1 H @⇡ x p y q I(x, y) dx (21) The second terms in m rx pq and m ry pq are contour integrals along @⇡. These terms represent the contribution of information that enter and leave the image due to camera motion. They could be evaluated directly but for obtaining simple closedform expressions, the conditions under which they vanish are studied. Let us denote I @⇡ = H @⇡ x p y q I(x, y) dy. The limits y = y m and y = y M are introduced at the top and bottom of the image respectively (see Fig 1a). Since y(= y M ) is constant along C1 and y(= y m ) is constant along C3, it is sufficient to integrate along C2 and C4. Along C 2 , y varies from y M to y m while x remains constant at x M . Along C 4 , y varies from y m to y M while x remains constant at x m . Introducing these limits, we get I @⇡ = x p M ym Z y M y q I(x M , y)dy + x p m y M Z ym y q I(x m , y)dy If I(x M , y) = I(x m , y) = I, 8y, then we have I @⇡ = (x p M x p m ) I ym Z y M y q dy Since we want I @⇡ = 0, the only solution is to have I = 0, that is when the acquired image is surrounded by a uniformly colored black2 background. This assumption, named information persistence (IP) was already implicitly done in [START_REF] Kallem | Kernelbased visual servoing[END_REF] [START_REF] Swensen | Empirical characterization of convergence properties for kernel-based visual servoing[END_REF]. It does not need not be strictly enforced. In fact, mild violations of the IP assumption were deliberately introduced in experiments (refer IV-B) and this was quite acceptable in most cases, as evidenced by our results. 
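The practical payoff of the Green's theorem step can be checked numerically: for a uniform weighting and an image surrounded by a black border (so that the contour integral vanishes), gradient moments computed with derivative filters should coincide, up to discretization error, with the closed forms of (22). A small sketch, with a synthetic image and an assumed grid construction:

import numpy as np

def moment(I, p, q, x, y):
    dA = (x[0, 1] - x[0, 0]) * (y[1, 0] - y[0, 0])
    return np.sum((x ** p) * (y ** q) * I * dA)

def gradient_moment_x(I, p, q, x, y):
    # m^gradx_pq evaluated directly with finite-difference image gradients
    dy, dx = y[1, 0] - y[0, 0], x[0, 1] - x[0, 0]
    Iy, Ix = np.gradient(I, dy, dx)
    return np.sum((x ** p) * (y ** q) * Ix * dx * dy)

h, w, f = 480, 640, 800.0
u, v = np.meshgrid(np.arange(w), np.arange(h))
x, y = (u - w / 2.0) / f, (v - h / 2.0) / f
I = np.zeros((h, w))
I[100:380, 150:490] = np.random.rand(280, 340)  # content kept away from the border (IP assumption)

p, q = 2, 1
direct = gradient_moment_x(I, p, q, x, y)
closed = -p * moment(I, p - 1, q, x, y)  # Green's theorem form of (22): m^gradx_pq = -p m_(p-1,q)
# direct and closed should agree up to the error of the derivative filter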
This assumption gets naturally eliminated when appropriate weighting functions are introduced in the moments formulation as shown in II-B. Substituting ( 22) into (15), we get the final closed form expression for the interaction matrix. L vx mpq = A(p + 1)m pq Bpm p 1,q+1 Cpm p 1,q L vy mpq = Aqm p+1,q 1 B(q + 1)m p,q Cqm p,q 1 L vz mpq = A (p + q + 3) m p+1,q + B(p + q + 3) m p,q+1 + C(p + q + 2) m pq L !x mpq = q m p,q 1 + (p + q + 3) m p,q+1 L !y mpq = p m p 1,q (p + q + 3) m p+1,q L !z mpq = p m p 1,q+1 q m p+1,q 1 (23) The interaction matrix in ( 23) has a form which is exactly identical to those developed earlier for the geometric moments [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. A consistency with previously developed results is thus observed even though the method used for the modelling developments differ completely from [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. Consequently, all the useful results available in the state of the art with regards to the developments of visual features [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] [8] are applicable as they are for the proposed photometric moments. Unlike [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF], the image gradients do not appear anymore in the interaction matrix. Their computation is no longer necessary. The developments presented have led to the elimination of this image processing step required by pure luminance-based visual servoing [START_REF] Collewet | Photometric visual servoing[END_REF]. The computation of the interaction matrix is now reduced to a simple and straight-forward computation of the moments on the image plane. Note also that in order to calculate L mpq , only moments of order upto p + q + 1 are required. In addition, we note that as usual in IBVS, the interaction matrix components corresponding to the rotational degrees of freedom are free from 3D parameters. B. Weighted Photometric Moments (WPM) In order to remove the IP assumption we do not attribute anymore an equal contribution to all the measured intensities (w(x) 6 = 1, 8x 2 @⇡), as was done in Sec II-A. Instead, a lesser importance is attributed to peripheral pixels, on which the appearance and disappearance effects are pronounced. To achieve this, the spatial weighting function is made to attribute maximal importance to the pixels in the area around the image center and smoothly reducing it radially outwards towards 0 at the image periphery. If w(x, y) = 0, 8x 2 @⇡, this still ensures I @⇡ = 0 obviating the need to have any explicit IP assumption anymore. Weighting scheme: The standard logistic function l(x) = 1 1+e x smoothly varies between 0 and 1 and has simple derivatives. It is a standard function that is used in machine learning. However, if used to design w(x), it is straight-forward to check that the interaction matrix cannot be expressed as functions of the weighted photometric moments. To achieve this, we propose to use functions with the general structure: F(x) = K exp p(x) (24) with p(x) = a 0 + a 1 x + 1 2 a 2 x 2 + 1 3 a 3 x 3 + ... + 1 n a n x n . Indeed, functions of this structure possess the interesting property that their derivatives can be expressed in terms of the function itself. It is given by: F 0 (x) = K exp p(x) p 0 (x) = p 0 (x)F(x) with p 0 (x) = a 1 + a 2 x + a 3 x 2 + ... + a n x n 1 . 
In line with the above arguments, we propose the following custom exponential function (see Fig 1b) w(x, y) = K exp a(x 2 +y 2 ) 2 ( 25 ) where K is the maximum value that w can attain and a can be used to vary the area which receives maximal and minimal weights respectively. This choice allows the interaction matrix to be obtained directly in closed-form as a function of the weighted moments. Therefore, no additional computational overheads are introduced since nothing other than weighted moments upto a specific order are required. In addition, the symmetric function to which the exponential is raised ensures that the spatial weighting does not alter the behaviour of weighted photometric moments to planar rotations. The spatial derivatives of (25) are as follows: 8 > < > : @w @x = 4ax(x 2 + y 2 ) w(x) @w @y = 4ay(x 2 + y 2 ) w(x) (26) Substituting ( 26) into ( 19) and ( 20), we obtain ⇢ m rx pq = p m p 1,q + 4a (m p+3,q + m p+1,q+2 ) m ry pq = q m p,q 1 + 4a (m p,q+3 + m p+2,q+1 ) (27) By combining (27) with the generic form in [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF], the interaction matrix of photometric moments w L mpq weighted with the radial function ( 25) is obtained. w L mpq = h w L vx mpq w L vy mpq w L vz mpq w L !x mpq w L !y mpq w L !z mpq i (28) with w L vx mpq = L vx mpq + 4 a A (m p+4,q + m p+2,q+2 ) + 4 a B (m p+3,q+1 + m p+1,q+3 ) + 4 a C (m p+3,q + m p+1,q+2 ) w L vy mpq = L vy mpq + 4 a A (m p+3,q+1 + m p+1,q+3 ) + 4 a B (m p,q+4 + m p+2,q+2 ) + 4 a C (m p,q+3 + m p+2,q+1 ) w L vz mpq = L vz mpq 4 a A (m p+5,q + 2m p+3,q+2 + m p+1,q+4 ) 4 a B (m p+4,q+1 + 2m p+2,q+3 + m p,q+5 ) 4 a C (m p+4,q + 2m p+2,q+2 + m p,q+4 ) w L !x mpq = L !x 1mpq 4 a(m p+4,q+1 + 2 m p+2,q+3 + m p,q+3 + m p+2,q+1 + m p,q+5 ) w L !y mpq = L !y 1mpq + 4 a(m p+3,q + m p+1,q+2 + m p+5,q + 2 m p+3,q+2 + m p+1,q+4 ) w L !z mpq = L !z mpq = pm p 1,q+1 qm p+1,q 1 We note that the interaction matrix can be expressed as a matrix sum w L mpq = L mpq + 4aL w (29) where L mpq has the same form as [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. We note however that the moments are now computed using the weighting function in [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]. The matrix L w is tied directly to the weighting function. Of course if a = 0 which means w(x) = 1, 8x 2 ⇡, we find w L mpq = L mpq . To compute L mpq , moments of order upto (p + q + 1) are required whereas L w is a function of moments m tu , where t + u  p + q + 5. This is in fact a resultant of the term (x 2 + y 2 ) 2 to which the exponential is raised (see [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]). On observation of the last component of w L mpq , we see that it does not contain any new terms when compared to [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. That is, the weighting function has not induced any extra terms, thus retaining the invariance of the classical moment invariants to optic axis rotations. This outcome was of course desired from the symmetry of the weighting function. On the other hand, if we consider the other five components, additional terms are contributed by the weighting function. 
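The weighting (25), its closed-form derivatives (26) and the resulting gradient-moment identities (27) are all cheap to evaluate. The sketch below illustrates them, reusing the grid construction assumed earlier; the minus signs follow from the decaying shape of w shown in Fig. 1b.

import numpy as np

def radial_weight(x, y, K=1.0, a=650.0):
    # w(x,y) = K exp(-a (x^2 + y^2)^2), Eq. (25)
    r2 = x ** 2 + y ** 2
    w = K * np.exp(-a * r2 ** 2)
    wx = -4.0 * a * x * r2 * w   # dw/dx, Eq. (26), expressed with w itself
    wy = -4.0 * a * y * r2 * w   # dw/dy
    return w, wx, wy

def weighted_moment(I, p, q, x, y, w):
    dA = (x[0, 1] - x[0, 0]) * (y[1, 0] - y[0, 0])
    return np.sum((x ** p) * (y ** q) * w * I * dA)

def grad_moment_x_wpm(I, p, q, x, y, w, a=650.0):
    # Eq. (27): m^gradx_pq = -p m_(p-1,q) + 4a (m_(p+3,q) + m_(p+1,q+2)),
    # with weighted moments throughout (use the same a as in radial_weight)
    m = lambda pp, qq: weighted_moment(I, pp, qq, x, y, w)
    first = -p * m(p - 1, q) if p > 0 else 0.0
    return first + 4.0 * a * (m(p + 3, q) + m(p + 1, q + 2))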
As a result, moment polynomials developed from the classical moments [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] will not be invariant to translational motions when used with WPM. Thus, there is a need to develop new invariants for use with WPM such that they would retain their invariance to translations. This is an open problem that is not dealt with in this paper. Finally and as usual, the components of the interaction matrix corresponding to the rotational motions are still free from any 3D parameters. Weighted photometric moments allow visual servoing on scenes prone to appearance and disappearance effects. Moreover, the interaction matrix has been developed in closed-form in order to facilitate detailed stability and robustness analyses. The above developments would be near-identical for other weighting function choices of the form given by ( 24) [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]. III. VISUAL FEATURES AND CONTROL SCHEME The photometric moments are image-based measurements m(t) = (m 00 (t), m 10 (t), m 01 (t), ...) obtained from the image I(t). To control n ( 6) degrees of freedom of the robot, a large set of k (> n) individual photometric moments could be used as input s to the control scheme: s = m(t). However, this would lead to redundant features, for which it is well known that, at best, only the local asymptotic stability can be demonstrated [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. That is why we prefer to use the same strategy as in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]- [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], that is, from the set of available measurements m(t), we design a set of n visual features s = s(m(t)) so that L s is of full rank n and has nice decoupling properties. The interaction matrix L s can easily be obtained from the matrices L mpq 2 R 1⇥6 modelled in the previous section. Indeed, we have: L s = @s @m L m ( 30 ) where L m is the matrix obtained by stacking the matrices L mpq . Then, the control scheme with the most basic and classical form has been selected [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]: v c = c L s 1 (s s ⇤ ) (31) where s ⇤ = s(m ⇤ ) and c L s is an estimation or an approximation of L s . Such an approximation or estimation is indeed necessary since, as detailed in the previous section, the translational components of L mpq are function of the 3D parameters A pq describing the depth map of the scene. Classical choices are c L s = L s (s(t), b Z(t)) where Z = (A, B, C) when an estimation of Z is available, c L s = L s (s(t), c Z ⇤ ), or even c L s = L s (s ⇤ , c Z ⇤ ). Another classical choice is to use the mean c L s = 1 2 ⇣ L s (s(t), b Z(t)) + L s (s ⇤ , c Z ⇤ ) ⌘ or c L s = 1 2 ⇣ L s (s(t), c Z ⇤ ) + L s (s ⇤ , c Z ⇤ ) ⌘ since it was shown to be efficient for very large camera displacements [START_REF] Malis | Improving vision-based control using efficient second-order minimization techniques[END_REF]. 
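In code, the velocity command of (31) is a one-line solve once the feature vector and an approximation of its interaction matrix are available. The sketch below uses placeholder matrices and an illustrative gain; it simply mirrors the choices of the estimated interaction matrix listed above and is not tied to a particular feature set.

import numpy as np

def control_velocity(s, s_star, Ls_hat, lam=1.0):
    # Classical law (31): v_c = -lambda * Ls_hat^{-1} (s - s*)
    e = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    return -lam * np.linalg.solve(Ls_hat, e)

# Example of the "mean" choice discussed above (placeholder matrices, illustrative gain)
Ls_current, Ls_desired = np.eye(6), np.eye(6)
Ls_hat = 0.5 * (Ls_current + Ls_desired)
v_c = control_velocity(np.random.rand(6), np.zeros(6), Ls_hat, lam=1.5)

In practice the gain and the choice of the estimated interaction matrix are the only tuning knobs; everything else follows from the feature design.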
With such a control scheme, it is well known that the global asymptotic stability (GAS) of the system in the Lyapunov sense is ensured if the following sufficient condition holds [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]: L s c L s 1 > 0 (32) Of course, in case c L s = L s , the system is GAS if L s is never singular, and a perfect decoupled exponential decrease of the error s s ⇤ is obtained. Such a perfect behavior is not obtained as long as c L s 6 = L s , but the error norm will decrease and the system will converge if condition (32) is ensured. This explains the fact that a non planar scene can be considered in practice (see Section V-D), even if the modelling developed in the previous section was limited to the planar case. A. Control of SCARA motions Photometric moments-based visual features can be used to control not only the subset of SE(3) motions considered in [START_REF] Kallem | Kernelbased visual servoing[END_REF] but also full 6 dof motions. In the former case, the robot is configured for SCARA (3T+1R, n = 4) type actuation to control only the planar translation, translation along the optic axis and rotation around the optic axis. The camera velocity is thus reduced to v cr = (v x , v y , v z , ! z ). Similarly to [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF], the following set of 4 visual features is used to control these 4 dofs. s r = (x n , y n , a n , ↵) where x n = x g a n , y n = y g a n , a n = Z ⇤ q m ⇤ From the simple relations between s r and m pq , (p + q < 3), it is quite simple to determine the analytical form of the interaction matrix L sr using (30). When the target is parallel to the image plane (A = B = 0), the following sparse matrix is obtained for UWPM. L sr = 2 6 6 4 L xn L yn L an L ↵ 3 7 7 5 = 2 6 6 4 1 0 0 y n 0 1 0 x n 0 0 1 0 0 0 0 1 3 7 7 5 (35) Let us note that the current value of the depth does not appear anywhere in L sr and only the desired value Z ⇤ intervenes indirectly through x n and y n , and thus in L sr . This nice property and the sparsity in (35) justify the choice of s r . Following the line of analysis at the start of this section, we infer that the control law using d L sr = L sr (s r (t), Z ⇤ ) is GAS since L sr is always of full rank 4 and L sr d L sr 1 = I 4 when c Z ⇤ = Z ⇤ . Let us now consider the more general case where c Z ⇤ 6 = Z ⇤ . From (35), it is straight-forward to obtain L s c L s 1 = 2 6 6 4 1 0 0 Y 0 1 0 X 0 0 1 0 0 0 0 1 3 7 7 5 (36) where Y = ( b Z ⇤ Z ⇤ 1)y n and X = (1 b Z ⇤ Z ⇤ )x n . The eigen values of the symmetric part of the above matrix product are given by = {1, 1, 1 ± p X 2 +Y 2 2 }. For (32) to hold, all eigen values have to be positive, that is, p X 2 +Y 2 2 < 1 , X 2 + Y 2 < 4. Back-substitution of X and Y yields the following bounds for system stability: 1 2 p x 2 n + y 2 n < b Z ⇤ Z < 1 + 2 p x 2 n + y 2 n ( 37 ) which are easily ensured in practice since x n and y n are small (0.01 typically). Let us now consider the case where c L s = I 4 , which is a coarse approximation. In that case, we obtain L s c L s 1 = 2 6 6 4 1 0 0 y n 0 1 0 x n 0 0 1 0 0 0 0 1 3 7 7 5 (38) Then, proceeding as previously leads to the following condition for GAS x 2 n + y 2 n < 4 (39) which, once again, is always ensured in practice. Note that these satisfactory theoretical results have not been reported previously and are an original contribution of this work. 
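The SCARA feature set (33) only needs moments up to order two; the sketch below builds it from raw moments passed as a dictionary keyed by (p, q). The sign convention written in the parallel-plane interaction matrix is the usual one for these normalized features and is an assumption here, since the signs are not legible in the extracted form of (35).

import numpy as np

def scara_features(m, m00_star, Z_star):
    # s_r = (x_n, y_n, a_n, alpha) of (33)
    m00 = m[(0, 0)]
    xg, yg = m[(1, 0)] / m00, m[(0, 1)] / m00
    a_n = Z_star * np.sqrt(m00_star / m00)
    x_n, y_n = xg * a_n, yg * a_n
    mu20 = m[(2, 0)] - m00 * xg ** 2       # centred moments used for alpha
    mu02 = m[(0, 2)] - m00 * yg ** 2
    mu11 = m[(1, 1)] - m00 * xg * yg
    alpha = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)   # arctan2 keeps alpha defined when mu20 ~ mu02
    return np.array([x_n, y_n, a_n, alpha])

def L_sr_parallel(x_n, y_n):
    # Sparse interaction matrix (35) for (v_x, v_y, v_z, w_z), target parallel to the image plane
    return np.array([[-1.0,  0.0,  0.0,  y_n],
                     [ 0.0, -1.0,  0.0, -x_n],
                     [ 0.0,  0.0, -1.0,  0.0],
                     [ 0.0,  0.0,  0.0, -1.0]])

Raw moments can be obtained with the discrete evaluation sketched earlier; only the desired depth Z* appears, consistently with the remark above that the current depth is not needed.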
Unfortunately, exhibiting similar conditions for the WPM case is not so easy since the first three columns of L sr are not as simple as (35) due to the loss of invariance property of WPM. B. 6 dof control To control all the 6 dof, two more features in addition to (33) are required. In moments-based VS methods, these features are chosen as ratios of moment polynomials which are invariant to 2D translations, planar rotation and scale. In [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF], [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], several moment invariants-based visual features have been introduced. In principle, all these previous results could be adopted for use with the photometric moments proposed in this work. Certainly, an exhaustive exploration of all these choices is impractical. Based on several simulations and experimental convergence trials (see [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]), the following visual feature introduced in [START_REF] Tahri | Visual servoing based on shifted moments[END_REF] was selected: r = 1 / 2 (40) with ⇢ 1 = 3μ 30 μ12 + μ2 30 + 3μ 03 μ21 + μ2 03 2 = μ30 μ12 + μ2 21 μ03 μ21 + μ2 12 ( 41 ) where μpq is the shifted moment of order p + q with respect to shift point x sh (x sh , y sh ) defined by [START_REF] Tahri | Visual servoing based on shifted moments[END_REF]: μpq = Z Z (x x g + x sh ) p (y y g + y sh ) q w(x)I(x) dx dy To sum up, the shifted moments in (42) are computed with respect to P 1 and P 2 , resulting in two different sets of shifted moments. Then, the feature in (40) is computed employing these two sets of moments to derive two corresponding visual features r P1 and r P2 . Therefore, the following set of visual features for controlling the 6 dof is obtained: s = (x n , y n , a n , r P1 , r P2 , ↵) (44) The interaction matrix developments of r P1 and r P2 are provided in Appendix A. When UWPM are used, the interaction matrix L s exhibits the following sparse structure when the sensor and target planes are parallel. The matrix E is non-singular if its left 2 ⇥ 2 submatrix has a non-zero determinant. When the interaction matrix is computed with moments from shift points (P 1 6 = P 2 ) as described above, this condition is effortlessly ensured. As a result, the interaction matrix L || s is non-singular [START_REF] Tahri | Visual servoing based on shifted moments[END_REF]. On the other hand, when the features are built from WPM, the sparsity in (45) cannot be achieved anymore. This is because L s has a more complex form, except for its last column which remains exactly the same (since behaviour with respect to optic axis rotations is not altered). Nevertheless, the obtained results were quite satisfactory for a variety of scenes and camera displacements, as shown in the next section. L || s =  I 3 D 0 3 E ( IV. VALIDATION RESULTS FOR UWPM A. Modelling Validation and Comparison to Pure Luminance In this section, simulation results of 6 dof positioning tasks are presented to demonstrate the correctness of the modelling of the UWPM proposed in Sec.II-A and to compare their behavior to pure luminance. The initial and desired images are shown in Figs. 3a and3b respectively. The background is empty without appearance or disappearance of scene portions in the camera view. The initial pose is chosen far away from the desired one such that the image overlap is small. 
The displacements required for convergence are a translation of t = [1.0m, 1.0m, 1.0m] and a rotation of R = [25 , 10 , 55 ]. The control law in (31) is used with c L s = L s (s(t), Z(t)). This control law is expected to result in a pure exponential decrease of the errors to 0. In simulation, the depths Z(t) are readily available from ground truth and need not be estimated. A gain of = 1.0 was used for this experiment. As seen from Fig 3c, a perfect exponential decrease of the errors is indeed obtained as expected. Furthermore, the camera traces a straight-forward path to the goal pose as shown in Fig 3d . This demonstrates the validity of the modelling steps and the design of the visual features. Let us note that no image processing (image matching or visual tracking) were used with the photometric moments in the reported experiments. Comparison to pure luminance: Then, the same control law configuration was tested using pure luminance directly as visual feature, that is using v B. Experimental Results with UWPM Experiments were performed at video rate on a Viper850 6 dof robot. Unlike in Sec IV-A, mild violations of the IP assumption are deliberately allowed. The photometric moments are tested first on SCARA-type motions and then with 6 dof. 1) SCARA motions: For this experiment, the features in (33) are used with their current interaction matrix b L s = c L s (s(t), c Z ⇤ ), with c Z ⇤ = (0, 0, 1/ Ẑ⇤ ), Ẑ⇤ roughly approximated with depth value at the desired pose. A gain of = 1.5 was used. The desired image is shown in Figure 5b. The initial pose is chosen such that the image in 5a is observed by the camera. The target is placed such that very small portions of its corners are slightly outside the field of view (see Fig 5a). Furthermore, the background is not perfectly black, thereby non-zero. It can be observed from Fig 6c that the decrease in errors is highly satisfactory while we recall that only the interaction matrix at the desired configuration and approximate depth were employed. The generated velocity profiles are also smooth as shown in Fig. 6d. Clearly, the camera spatial trajectory is close to a geodesic as shown in Figure IV-B2. Further, an accuracy of [ 0.56mm, 0.08mm, 0.14mm] in translation and [ 0.01 , 0.04 , 0.03 ] in rotation was obtained. The above experimental results showed results with UWPM where there are only mild violations of the IP assumption. Next, we show results on more general scenes with WPM where this restrictive assumption (black background) has been eliminated. V. VALIDATION RESULTS FOR WPM For all the experiments presented in this section, the parameter K = 1 is fixed, so maximum weight a pixel can have is 1. Then, a is chosen with a simple heuristic, that 40% of the image pixels will be assigned a weight greater than 0.5 and around 90% a weight greater than 0.01. This is straightforward to compute from the definition of w(x, y). For an image resolution of 640 ⇥ 480 for example, with K = 1, a = 650 satisfies the above simple heuristic. The surface of w(x, y) with these parameters is depicted in Fig. 1b. Let us note that the tuning of these parameters is not crucial. In our case, changing a by ±200 does not introduce any drastic changes in the results. A. Validation of WPM In this section, the modelling of WPM is validated using 6 dof positioning tasks in simulation. No specific backgrounds are considered anymore since the WPM designed in Section II-B are equipped to handle such scenarios. 
Comparisons to both the pure luminance feature and to moments without the weighting strategy are made. The image learnt from the desired pose is shown in Fig 7b . In the image acquired from the initial robot pose (see Fig 7a ), a large subset of pixels not present in the desired image have appeared. In fact, there is no clear distinction of which pixels constitute the background. These scenarios are more representative of camera-mounted robotic arms interacting with real world objects. For the control, the set of visual features (44) is adopted with the current interaction matrix L s (s(t), Z(t)). The depths are not estimated but available from the ground truth data. A gain of = 1.5 was used for all the experiments. The resulting behaviour is very satisfactory. The errors in the visual features decrease exponentially as shown in Figures 7c and7d. This confirms the correctness of the modelling steps used to obtain the interaction matrix of WPM. Naturally, the successful results also imply the correctness of the visual features obtained from the weighted moments. Comparison with UWPM: For the comparison, the same experiment is repeated with the same control law but without the weighting strategy. In this case, the errors appear to decrease initially (see Figs 8a and8b). However, after about 25 iterations the system diverges (see Fig 8c) and the servo is stopped after few iterations. As expected, the system in this case is clearly affected by the appearance and disappearance of parts of the scene. Comparison to pure luminance: Next, we also compared the WPM with the pure luminance feature. Also in this case, the effect of the extraneous regions is severe and the control law does not converge to the desired pose. The generated velocities do not regulate the errors satisfactorily (see Fig 8d). The error This can be compared to the case of the WPM where the error norm decreases exponentially as shown in Figure 8e. Also, as mentioned previously, the visual features are redundant and there is no mapping of individual features to the actuated dof. The servoing behaviour depends on the profile of the cost function, which is dependent on all the intensities in the acquired image. The appearance and disappearance of scene portions thus also affects the direct visual servoing method. Thus, we see that the extraneous regions have resulted in the worst case effect namely non-convergence to the desired pose in both the UWPM as well as when using the pure luminance. Next, we discuss results obtained from servoing on a scene different from the one used in this experiment. B. Robustness to large rotations In this simulation, we consider 4 dof and very large displacements such that large scene portions enter and leave the camera field of view (see Figures 9a and9b). A rotation of 100 around the optic axis and translational displacement of c⇤ t c = [5cm, 4cm, 25cm] are required for convergence. For this experiment, the VS control law in (31) with the features in (33) is used with a gain of = 2. For this difficult task, the mean c has been selected in the control scheme. Note that the depths are not updated at each iteration and only approximated using Z ⇤ = 1. This choice was on purpose to show that online depth estimation is not necessary and an approximation of its value at the desired pose is sufficient for convergence. The visual servoing converged to the desired pose with an accuracy of 0.29 in rotation and [ 0.07mm, 0.48mm, 0.61mm] in translation. The control velocities generated are shown in Fig. 
9d and the resulting Cartesian trajectories are shown in Fig. 9e. This experiment demonstrates the robustness of the WPM to very large displacements even when there is appearance and disappearance of huge parts of the image. This affirms also that the convergence properties are improved with the proposed WPM. L s = 1 2 ⇣ L s (s(t), c Z ⇤ ) + L s (s ⇤ , c Z ⇤ ) ⌘ ( C. Empirical Convergence Analysis In this section, we compare through simulations the convergence domain of WPM with pure luminance and UWPM. For this, we considered the 4dof case as in [START_REF] Swensen | Empirical characterization of convergence properties for kernel-based visual servoing[END_REF]. Artificially generated synthetic scenes in which polygonal blocks are sprinkled at the image periphery were employed. As seen from Fig 10, this allows to simulate in varying degrees the appearance and disappearance of scene portions in the camera FOV. For this analysis, the desired pose to be attained is fixed at 1.8m. Positioning tasks starting from 243 different initial poses consisting of 3 sets of 81 poses each, conducted at 3 different depths of 1.8m, 1.9m and 2.0m were considered. In all these initial poses, the camera is subjected to a rotation of 25 around the optic axis while the x and y translations vary from 0.2m to 0.2m. The interaction matrix c L s = L s (s ⇤ , c Z ⇤ ) is chosen in the control scheme, just like in previous works on convergence analysis [START_REF] Collewet | Photometric visual servoing[END_REF] [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF]. We consider an experiment to have converged if the task error kek is reduced to less than 1e 10 in a maximum of 300 iterations. In addition to this condition, we also impose that the SSD error defined by e SSD = P x [I(x) I ⇤ (x)] 2 /N pix between the final and learnt images is less than 1.0. This criterion ensures that a non-desired equilibrium point is not considered wrongly as converged. In the reported converged experiments, the final accuracy in pose is less than 1mm for translations and less than 1 for the planar rotation. The UWPM met with failure in all the cases. No segmentation or thresholding is employed and the servo is subjected to appearance and disappearance effects at the image periphery. A dismal performance resulted as expected without the weighting strategy since the model is not equipped to handle the energy inflow and outflow at respect to UWPM, the same set of experiments was repeated using a dense texture (see Fig. 11), where the WPM yield a better result than non-weighted moments. The non-weighted moments have converged on an average only in 55% of the cases. Also note that this is different from the synthetic case at 0%, that is they were completely unable to handle the entry and exit of extraneous regions. In comparison, for WPM, only 3 cases failed to converge out of 243 total runs with a very satisfactory convergence rate of 98%. In fact, in the first two sets of experiments, WPM converged for all the generated poses yielding a 100% convergence rate. No convergence to any undesired equilibrium points were observed, thanks to the textured object. The final accuracies for all the converged experiments was less than 1mm in translation and less then 1 in rotation. 
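The two-part convergence criterion used in this analysis is simple to reproduce; a sketch, with array shapes and types assumed:

import numpy as np

def has_converged(task_error_norms, I_final, I_star, max_iter=300, err_tol=1e-10, ssd_tol=1.0):
    # Converged if ||e|| drops below err_tol within max_iter iterations, and if the SSD
    # between final and learnt images stays below ssd_tol (rejects undesired equilibria)
    if len(task_error_norms) > max_iter or task_error_norms[-1] >= err_tol:
        return False
    ssd = np.sum((I_final.astype(float) - I_star.astype(float)) ** 2) / I_final.size
    return ssd < ssd_tol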
Based on the clear improvements in convergence rate, we conclude that WPM are effective as a solution to the problem of extraneous image regions and result in a larger convergence domain in comparison to classical nonweighted moments. We have finally to note that for larger lateral displacements, all methods fail since initial and desired images do not share sufficient common information. D. Robustness to non planar environments In this section, visual servoing with WPM is demonstrated on a non planar scene with the Viper850 robot by considering 4 dof as previously. A realistic scenario is emulated by placing five 3D objects of varying shape, size and color in the scene as shown in c L s = 1 2 ⇣ L s (s(t), c Z ⇤ ) + L s (s ⇤ , c Z ⇤ ) ⌘ has been selected in the control scheme. The depth distributions in the scene are not estimated nor known apriori. An approximation c Z ⇤ = (0, 0, 1/ Ẑ⇤ ) with Ẑ⇤ = 0.5m was used. A gain of = 0.4 was employed. The control law generates camera velocities that decrease exponentially (see Fig 12d), which causes a satisfactory decrease in the feature errors (see Fig 12c). The average accuracy in positioning in translations is 0.6mm while the rotational accuracy is 0.15 . The camera spatial trajectory is satisfactory as seen from Fig. 12e. The simplification [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] of planar scene introduced in the modelling (see Section II) is therefore a reasonable tradeoff of complexity, even if it is not possible to demonstrate that the sufficient stability condition (32) is ensured since c L s 6 = L s . This demonstrates the robustness of visual servoing with respect to (moderate) modelling approximations. E. 6 dof experimental results Several 6dof positioning experiments were conducted on the ViPER 850 robot. A representative one is presented below while the others can be consulted in [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]. For this experiment, the desired robot pose is such that the camera is at 0.5m in a frontoparallel configuration in front of the target. The image learnt from this pose is shown in Fig 13b. L s as the mean of the desired and current interaction matrices. No pose estimation is performed and the depth is approximated roughly as 0.5m. The appearance of new scene portions in the camera view from the left side of the image does not affect the convergence of the visual servo. This influx of information is handled gracefully thanks to the improved modelling used by the WPM. The error in features related to control of rotational motions is very satisfactory (see Fig13e). On the other hand, from the error decrease in features related to control of translational motions in Figure 13d, it can be seen that the error in feature a n is noisy. This feature is based on the area moment m 00 directly related to the quantity of pixels in the image. Since the lighting conditions are not controlled, this might sometimes contribute to some noise in the features. It is also to be noted that when the interaction matrix is updated at each iteration (for the mean configuration in this case), this noise in the features sometimes make the velocities noisy as well (see Figure 13f). However, this noise does not affect the satisfactory convergence as evidenced by our results. A satisfactory Cartesian behaviour was obtained as shown in Fig 13g . 
The final accuracy in translations is [ 0.05mm, 1.1mm, 0.08mm] and for the rotations is [0.18 , 0.006 , 0.019 ]. Let us finally note that a superior strategy would be to use the photometric moments during the beginning of the servo and to switch over to the pure luminance feature near convergence (when the error norm is below a certain lower bound). This strategy would ensure both enhanced convergence domain thanks to photometric moments and excellent accuracies at convergence thanks to luminance feature. Let us finally note that it is possible to use a normalized intensity level in order to be robust to global lighting variations. Such a normalization can be easily obtained by computing in a first step the smallest and highest values observed in the image. This simple strategy does not modify any modelling step presented in this paper as long as the parts of the scene corresponding to these extremal values do not leave the image (or new portions with higher or smaller intensities do not enter in the camera field of view), which would thus allow obtaining exactly the same results in that case. On the other hand, if the extremal values do not correspond to the same parts of the scene, the induced perturbations may cause the failure of the servoing. VI. CONCLUSION This paper proposed a novel visual servoing scheme based on photometric moments, which capture the image intensities in the form of image moments. The analytical form of the interaction matrix has been derived for these new features. Visual servoing is demonstrated on scenes which do not contain a discrete set of points or monotone segmented objects. Most importantly, the proposed enhanced model takes into account the effect of the scene portions which appear and disappear from the camera field of view during the visual servoing. Existing results based on moment invariants are then exploited to obtain visual features from the photometric moments. The control using these visual features is performant for large SCARA motions (where the images acquired during the servo have very less overlap with the desired image), with a large convergence domain in comparison to both the pure luminance feature and to features based on nonweighted moments. The proposed approach can also be used with non planar environments. This paper thus brings notable improvements over the pure luminance feature and existing moments-based VS methods. The 6 dof control using weighted photometric moments yielded satisfactory results for small displacements to be realized. The control can be rendered suitable for large displacements if the alteration in invariance properties induced by the weighting function can be prevented. So, an important future direction of work would be about the formulation of alternate weighting strategies that preserve the invariance properties as in the non-weighted moments. This is an open and challenging problem that, once solved, would ease a complete theoretical stability and robustness analysis. Also, it is to be noted that the method will certainly fail when the shared portions between the initial and desired images are too low. Another distinction with respect to geometric approaches is that the performance depends on the image contents and hence large uniform portions with poorly texture scenes might pose issues for the servoing. Despite these obvious shortcomings, we believe that direct approaches will become more commonplace and lead to highly performant visual servoing methods. APPENDIX A. 
Interaction matrix of r_P1 and r_P2

In (42), on expanding the terms (x - x_g + x_sh)^p and (y - y_g + y_sh)^q using the binomial theorem, the shifted moments can be expressed in terms of the centred moments:

mu~_pq = sum_{k=0}^{p} sum_{l=0}^{q} C(p,k) C(q,l) x_sh^k y_sh^l mu_{p-k,q-l}   (46)

where the centred moments are defined by

mu_pq = integral over pi of (x - x_g)^p (y - y_g)^q w(x) I(x) dx dy   (47)

Differentiating (46) will yield the interaction matrix of the shifted moments, expressed as a combination of L_{x_sh}, L_{y_sh} and the interaction matrices L_{mu_{p-k,q-l}} of the centred moments (48), where, from the definition of the shift points,

L_{x_sh} = (cos theta / (2 sqrt(m_00))) L_{m00} - sqrt(m_00) sin theta L_theta
L_{y_sh} = (sin theta / (2 sqrt(m_00))) L_{m00} + sqrt(m_00) cos theta L_theta   (49)

with theta = alpha for shift point P_1 and theta = alpha + pi/2 for shift point P_2. Further, by differentiating (47), we obtain L_{mu_{p-k,q-l}}, with r = p + q - k - l the order of the centred moment involved. Knowing (48) and (49), the interaction matrix for any shifted moment of order p + q can be obtained. The next step is to compute the interaction matrices of the two polynomials of (41) by differentiating (41). Finally, the interaction matrix L_r is directly obtained by differentiating (40).

Footnotes:
1. Note that the general analytic form of L_mpq could be obtained with n > 1 for non planar scenes, as was done in [10] for the geometric moments.
2. ...or white, with the intensity I redefined to I_max - I; the rest of the developments remain identical.

Figure captions:
Fig. 1. a) Evaluation of contour integrals in the interaction matrix developments, b) Custom exponential function w(x, y) = exp(-650 (x^2 + y^2)^2) in the domain -0.4 <= x <= 0.4 and -0.3 <= y <= 0.3. Gradual reduction in importance from maximum (dark red) in the centre outwards to minimum (blue) at the edges.
Fig. 2. Shift points P_1 (x_g + x_sh1) and P_2 (x_g + x_sh2) with respect to which the shifted moments are computed.
Fig. 3. Simulation results with UWPM in perfect conditions.
Fig. 5. Experimental results in SCARA mode actuation pertaining to Section IV-B1.
Fig. 6. 6 dof experimental results pertaining to Section IV-B2.
Fig. 7. Simulation V-A: 6 dof VS with WPM pertaining to Section V-A.
Fig. 8. Simulation V-A: 6 dof VS comparison to UWPM and pure luminance (see Fig. 7).
Fig. 9. 4 dof simulation results under large rotations (see Section V-B).
Fig. 10. Desired image in (a) and a sampling of different images from the 243 generated initial poses are shown in (b)-(d).
Fig. 12. 4 dof experimental results with a non planar scene (see Section V-D).
Fig. 13. WPM 6 dof experimental results (see Section V-E).

With the same line of reasoning, the contour integral in m^grady_pq also vanishes. Then (21) transforms to the following simple form:

m^gradx_pq = -p m_{p-1,q},   m^grady_pq = -q m_{p,q-1}   (22)

with x_g = m_10/m_00 and y_g = m_01/m_00 the centre of gravity coordinates, Z* the desired depth, and finally alpha = (1/2) arctan(2 mu_11 / (mu_20 - mu_02)), made of centred moments given by: mu_20 = m_20 - m_00 x_g^2, mu_02 = m_02 - m_00 y_g^2, mu_11 = m_11 - m_00 x_g y_g.

As shown in Fig 2, one shift point is selected along the major orientation (theta = alpha) and the second point orthogonal to the previous (theta = alpha + pi/2), such that we have: P_1 [x_g + sqrt(m_00) cos(alpha), y_g + sqrt(m_00) sin(alpha)] and P_2 [x_g + sqrt(m_00) cos(alpha + pi/2), y_g + sqrt(m_00) sin(alpha + pi/2)].

v_c = -lambda L_I^+ (I - I*). The velocity profiles generated are shown in Figure 4c. The pure luminance experiment is not successful as it results in an enormous final error ||e|| ~ 10^9, as seen from Fig 4a.

Fig 12f. In the initial acquired image (see Fig 12a), 3 out of these 5 objects are not fully visible. The WPM were in fact conceived for use in such scenarios. Rotational displacement of 10 degrees around the optic axis and translations of c*t_c = [1.5cm, 1cm, 8cm] are required for convergence. Once again, the mean interaction matrix

The initial pose is chosen such that the image in Fig 13a is observed. Let us note that Lauren Bacall, present in the left part of the desired image, is completely absent from the initial image. The corresponding difference image is shown in Fig 13c. There is no monotone segmented object and the assumption about a uniform black background is clearly not valid in this case. Nominal displacements of [-0.35cm, 1.13cm, 6.67cm] in translation and [0.33, 1.05, 12.82] degrees in rotation are required for convergence. The control law in (31) with the features in (44) is used, with
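A direct implementation of the appendix quantities is short: the shift offsets of Fig. 2 and the binomial expansion (46) of the shifted moments can be written as follows, with centred moments supplied as a dictionary keyed by (p, q). This is a sketch, not the authors' code.

import numpy as np
from math import comb

def shift_offsets(m00, alpha):
    # Offsets (x_sh, y_sh) of the two shift points of Fig. 2:
    # P1 = (x_g + x_sh1, y_g + y_sh1) along the major orientation, P2 orthogonal to it
    r = np.sqrt(m00)
    sh1 = (r * np.cos(alpha), r * np.sin(alpha))
    sh2 = (r * np.cos(alpha + np.pi / 2.0), r * np.sin(alpha + np.pi / 2.0))
    return sh1, sh2

def shifted_moment(mu, p, q, x_sh, y_sh):
    # Eq. (46): shifted moment of order p+q obtained from centred moments mu[(p,q)]
    return sum(comb(p, k) * comb(q, l) * (x_sh ** k) * (y_sh ** l) * mu[(p - k, q - l)]
               for k in range(p + 1) for l in range(q + 1))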
58,913
[ "753133", "15722" ]
[ "303079", "525244" ]
01758280
en
[ "info" ]
2024/03/05 22:32:10
2017
https://theses.hal.science/tel-01758280/file/2017IMTA0032_AflatoonianAmin.pdf
Dr Karine Guil An outsourced on-demand service is divided into a customer part and an SP one. The latter exposes to the former APIs which allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based NBI allowing opening up a secured BYOC-enabled API. The asynchronous nature of this protocol together with its integrated security functions, eases the outsourcing of control into a multi-tenant SDN framework. Delegating the control of all or a part of a service introduces some potential valueadded services. Security applications are one of these BYOC-based services that might be provided by an SP. We discuss their feasibility through a BYOC-based Intrusion Prevention System (IPS) service example. v Résumé Au cours des dernières décennies, les fournisseurs de services (SP) ont eu à gérer plusieurs générations de technologies redéfinissant les réseaux et nécessitant de nouveaux modèles économiques. L'équilibre financier d'un SP dépend principalement des capacités de son réseau qui est valorisé par sa fiabilité, sa disponibilité et sa capacité à fournir de nouveaux services. À contrario l'évolution permanente du réseau offre au SP l'opportunité d'innover en matière de nouveaux services tout en réduisant les coûts et en limitant sa dépendance auprès des équipementiers. L'émergence récente du paradigme de la virtualisation modifie profondément les méthodes de gestion des services et conduit à une évolution des services réseau traditionnels vers de nouveaux services réseau à la demande. Ceux-ci permettent aux clients du SP de déployer et de gérer leurs services de manière autonome et optimale grâce à l'ouverture par le SP d'une interface bien définie sur sa plate-forme. Pour offrir cette souplesse de fonctionnement à ses clients en leurs fournissant des capacités réseau à la demande, le SP doit pouvoir s'appuyer sur une plate-forme de gestion permettant un contrôle dynamique et programmable du réseau. Nous montrons dans cette thèse qu'une telle plateforme peut être fournie grâce à la technologie SDN (Software-Defined Networking). Nous proposons une caractérisation préalable de la classe de services réseau à la demande, qui en fixe le périmètre. Les contraintes de gestion les plus faibles que ces services doivent satisfaire sont identifiées et intégrées à un modèle abstrait de leur cycle de vie. Celui-ci détermine deux vues faiblement couplées, l'une spécifique au client et l'autre au SP. Ce cycle de vie est complété par un modèle de données qui précise chacune de ses étapes. L'architecture SDN ne prend pas en charge toutes les étapes du cycle de vie précédent. Nous l'étendons à travers un Framework original permettant la gestion de toutes les étapes identifiées dans le cycle de vie. Ce Framework est organisé autour d'un orchestrateur de services et d'un orchestrateur de ressources communiquant via une interface interne. Sa mise en oeuvre nécessite une encapsulation du contrôleur SDN. L'exemple du VPN MPLS sert de fil conducteur pour illustrer notre approche. Un PoC basé sur le contrôleur OpenDaylight ciblant les parties principales du Framework est proposé. La maitrise par le SP de l'ouverture contrôlée de la face nord du SDN devrait être profitable tant au SP qu'à ses clients. 
Nous proposons de valoriser notre Framework en introduisant un modèle original de contrôle appelé BYOC (Bring Your Own Control) qui formalise, selon différentes modalités, la capacité d'externaliser un service à la demande par la délégation d'une partie de son contrôle à un tiers externe. L'ouverture d'une interface de contrôle offrant un accès de granularité variable à l'infrastructure sous-jacente, nous conduit à prendre vi en compte certaines exigences incontournables telles que le multi-tenancy ou la sécurité, au niveau de l'interface Northbound (NBI) du contrôleur SDN. Un service externalisé à la demande est structurée en une partie client et une partie SP. Cette dernière expose à la partie client des API qui permettent de demander l'exécution des actions induites par les différentes étapes du cycle de vie. Nous présentons un NBI basé sur XMPP permettant l'ouverture d'une API BYOC sécurisée. La nature asynchrone de ce protocole ainsi que ses fonctions de sécurité natives facilitent l'externalisation du contrôle dans un environnement SDN multi-tenant. La délégation du contrôle de tout ou partie d'un service permet d'enrichir certains services d'une valeur ajoutée supplémentaire. Les applications de sécurité font partie des services BYOC pouvant être fournis par un SP. Nous illustrons leur faisabilité par l'exemple du service IPS (système de prévention d'intrusion) décline en BYOC. iii Abstract Over the past decades, Service Providers (SPs) have been crossed through several generations of technologies redefining networks and requiring new business models. The economy of an SP depends on its network which is evaluated by its reliability, availability and ability to deliver new services. The ongoing network transformation brings the opportunity for service innovation while reducing costs and mitigating the locking of suppliers. Digitalization and recent virtualization are changing the service management methods, traditional network services are shifting towards new on-demand network services. These ones allow customers to deploy and manage their services independently and optimally through a well-defined interface opened to the SP's platform. To offer this freedom to its customers and to provide on-demand network capabilities, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that this platform can be provided by Software-Defined Networking (SDN) technology. Indeed, the SDN controller can be used to provide an interface to service customers where they could on-demand subscribe to new services and modify or retire existing ones. To this end we first characterize the perimeter of this class of new services. We identify the weakest management constraints that such services should meet and we integrate them in an abstract model structuring their lifecycle. This one involves two loosely coupled views, one specific to the customer and the other one to the SP. This double-sided service lifecycle is finally refined with a data model completing each of its steps. The SDN architecture does not support all stages of the previous lifecycle. We extend it through an original Framework allowing the management of all the steps identified in the lifecycle. This Framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface. Its implementation requires an encapsulation of the SDN controller. The example of the MPLS VPN serves as a guideline to illustrate our approach. 
A PoC based on the OpenDaylight controller targeting the main parts of the Framework is proposed. Providing to the SP the mastering of SDN's openness on its northbound side should largely be profitable to both SP and customers. We therefore propose to value our Framework by introducing a new and original control model called BYOC (Bring Your Own Control) which formalizes, according to various modalities, the capability of outsourcing an on-demand service by the delegation of part of its control to an external third party. Opening a control interface and offering a granular access to the underlying infrastructure leads us to take into account some characteristics, such as multi-tenancy or security, at the Northbound Interface (NBI) level of the SDN controller. Dans cette thèse nous nous intéressons à la gestion des services de télécommunication dans un environnement contrôlé. L'exemple de la gestion d'un service de connectivité (MPLS xxii VPN) enrichi d' un contrôle de la qualité de service (QoS) centralisé, nous sert de fil conducteur pour illustrer notre analyse. Au cours de la dernière décennie, les réseaux MPLS ont évolué et sont devenus critiques pour les fournisseurs de services. MPLS est utilisé à la fois pour une utilisation optimisée des ressources et pour l'établissement de connexions VPN. List of Tables À mesure que la transformation du réseau devient réalité et que la numérisation modifie les méthodes de gestion des services, les services de réseau traditionnels sont progressivement remplacés par les services de réseau à la demande. Les services à la demande permettent aux clients de déployer et de gérer leurs services de manière autonome grâce à l'ouverture par le fournisseur de service d'une interface bien définie sur sa plate-forme. Cette interface permet à différents clients de gérer leurs propres services possédant chacun des fonctionnalités particulières. Pour offrir cette souplesse de fonctionnement à ses clients en leurs fournissant des capacités réseau à la demande, le fournisseur de services doit pouvoir s'appuyer sur une plate-forme de gestion permettant un contrôle dynamique et programmable du réseau. Nous montrons dans cette thèse qu'une telle plate-forme peut être fournie grâce à la technologie SDN (Software-Defined Networking). Un réseau de télécommunications fait appel à différentes technologies fournissant plusieurs types de services. Ces services sont utilisés par plusieurs clients et une mauvaise configuration d'un service client peut avoir des conséquences sur la qualité de service des autres. La position centrale du contrôleur SDN permet à l'opérateur de gérer tous les services et équipements. Cependant la fourniture d'une interface de gestion et de contrôle de service à granularité variable s'appuyant sur ce contrôleur requiert la mise en place d'une couche supplémentaire de gestion des services au-delà du contrôleur et permettant au fournisseur de services de gérer le cycle de vie du service tout en mettant à la disposition de ses clients une interface de gestion de service. Nous présentons dans le cadre de cette thèse un framework basé sur SDN permettant à la fois de gérer le cycle de vie d'un service et d'ouvrir avec une granularité contrôlable l'interface de gestion de services. 
La granularité de cette interface permet de fournir différents -Création de service : L'application spécifie les caractéristiques de service dont elle a besoin, elle négocie le SLA associé qui sera disponible pour une durée limitée et enfin elle demande une nouvelle création de service. -Retrait du service : l'application retire le service à la fin de la durée négociée. Cette étape définit la fin de la durée de vie. Les applications de type 2 tire parti des événements provenant de la NBI pour surveiller le service. Il est à noter que ce service peut être créé par la même application qui surveille le service. Ce type d'application ajoute une étape supplémentaire au cycle de vie du service côté client. Ce cycle de vie contient trois étapes principales : -Création de service. -Surveillance de service : Une fois créé, le service peut être utilisé par le client pour une durée négociée. Pendant ce temps, certains paramètres réseau et de service seront surveillés grâce aux événements et aux notifications envoyées par le SDNC à l'application. -Retrait de service. Dans un cas plus complexe, c'est-à-dire les applications de type 3, une application peut créer le service via la NBI, elle surveille le service via cette interface et, en fonction des événements à venir, elle reconfigure le réseau via le SDNC. Ce type de contrôle ajoute une étape rétroactive au cycle de vie du service côté client. Celui-ci contient quatre étapes principales : -Création de service. -Surveillance de service. -Modification de service : Les événements remontés par les notifications peuvent déclencher un algorithme implémenté dans l'application (implémenté au nord du SDNC), dont la sortie reconfigure les ressources réseau sous-jacentes via le SDNC. -Retrait de service. Un cycle de vie global de service côté client contient toutes les étapes préalables nécessaires pour gérer les trois types d'applications, discutées précédemment. Nous introduisons dans ce modèle une nouvelle étape déclenchée par les opérations côté opérateur : -Création de service. -Surveillance de service. -Modification de service. -Mis à jour de service : La gestion du réseau de l'opérateur peut entraîner la mise à jour du service. Cette mise à jour peut être émise en raison d'un problème survenant lors de l'utilisation du service ou d'une modification de l'infrastructure réseau. Cette mise à jour peut être minime, telle que la modification d'une règle dans l'un des équipements sous-jacents, ou peut avoir un impact sur les étapes précédentes, avec des conséquences sur la création du service et / ou sur la consommation du service. -Retrait de service. Le cycle de vie du service côté opérateur comprend en revanche six étapes principales : xxv -Demande de service : Une fois qu'une demande de création ou de modification de service arrive du portail de service des utilisateurs, le gestionnaire de demandes négocie le SLA et une spécification de service de haut niveau afin de l'implémenter. Il convient de noter qu'avant d'accepter le SLA, l'opérateur doit s'assurer que les ressources existantes peuvent gérer le service demandé au moment où il sera déployé. En cas d'indisponibilité, la demande sera mise en file d'attente. -Décomposition de service, compilation : Le modèle de haut niveau du service demandé est décomposé en plusieurs modèles de service élémentaires qui sont envoyés au compilateur de service. Le compilateur génère un ensemble de configurations de ressources réseau qui composent ce service. 
-Configuration de service : Sur la base du précédent ensemble de configurations de ressources réseau, plusieurs instances de ressources virtuelles correspondantes seront créées, initialisées et réservées. Le service demandé peut ensuite être implémenté sur ces ressources virtuelles créées en déployant des configurations de ressources réseau générées par le compilateur. -Maintenance et surveillance de service : Une fois qu'un service est mis en oeuvre, sa disponibilité, ses performances et sa capacité doivent être maintenues automatiquement. En parallèle, un gestionnaire de journaux de service surveillera tout le cycle de vie du service. -Mise à jour de service : Lors de l'exploitation du service, l'infrastructure réseau peut nécessiter des modifications en raison de problèmes d'exécution ou d'évolution technique, etc. Elle entraîne une mise à jour susceptible d'avoir un impact différent sur le service. La mise à jour peut être transparente pour le service ou peut nécessiter de relancer une partie des premières étapes du cycle de vie du service. -Retrait de service : la configuration du service sera retirée de l'infrastructure dès qu'une demande de retrait arrive au système. Le retrait du service émis par l'exploitant est hors du périmètre de ce travail. Un framework d'approvisionnement de services SDN Les processus de gestion des services peuvent être divisés en deux familles plus génériques : la première gère toutes les étapes exécutants les taches liées au service, depuis la négociation Ces modèles permettent de dériver le type et la taille des ressources nécessaires pour implémenter ce service. Le SO demande la réservation de ressources virtuelles à partir de la couche inférieure et déploie la configuration de service sur les ressources virtuelles via un SDNC. L' "Orchestrateur de ressource" gère les opérations sur les ressources : -Réservation de ressources -Surveillance des ressources Cet orchestrateur, qui gère les ressources physiques, réserve et lance les ressources virtuelles. Il maintient et surveille les états des ressources physiques en utilisant son interface sud. L'architecture interne de SO est composée de cinq modules principaux : -Gestionnaire de demande de service (SCM) : il traite les demandes de service des clients et négocie les spécifications du service. -Gestionnaire de décomposition et compilation de service (SDCM) : il répartit toutes les demandes de service reçues en un ou plusieurs modèles de service élémentaires qui sont des modèles de configuration de ressources. -Gestionnaire de configuration de service (SCM) : il configure les ressources physiques ou virtuelles via le SDNC. -Contrôleur SDN (SDNC) -Gestionnaire de surveillance de service, d'une part, il reçoit les alarmes et notifications à venir de l'orchestrateur inférieur, RO, et d'autre part il communique les notifications de service à l'application externe via la NBI. Bring Your Own Control (BYOC) Conclusion et perspectives Chapter 1 Introduction In this chapter we introduce the context of this thesis followed by the motivation and background of this studies. Then we present our main contributions and we conclude by the structure of this document. Thesis context Over the past two decades, service providers have been crossed through several generations of technologies redefining networks and requiring new business models. The economy of a Service Provider depends on its network which is evaluated by its reliability, availability and ability to deliver services. 
Due to the introduction of new technologies requiring a pervasive network, new and innovative applications and services are increasing the demand for network access [START_REF] Metzger | Future Internet Apps: The Next Wave of Adaptive Service-Oriented Systems?[END_REF]. Service Providers, on the other hand, are looking for a cost-effective solution to meet this growing demand while reducing network complexity [START_REF] Benson | Unraveling the Complexity of Network Management[END_REF] and costs (i.e. Capital Expenditure (CapEx) and Operating Expenditure (OpEx)), and accelerating service innovation. The network of an Operator is designed on the basis of equipment that is carefully developed, tested and configured. Because of the importance of this network, operators avoid the risks associated with modifying it. Hardware elements, protocols and services require several years of standardization before being integrated into the equipment by suppliers. This hardware lock-in reduces the ability of Service Providers to innovate, integrate and develop new services. The network transformation brings the opportunity for service innovation while reducing costs and mitigating vendor lock-in. Transformation means making it possible to exploit network capabilities through the power of applications. This transformation converts the Operator network from a simple utility into a digital service delivery platform. The latter not only increases service velocity, but also creates new sources of revenue. Recently, Software-Defined Networking (SDN) [START_REF] Mckeown | Software-defined networking[END_REF][START_REF] Kim | Improving network management with software defined networking[END_REF] and Network Function Virtualization (NFV) [START_REF] Mijumbi | Network Function Virtualization: State-of-the-Art and Research Challenges[END_REF][START_REF] Han | Network function virtualization: Challenges and opportunities for innovations[END_REF] technologies have been proposed to accelerate the transformation of the network. The promise of these technologies is to bring more flexibility and agility to the network while creating cost-effective solutions. This will allow Service Providers to become digital businesses. The SDN concept decouples the control and forwarding functionalities of network devices by putting the former on a central unit called the controller [START_REF] Kreutz | Software-Defined Networking: A Comprehensive Survey[END_REF]. This separation makes it possible to control the network from a central application layer, simplifying network control and management tasks, and the programmability of the controller accelerates the Service Provider's network transformation. As the network transformation becomes a reality and digitalization changes service management methods, traditional network services are being replaced by on-demand network services. On-demand services allow customers to deploy and manage their services independently through a well-defined interface opened onto the Service Provider's platform.
Motivation and background
This interface allows different customers to manage their own services, each with its own specific features. For example, to manage a VPN service, a customer might have several types of interactions with the Service Provider platform. In the first case, a customer might request a fully managed VPN interconnecting its sites.
For this type of service, the customer holds only abstract information about the service and provides a simple service request to the Service Provider. The second case is a customer, with a more professional profile, who monitors the service by retrieving some network metrics sent from the Provider's platform. The third type consists of a more dynamic and open service sold to customers wishing to control all or part of their services. For this type of service, based on the metrics retrieved from the Service Provider's platform, the customer re-configures the service.
Problem statement
In order to offer this freedom to its customers and to provide on-demand network capability, the Service Provider must be able to rely on a dynamic and programmable network control platform. We argue that this platform can be provided by SDN technology.
Contributions of this thesis
As part of this thesis we present an SDN-based framework allowing both to manage the lifecycle of a service and to open the service management interface with a fine granularity. The granularity of this interface makes it possible to provide different levels of abstraction to the customer, each one offering part of the capabilities needed by an on-demand service, discussed in Section 1.2. The following are the main research contributions of this thesis.
- A double-sided service lifecycle and the associated data model. We first characterise the applications that might be deployed upon the northbound side of an SDN controller, through their lifecycle. The characterisation rests on a classification of the complexity of the interactions between the outsourced applications and the controller. This leads us to a double-sided service lifecycle presenting two articulated points of view: client and operator. The service lifecycle is refined with a data model completing each of its steps.
Document structure
In Chapter 2 we present a state of the art on SDN and NFV technologies. We focus our study on the SDN control and application layers. We present two classifications of SDN applications. For the first classification we are interested in the functionality of applications and their contribution to the deployment of the controller. For the second one, we present different types of applications according to the model of the interaction between them and the controller. We distinguish in this second classification three types of applications, each one requiring specific characteristics at the Northbound Interface (NBI) level. In Chapter 3 we discuss the deployment of a network service in an SDN environment. In the first part of this chapter, we present MPLS networks with a quick analysis of the control and forwarding planes of these networks in the legacy world. This analysis shows which information is used to configure such a service. This information is, for confidentiality reasons, managed by the operator, and most of it is not manageable by the customer. In the second part of this chapter, we analyze the deployment of the MPLS service on an SDN network through the OpenDaylight controller. For this analysis we consider two possibilities: (1) deployment of the service using third-party applications developed on the controller (the VPN Service project), and (2) deployment of the service using the northbound Application Programming Interfaces (APIs) provided by the controller's native functions.
The results obtained in the second part, together with the case study discussed in the first part, highlight the lack of a service management system in current controllers. This justifies the introduction of a service management framework providing the service management interfaces and managing the service lifecycle. In order to refine the perimeter of this framework, we first discuss a service lifecycle study in Chapter 4. This analysis is carried out on two sides: customer and operator. For the service lifecycle analysis from the client-side perspective, we rely on the classification of applications made in Chapter 2. During this analysis we study the additional steps that each application type adds to the lifecycle of a service. For the analysis of the lifecycle from the operator-side viewpoint, we study all the steps an operator takes during the deployment and management of a service. At the end of this chapter, we discuss the data model allowing each step of the service lifecycle to be implemented. This data model is based on a two-layered approach analyzing a service provisioning system on two layers: service and device. Based on this analysis, we study the data model of each service lifecycle step, helping to define the internal architecture of the service management framework. Service lifecycle analysis leads us to present, in Chapter 5, the SDN-based service management framework. This framework breaks down all the tasks an operator performs to manage the lifecycle of a service. Through an MPLS VPN service deployment example we detail all of these steps. Part of the tasks bear on the service presented to the client, and part of them on the resources managed by the operator. We organize these two parts into two orchestration systems, called respectively Service Orchestrator and Resource Orchestrator. In order to analyze the framework's capability in service lifecycle management, we take the example of an MPLS VPN service update. With this example we show how the basic APIs provided by an SDN controller can be used by the framework to deploy and manage a requested service. The presented framework allows us not only to manage the service lifecycle but also to open an NBI to the client. This interface allows us to provide the different levels of abstraction used by each of the three types of applications discussed above. In Chapter 6, we present for the first time the new service model: Bring Your Own Control (BYOC). This new service allows a customer or a third-party operator to participate in the service lifecycle. This is the practical case of a type-3 application, where the client configures a service based on the events coming up from the controller. We analyze the characteristics of an interface allowing such a BYOC-type service to be deployed. We present in this chapter the XMPP protocol as a good candidate enabling us to implement this new service model. In Chapter 7, we apply the BYOC model to a network service. For this use case we choose to externalize the control of an Intrusion Prevention System (IPS). Outsourcing the IPS service control involves implementing the attack detection engine in an external controller, called the Guest Controller (GC). In Chapter 8, we point out the main contributions of this thesis and give research perspectives in relation to BYOC services in SDN/NFV and 5G networks.
Chapter 2 Programming the network
In this chapter we present, firstly, a state of the art on programmable networks.
Secondly, we study Software-Defined Networking (SDN) as a technology allowing network equipment to be controlled and programmed to provide on-demand services. For this analysis we discuss the general architecture of SDN, its layers and its interfaces. Finally, we discuss SDN applications, their different types and the impact that these applications can have on the internal architecture of an SDN controller.
Technological context
Nowadays the Internet, whose number of users exceeds 3.7 billion [START_REF]World Internet Usage and Population Statistics[END_REF], is massively used in all human activities, from professional to private ones, through academic, administrative and other uses. The infrastructure supporting Internet services rests on various interconnected communication networks managed by network operators. This continuously growing infrastructure evolves very dynamically and has become huge, complex, and sometimes locally ossified.
Fundamentals of programmable networks
The high performance constraints required for routers in packet-switched networks limit the authorized processing to the sole modification of the packet headers. The strength of this approach is also its weakness, because the counterpart of their high performance is their lack of flexibility. The evolution brought by research on the programmability of the network has led to the emergence of strong ideas whose relevance can be measured by their intellectual longevity. The seed of the idea of having APIs allowing a flexible management of network equipment goes back at least to the OpenSig initiative [START_REF] Campbell | A Survey of Programmable Networks[END_REF], which aimed to develop and promote standard programmable interfaces crafted on network devices [START_REF] Biswas | The IEEE P1520 standards initiative for programmable network interfaces[END_REF]. It is one of the first fundamental steps towards the virtualization of networks, the main objective of which consisted in switching from a strongly coupled network, where the hardware and the software are intimately linked, to a network where the hardware and the software are decorrelated. Concretely, it consists in keeping the data forwarding capability inside the box while outsourcing the control. In a general setting, the control part of the processing carried out in routers roughly consists in organizing, in a smart and performant way, the local forwarding of each received packet while ensuring global soundness between all the boxes involved in its path. The outsourcing of the control has been designed according to different philosophies. One aesthetically nice but extreme vision, known as « Active Networks », recommends that each packet may carry, in addition to its own data, the code of the process which will be executed at each node it crosses.
Software-Defined Networking (SDN)
SDN has been proposed to change the way networks operate and to overcome current network limitations. It enables simple network data-path programming, allows easier deployment of new protocols and innovative services, and opens up network virtualization and management by separating the control and data planes [START_REF] Kim | Improving network management with software defined networking[END_REF]. This paradigm is attracting attention from both academia and industry.
SDN breaks the vertical integration of traditional network devices by decoupling the control and data planes: network devices become simple forwarding devices programmed by a logically centralized application called the controller or network operating system. As the OpenFlow-based SDN community grows, a large variety of OpenFlow-enabled networking hardware and software switches has been brought to the market. Hardware devices are produced for a wide range of purposes, from small-business switches [START_REF] Hp | zl Switch Series[END_REF][18] to high-end ones [START_REF] Ferkouss | A 100Gig network processor platform for openflow[END_REF] used for their high switching capacity. Software switches, on the other hand, are mostly OpenFlow-enabled applications, and are used to provide virtual access points in data centers and to build virtualized infrastructures.
Architecture
SDN Southbound Interface (SBI)
The communication between the Infrastructure layer and the control layer is ensured through a well-defined API called the Southbound Interface (SBI), which is the element separating the data and the control plane. It provides the upper layer with a common interface to manage physical or virtual devices through a mixture of different southbound APIs and control plug-ins. The most accepted and implemented of such southbound APIs is OpenFlow [START_REF] Mckeown | OpenFlow: Enabling Innovation in Campus Networks[END_REF], standardized by the Open Networking Foundation (ONF) [START_REF] Onf | Open Networking Foundation[END_REF].
OpenFlow Protocol
The SDN paradigm started with the forwarding and control layer separation idea introduced by the OpenFlow protocol. This protocol enables flow-based programmability of a network device. Indeed, OpenFlow provides the SDN controller with an interface to create, update and delete flow entries, reactively or proactively. (Fig. 2.5 shows an SDN controller managing three OpenFlow switches, Switch 1, Switch 2 and Switch 3, each composed of a control agent and a forwarder, on the path between Host A and Host B.) Upon receiving the first packet, Switch 1 looks up its flow table; if no match for the flow is found, the switch sends an OpenFlow PACKET_IN message to the SDN controller for instructions. Based on this message, the controller creates a PACKET_OUT message and sends it to the switch. This message is used to add a new entry to the flow table of the switch. Programming a network device using OpenFlow can be done in three ways [START_REF] Salisbury | OpenFlow: Proactive vs Reactive Flows[END_REF]:
- Reactive flow instantiation. When a new flow arrives at the switch, it looks up its flow table and, if no entry matches the flow, the switch sends a PACKET_IN message to the controller. In the previous example, shown in Fig. 2.5, the SDN controller programs Switch 1 in a reactive manner.
- Proactive flow instantiation. In contrast to the first case, a flow can be defined in advance. In this case, when a new flow comes to the switch there is no interrogation of the controller and the action is taken based on a predefined entry. In our example (Fig. 2.5) the flow programming done for Switches 2 and 3 is proactive. The proactive flow instantiation eliminates the latency introduced by controller interrogation.
- Hybrid flow instantiation. This one is a combination of the two first modes. In our example (Fig. 2.5), for a specific traffic sent by Host A to Host B, the controller programs the related switches using this method.
Switch 1 is programmed reactively and the two other switches (Switch 2 and Switch 3) are programmed proactively. Hybrid flow instantiation combines the flexibility of the reactive mode for fine-grained traffic with low-latency forwarding for the rest of the traffic.
OpenFlow switch
The most recent OpenFlow switch specification (1.5.0) has been defined by the ONF [START_REF]OpenFlow Switch Specification, Version 1.5.0[END_REF]. It describes the following main components:
- The OpenFlow Channel creates a secured channel, over Secure Sockets Layer (SSL), between the switch and a controller. Using this channel, the controller manages the switch via the OpenFlow protocol, allowing commands and packets to be sent from the controller to the switch.
- The Flow Table contains a set of flow entries telling the switch how to process matching flows. These entries include match fields, counters and a set of instructions.
- The Group Table contains a set of groups, each one having a set of actions.
Fig. 2.7 shows an OpenFlow switch flow table. Each flow table contains three columns: rules, actions and counters [START_REF] Mckoewn | Why can't I innovate in my wiring closet?[END_REF]. The rules column contains the header fields used to define a flow. For an incoming packet, the switch looks up the flow table; if a rule matches the header of the packet, the related action of the action table is applied to the packet, and finally the counter value is updated. There are several possible actions to be taken on a packet (Fig. 2.7). The packet can be forwarded to a switch port, sent to the controller, sent to a group table, modified in some fashion, or dropped.
SDN Controller
The control plane, equivalent to a network operating system [START_REF] Gude | NOX: Towards an Operating System for Networks[END_REF], is the intelligent part of this architecture. It controls the network thanks to its centralized view of the network state. On the one hand, this logically centralized control simplifies network configuration, management and evolution through the SBI. On the other hand, it gives an abstract and global view of the underlying infrastructure to the applications through the Northbound Interface (NBI). While interest in SDN keeps growing in different environments, such as home networks [START_REF] Yiakoumis | Slicing Home Networks[END_REF], data center networks [START_REF] Al-Fares | A Scalable, Commodity Data Center Network Architecture[END_REF], and enterprise networks [START_REF] Casado | Ethane: Taking Control of the Enterprise[END_REF], the number of proposed SDN controller architectures and implemented functions is also growing. Despite this large number, most existing proposals implement several core network functions. These functions are used by the upper layers, such as network applications, to build their own logic. Among the various SDN controller implementations, these logical blocks can be classified into: Topology Manager, Device Manager, Stats Manager, Notification Manager and Shortest Path Forwarding. For instance, a controller should be able to provide a network topology model to the upper-layer applications. It should also be able to receive, process and forward events by creating alarm notifications or state changes. As mentioned previously, numerous commercial and non-commercial communities are nowadays developing SDN controllers and proposing network applications on top of them.
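As an illustration of the (rule, action, counter) programming model described above, the sketch below shows how the reactive instantiation discussed earlier can be written against Ryu, one of the Python-based controllers cited in the next paragraph. It is a minimal sketch under simplifying assumptions (a single table and a deliberately naive flooding policy rather than MAC learning), not a complete switching application.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ReactiveFlows(app_manager.RyuApp):
    """Install a flow entry whenever a table-miss packet reaches the controller."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg                      # PACKET_IN sent by the switch on a table miss
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']

        # Rule (match) plus action: flood everything arriving on this port.
        # A real switch application would match on learned MAC addresses instead.
        match = parser.OFPMatch(in_port=in_port)
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        # FLOW_MOD installs the entry so later packets of the flow stay in the
        # data plane; the counters column is updated by the switch itself.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

Run with ryu-manager, this handler is invoked once per table miss; a proactive variant would simply send the same FLOW_MOD when the switch connects, instead of waiting for a PACKET_IN.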
Controllers such as NOX [START_REF] Gude | NOX: Towards an Operating System for Networks[END_REF], Ryu [29], Trema [30], Floodlight [START_REF]Floodlight OpenFlow Controller[END_REF], OpenDaylight [START_REF]The OpenDaylight SDN Platform[END_REF] and ONOS [START_REF] Berde | ONOS: Towards an Open, Distributed SDN OS[END_REF] are among today's leading controllers. These controllers implement basic network functions such as a topology manager, a switch manager, etc., and provide network programmability to applications via the NBI. In order to implement a complex network service on an SDN-based network, service providers face a large number of controllers, each one implementing a large number of core services based on a dedicated workflow and specific properties. R. Khondoker et al. [START_REF] Khondoker | Feature-based comparison and selection of Software Defined Networking (SDN) controllers[END_REF] tried to solve the problem of selecting the most suitable controller by proposing a decision-making template. The decision, however, requires a deep analysis of each controller and depends entirely on the service use case. It is worth mentioning that, in addition to this heterogeneous controller landscape, the diversity of NBI abstraction levels also adds to the challenge.
SDN Northbound Interface (NBI)
In the SDN ecosystem the NBI is the key. This interface allows applications to be independent of a specific implementation. Unlike the southbound interface, where there are some standard proposals (OpenFlow [START_REF] Mckeown | OpenFlow: Enabling Innovation in Campus Networks[END_REF] and NETCONF [START_REF] Enns | Network Configuration Protocol (NETCONF)[END_REF]), the question of a common, standard NBI remains open. Since use cases are still in development, it is still premature to define a standardized NBI. Contrary to its equivalent in the south (the SBI), the NBI is a software ecosystem, which means that the standardization of this interface requires more maturity and a well-standardized SDN framework. In application ecosystems, implementation is usually the leading engine, while standards emerge later [START_REF] Guis | The SDN Gold Rush To The Northbound API[END_REF]. Open and standard interfaces are essential to promote application portability and interoperability across different control platforms. As illustrated in Table 2.1, existing controllers such as Floodlight, Trema, NOX, ONOS, and OpenDaylight propose and define their own APIs in the north [START_REF] Salisbury | The Northbound API-A Big Little Problem[END_REF]. However, each of them has its own specific definitions. The experience gained in developing various controllers will certainly be the basis for a common application-level interface. If we consider the SDN controller as a platform allowing applications to be developed on a resource pool, a northbound API can be compared to the Portable Operating System Interface (POSIX) standard in operating systems [START_REF] Josey | POSIX -Austin Joint Working Group[END_REF]. This interface provides generic functions hiding the operational details of the computer hardware. These common functions allow software to manipulate the hardware while ignoring its technical details. Today, programming languages such as Procera [START_REF] Voellmy | Procera: A Language for Highlevel Reactive Network Control[END_REF] and Frenetic [START_REF] Foster | Frenetic: A Network Programming Language[END_REF] are proposed to follow this logic by providing an abstraction layer on top of controller functions.
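In the absence of a standard NBI, each controller exposes its own REST-style northbound API. As an illustration, the sketch below reads the topology that an OpenDaylight controller publishes over RESTCONF. The base URL, port (8181), credentials and response layout are the defaults assumed here; the exact path differs between controllers and between OpenDaylight releases, so this is an illustrative call rather than a portable one.

```python
import requests

ODL = "http://127.0.0.1:8181"    # assumed controller address and default RESTCONF port
AUTH = ("admin", "admin")        # assumed default credentials

# Read the operational topology exposed by the controller's northbound REST API.
resp = requests.get(
    f"{ODL}/restconf/operational/network-topology:network-topology",
    auth=AUTH, headers={"Accept": "application/json"}, timeout=5)
resp.raise_for_status()

# Print one line per topology: its identifier and the number of discovered nodes.
for topo in resp.json()["network-topology"]["topology"]:
    print(topo.get("topology-id"), len(topo.get("node", [])), "nodes")
```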
The yanc project [START_REF] Monaco | Applying Operating System Principles to SDN Controller Design[END_REF] also offers an abstraction layer simplifying the development of SDN applications. This layer allows programmers to interact with lower-level devices and subsystems through the traditional file system. It may be concluded that a single northbound interface is unlikely to emerge as a winner, because the requirements of different network applications are quite different. For example, APIs for security applications may differ from those for routing. In parallel with its SDN development work, the ONF has begun a vertical effort in its North Bound Interface Working Group (NBI-WG) to present standardized northbound APIs [START_REF] Menezes | North Bound Interface Working Group (NBI-WG) Charter[END_REF]. This work is still ongoing.
SDN Applications Analysis
SDN Applications
At the topmost part of the SDN architecture, the Application layer programs the network behavior through the NBI offered by the SDN controller. Existing SDN applications implement a large variety of network functionalities, from simple ones, such as load balancing and routing, to more complex ones, such as mobility management in wireless networks. This wide variety of applications is one of the major drivers of the adoption of SDN in current networks. Regardless of this variety, most SDN applications can be grouped into five main categories [START_REF] Hu | A Survey on Software-Defined Network and Open-Flow: From Concept to Implementation[END_REF], including (I) traffic engineering, (II) mobility and wireless, (III) measurement and monitoring, (IV) security and dependability, and (V) data center networking.
Traffic engineering
The first group of SDN applications consists of proposals that monitor the traffic through the SDN Controller (SDNC) and provide load balancing and energy consumption optimization. Load balancing, as one of the first proposed SDN applications [START_REF] Nikhil | Aster*x: Load-Balancing Web Traffic over Wide-Area Networks[END_REF], covers a wide range of network management tasks, from redirecting client request traffic to simplifying the placement of network services. For instance, the work [START_REF] Wang | OpenFlow-based Server Load Balancing Gone Wild[END_REF] proposes the use of wildcard-based rules for aggregating groups of client requests based on their Internet Protocol (IP) prefixes. In [START_REF] Handigol | Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow[END_REF], a network application is likewise used to distribute the network traffic among the available servers based on the network load and the computing capacity of the servers. The ability to monitor the network load through the SBI enables applications such as energy consumption optimization and traffic optimization. The information received from the SBI can be used by specialized optimization algorithms to save up to 50% of the network energy consumption [START_REF] Heller | ElasticTree: Saving Energy in Data Center Networks[END_REF] by dynamically scaling links and devices in and out. This capacity can be leveraged to provision dynamic and scalable services, such as Virtual Private Networks (VPNs) [START_REF] Scharf | Dynamic VPN Optimization by ALTO Guidance[END_REF], and to increase network efficiency by optimizing rule placement [START_REF] Nguyen | Optimizing Rules Placement in OpenFlow Networks: Trading Routing for Better Efficiency[END_REF].
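To make the wildcard-based aggregation idea concrete, the toy sketch below splits the IPv4 client space into equal-size prefixes and assigns each prefix to one replica server; each (prefix, server) pair would then become a single wildcard rule in the load-balancing switch instead of one rule per client. The addresses and the flat /2 split are purely illustrative and are not taken from the cited work.

```python
import ipaddress

# Replica servers behind the load balancer (illustrative addresses).
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

# Split the whole IPv4 client space into as many equal-size prefixes as there
# are servers (four /2 prefixes here).
prefixes = ipaddress.ip_network("0.0.0.0/0").subnets(new_prefix=2)

# One wildcard match per prefix, rewriting the destination to the chosen server.
wildcard_rules = [
    {"match": {"ipv4_src": str(prefix)}, "action": {"set_ipv4_dst": server}}
    for prefix, server in zip(prefixes, servers)
]

for rule in wildcard_rules:
    print(rule)
```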
Mobility and wireless
The programmability of the stack layers of wireless networks [START_REF] Bansal | OpenRadio: A Programmable Wireless Dataplane[END_REF], and the decoupling of the wireless protocol definition from the hardware, introduce new wireless features, such as the creation of on-demand Wireless Access Points (WAPs) [START_REF] Vestin | CloudMAC: Towards Software Defined WLANs[END_REF], load balancing [START_REF] Gudipati | SoftRAN: Software Defined Radio Access Network[END_REF], seamless mobility [START_REF] Dely | OpenFlow for Wireless Mesh Networks[END_REF] and Quality of Service (QoS) management [START_REF] Li | Toward Software-Defined Cellular Networks[END_REF]. These traditionally hard-to-implement features are realized with the help of the well-defined logic exposed by the SDN controller. The decoupling of the wireless hardware from its protocol definition provides a software abstraction that allows Media Access Control (MAC) layers to be shared in order to provide programmable wireless networks [START_REF] Bansal | OpenRadio: A Programmable Wireless Dataplane[END_REF].
Measurement and monitoring
The detailed visibility provided by the centralized logic of the SDN controller makes it possible to build applications that supply network parameters and statistics to other networking services [START_REF] Sundaresan | Broadband Internet Performance: A View from the Gateway[END_REF][START_REF] Kim | Improving network management with software defined networking[END_REF]. These measurement methods can also be used to improve features of the SDN controller itself, such as overload reduction.
Security
The capability of the SDN controller to collect network data and statistics, and to let applications actively program the infrastructure layer, has given rise to works proposing to improve network security using SDN. In this type of application, the SDN controller is the network policy enforcement point [START_REF] Casado | SANE: A Protection Architecture for Enterprise Networks[END_REF] through which malicious traffic is blocked before entering a specific area of the network. In the same category of applications, the work [START_REF] Braga | Lightweight DDoS flooding attack detection using NOX/OpenFlow[END_REF] uses SDN to actively detect and prevent Distributed Denial of Service (DDoS) attacks.
Intuitive classification of SDN applications
As described previously, SDN applications can be analyzed along different categories. In Section 2.5.1 we categorized SDN applications based on the functionality they add to the SDN controller. In this section we analyze these applications based on their contribution to the network control lifecycle. SDN applications consist of modules implemented on top of an SDNC which, thanks to the NBI, configure network resources through the SDNC. This configuration may control the network behavior to offer a network service. Applications which configure the network through an SDNC can be classified into three types. Fig. 2.8 presents this classification. The first type concerns an application configuring a network service which, once initialized and running, will not be modified anymore. A "simple site interconnection" through MultiProtocol Label Switching (MPLS) can be a good example of this type of service. This type of service requires a one-way, top-down NBI which can be implemented with a RESTful solution. The second one concerns an application which, firstly, configures a service and, secondly, monitors it during the service life.
An example of this model is a network monitoring application which monitors the network via the SDNC in order to generate QoS reports. For instance, to assure the QoS of an MPLS network controlled by the SDNC, this application might compute the traffic latency between two network endpoints thanks to metrics received from the SDNC. This model requires a bottom-up communication model at the NBI level so that real-time events can be sent from the controller to the application. Finally, the third type of coordination concerns an application resting on, and usually including, the two previous types and adding specific control treatments executed in the application layer. In this case the application configures the service (type one), listens to real-time network events (type two), and computes some specific network configurations in order to re-configure the underlying network accordingly (type one).
Impact of SDN Applications on Controller design
The variety of SDN applications developed on top of the SDN controller may modify the internal architecture of the controller and its core functions, described in Section 2.4.4. In this section we analyze some of these applications and their contribution to an SDN controller's core architecture. The Aster*x [START_REF] Nikhil | Aster*x: Load-Balancing Web Traffic over Wide-Area Networks[END_REF] and Plug-n-Serve [START_REF] Handigol | Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow[END_REF] projects propose HTTP load-balancing applications that rely on three functional units implemented in the SDN controller: a "Flow Manager", together with units that monitor the state of the network and of the HTTP servers and report it to the Flow Manager. This load-balancing application adds two complementary modules inside the controller, next to the core functions. In [START_REF] Wang | OpenFlow-based Server Load Balancing Gone Wild[END_REF] the authors implemented a series of load-balancing modules in a NOX controller that partition client traffic between multiple servers. The partitioning algorithm implemented in the controller receives the clients' Transmission Control Protocol (TCP) connection requests arriving at the load-balancer switch, and balances the load over the servers by generating wildcard rules. The load-balancing application proposed in this work is implemented inside the controller, alongside the controller's other core modules. Adjusting the set of active network devices in order to reduce data center energy consumption is another type of SDN application. ElasticTree [START_REF] Heller | ElasticTree: Saving Energy in Data Center Networks[END_REF], as one of these applications, proposes a "network-wide power manager" increasing the network performance and fault tolerance while minimizing its power consumption. This system implements three main modules: "Optimize", "Power control" and "Routing". The optimizer finds the minimum-power network subset: it uses the topology and the traffic matrix, and communicates the resulting set of active components to both the power control and routing modules. Power control toggles the power states of the elements. Routing chooses paths for all flows and pushes the routes into the network. In ElasticTree these modules are implemented as a NOX application inside the controller. The application pulls network statistics (flow and port counters), sends them to the Optimizer module and, based on the computed subset, adjusts flow routes and port status through the OpenFlow protocol.
In order to toggle the elements, such as active ports, linecards, or entire switches, different solutions, such as the Simple Network Management Protocol (SNMP) or power over OpenFlow, can be used. In the SDN architecture, the network topology is one of the pieces of information provided to applications. The work [START_REF] Gurbani | Abstracting network state in Software Defined Networks (SDN) for rendezvous services[END_REF] proposes the Application-Layer Traffic Optimization (ALTO) protocol as the topology manager component of this architecture. In this work the authors propose this protocol to provide an abstract view of the network to the applications which, based on this information, can optimize their decisions related to service rendezvous. The ALTO protocol provides the network topology while hiding its internal details or policies. The integration of the ALTO protocol into the SDN architecture introduces an ALTO server inside the SDN controller, through which the controller abstracts the information concerning the routing costs between network nodes. This information is sent to SDN applications in the form of ALTO maps. These maps are used in different types of applications, such as data centers, Content Distribution Networks (CDNs), and peer-to-peer applications.
Network Function Virtualization, an approach to service orchestration
Network Function Virtualization (NFV) is an approach to virtualize and orchestrate network functions, traditionally carried out on dedicated hardware, on Commercial Off-The-Shelf (COTS) hardware platforms. This is an important aspect of SDN particularly studied by service providers, who see in it a solution to better adjust their investment to the needs of their customers. The main advantage of using NFV to deploy and manage Virtual Network Functions (VNFs) is that the Time To Market (TTM) of an NFV-based service is shorter than that of a legacy service, thanks to the standard hardware platform used in this technology. The second advantage of NFV is a lower Capital Expenditure (CapEx), since standard hardware platforms are usually cheaper than the dedicated hardware used by legacy services. This approach, however, raises certain issues. Firstly, in a service operator network, there is no longer a single central (data-center-like) network to manage, but several networks deployed with different technologies, both physical and virtual. At first glance this seems contrary to one of the primary objectives of SDN: the simplification of network operations. The second problem is the complexity that the diversity of NFV architecture elements brings to the service management system. In order to create and manage a service, several VNFs have to be created. These VNFs are each configured by an Element Management System (EMS), and their lifecycle is managed through the Virtual Network Function Manager (VNFM). All VNFs are deployed within an infrastructure managed by the Virtual Infrastructure Manager (VIM). For the sake of simplicity, we do not mention the license management systems proposed by VNF vendors to manage the licensing of their products. In order to manage a service, all the aforementioned systems have to be coordinated by the Orchestrator.
Chapter 3 SDN-based Outsourcing Of A Network Service
In this chapter we present MPLS networks, their control plane and their data plane. Then, we study the processes and the parameters necessary to configure a VPN network. In the second part, we study the deployment of this type of network using SDN.
For this analysis, we first analyze the management of the VPN network with non-OpenFlow controllers, such as OpenContrail. Then, we analyze the deployment of the VPN network with one of the most developed OpenFlow-enabled controllers: OpenDaylight.
Introduction to MPLS networks
MPLS [START_REF] Rosen | Multiprotocol Label Switching Architecture, RFC 3031[END_REF] technology supports the separation of traffic flows to create VPNs. It allows the majority of packets to be forwarded at Layer 2 rather than Layer 3 of the service provider network. In an MPLS network, the label determines the route that a packet will follow. The label is inserted between the Layer 2 and Layer 3 headers of the packet. A label is a 32-bit word containing several fields:
- Label: 20 bits
- Time-To-Live (TTL): 8 bits
- CoS/EXP: specifies the Class of Service used for QoS, 3 bits
- BoS: indicates whether the label is the last one in the label stack (BoS = 1), 1 bit
MPLS data plane
The path taken by the MPLS packet is called a Label Switched Path (LSP). MPLS technology is used by providers to improve their QoS by defining LSPs capable of satisfying Service Level Agreements (SLAs) in terms of traffic latency, jitter and packet loss. In general, a router of the MPLS network is called a Label Switch Router (LSR).
- The Customer Edge (CE) equipment is the gateway from the customer's LAN to the core network of the service provider.
- The Provider Edge (PE) equipment is the entry point to the core network. The PE labels packets, classifies them and sends them onto an LSP. Each PE can be an Ingress or an Egress LSR. We discussed earlier the way this device injects or removes the label of the packet.
- The P routers are the core routers of an MPLS network that switch MPLS packets. These devices are Transit LSRs, whose operation was discussed earlier.
MPLS control plane
Each PE can be connected to one or several client sites (Customer Edges (CEs)), cf. Fig. 3.3. In order to isolate PE-CE traffic and to separate routing tables within the PE, an instance of Virtual Routing and Forwarding (VRF) is instantiated for each site; this instance is associated with the interface of the router connected to the CE. The routes that the PE receives from the CE are recorded in the appropriate VRF routing table. These routes can be propagated by the Exterior BGP (eBGP) [START_REF] Rekhter | A Border Gateway Protocol 4 (BGP-4), RFC 4271[END_REF] or Open Shortest Path First (OSPF) [START_REF] Moy | OSPF Version 2, RFC 1247[END_REF] protocols. The PE distributes the VPN information via Multiprotocol BGP (MP-BGP) [START_REF] Bates | Multiprotocol Extensions for BGP-4, RFC 4760[END_REF] to the other PEs within the MPLS network. It also installs the Interior Gateway Protocol (IGP) routes learned from the MPLS backbone in its global routing table. We illustrate this configuration by joining one of customer_1's sites (Site D of Figure 2) to its VPN. We assume that MP-BGP is already configured on PE4 and that the MPLS backbone IGP is already running on this router. To start the configuration, the service provider creates a dedicated VRF, called customer_1. He adds the Route Distinguisher (RD) value to this VRF; we use for this example RD = 65000:100. To allow that VRF to distribute and learn the routes of this VPN, the Route Target (RT) assigned to this customer (65000:100) is configured on the VRF. He then associates the physical interface connected to CE4 with the newly created VRF. Finally, a routing protocol (eBGP, OSPF, etc.) is configured between the VRF and CE4.
This protocol allows the Site D network prefix to be learned, information that will be used by PE4 to send the MP-BGP update to the other PEs. We discussed this process earlier. Upon receiving this update, all the sites belonging to this customer are able to communicate with Site D.
MPLS VPN Service Management
In the network of a service provider, the parameters used to configure the MPLS network of a client are neither managed nor configured by this client. In other words, for the sake of security, the customer has no right to configure the PEs connected to its sites or to modify the parameters of its service. For example, if a client A modified the configuration of its VRF by supplying the RTs used for another VPN (that of client B), it could overlap its VPN with that of client B and insert itself into this client's network. On the other hand, a client can configure the elements of its sites, for example the addressing plan of its Local Area Network (LAN), and exchange the parameters of its service, e.g. the service classes (Class of Service (CoS)). Table 1 summarizes the parameters of an MPLS VPN service that can be modified by the service provider and by its client.

Table 1: MPLS VPN service parameters and who may modify them.
  MPLS VPN Parameter       | Service Provider | Service Client
  LAN IP address           |        ✗         |       ✓
  RT                       |        ✓         |       ✗
  RD                       |        ✓         |       ✗
  Autonomous System (AS)   |        ✓         |       ✗
  VRF name                 |        ✓         |       ✗
  Routing protocols        |        ✓         |       ✗
  VPN Identifier (ID)      |        ✓         |       ✗

SDN-based MPLS
Decoupling the control plane from the forwarding plane of an OpenFlow-based MPLS network makes it possible to centralize all routing and label distribution protocols (i.e. Border Gateway Protocol (BGP), LDP, etc.) in a logically centralized SDNC. In this architecture the forwarding elements implement only the three MPLS actions needed to establish an LSP. However, this architecture is not the only one proposed to deploy SDN-based MPLS. MPLS naturally decouples the service (i.e. IP unicast) from the transport by LSPs [START_REF] Szarkowicz | MPLS in the SDN Era[END_REF]. This decoupling is achieved by encoding instructions (i.e. MPLS labels) in packet headers. In [START_REF] Szarkowicz | MPLS in the SDN Era[END_REF] the authors propose to use MPLS as a "key enabler" to deploy SDN. In this work, to achieve data center connectivity, the authors propose to use the OpenContrail controller, which establishes overlay networks between Virtual Machines (VMs) based on BGP/MPLS protocols.
OpenContrail Solution
OpenContrail [START_REF] Singla | Day One: Understanding OpenContrail Architecture[END_REF] is an open source controller built on the BGP and MPLS service architecture. It decouples the overlay network from the underlay, and the control plane from the forwarding plane, by centralizing network policy management. It relies on:
- Control nodes, which propagate the low-level model to and from network elements.
- Analytics nodes, which capture real-time data from network elements, abstract it, and present it in a form suitable for applications to consume.
OpenDaylight, on the other hand, is an OpenFlow-enabled controller whose core functions include the following modules:
- Topology Manager: handles information about the network topology. At boot time, it builds the topology of the network based on the notifications coming from the switches. This topology can be updated according to notifications coming from other modules like the Device Manager and the Switch Manager.
- Statistics Manager: sends statistics requests to the resources (switches), collects statistics and stores them in a database. This component implements an API to retrieve information like meters, tables, flows, etc.
- Forwarding Rules Manager: manages forwarding rules, resolves conflicts and validates rules.
This module communicates with the equipment via the SBI. It deploys the new rules in the switches.
- Switch Manager: provides information on nodes (network equipment) and connectors (ports). When the controller discovers a new device, it stores its parameters in this module. The latter provides an API for retrieving information about nodes and discovered links.
- Host Tracker: provides information on end devices. This information can be the switch type, port type, network address, etc. To retrieve this information, the Host Tracker uses ARP. The database of this module can also be enriched manually via the northbound API.
- Inventory Manager: retrieves information about the switches and their ports to keep its database up to date.
These modules provide APIs at the NBI level allowing the controller to be programmed to install flows. Using these APIs, the modules implemented in the application layer are able to control the behavior of each piece of equipment separately. Programming an OpenFlow switch consists in providing tuples of two elements: a match and an action. Using this tuple, for each incoming packet, the controller can decide whether the packet should be processed and, if so, which action should be applied to it. This programming capacity gives rise to a rich API able to manipulate almost every packet type, including MPLS packets.
OpenDaylight native MPLS API
OpenDaylight proposes native APIs to perform the three MPLS actions (PUSH, POP and SWAP) that each LSR may apply to a packet. Using these APIs, the NBI application may install flows on the Ingress LSR that push a tag onto packets entering the MPLS network. It may install flows on Transit LSRs that swap tags and route the packet along the LSP. It may finally install a flow on the Egress LSR that pops the tag and sends the packet to its final destination. In order to program the behavior of the underlying network via this native API, the application needs a detailed view of the network and its topology, and control over the MPLS labels it uses. Table 3.2 summarizes the parameters that an application may control using the OpenDaylight native API.
OpenDaylight VPN Service project
Apart from the OpenDaylight core functions, additional modules can be developed in this controller. In order to deploy a specific service, these modules benefit from the information provided by the core functions. The VPN Service project and its interfaces are rich enough to deploy a VPN service via OpenDaylight. Nevertheless, in order to create a sufficiently complex VPN service, the user must manage the information concerning the service, its sites and its equipment. Table 3.3 summarizes the information that a user should manage using this project. As shown in this table, the amount of data to manage (information about its BGP routers (local AS number and identifier), its BGP neighbors (AS number and IP address), its VPN (VPN ID, RD and RT), etc.) can grow very quickly. This large amount of information can make the service management more complex and reduce the QoS. It is important to note that the SDN controller of an operator manages a set of services on different network equipment shared between several clients. That means that, for the sake of security, most of the listed information will not be made available to the customer.
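The OpenDaylight native API mentioned above is exposed over REST and its exact form depends on the controller release. To keep the illustration in Python and controller-agnostic, the sketch below expresses the same ingress PUSH and egress POP operations as OpenFlow 1.3 flow entries built with the Ryu parser used earlier (a transit SWAP would combine a set-field with an output action). It is a sketch of the flow entries an ingress and an egress LSR would receive, not the OpenDaylight API itself; the label value and port numbers are illustrative.

```python
from ryu.ofproto import ether

MPLS_ETHERTYPE = ether.ETH_TYPE_MPLS   # 0x8847
IPV4_ETHERTYPE = ether.ETH_TYPE_IP     # 0x0800


def install_ingress_push(dp, label, out_port):
    """Ingress LSR: push an MPLS label on IPv4 traffic and send it into the LSP."""
    parser, ofp = dp.ofproto_parser, dp.ofproto
    match = parser.OFPMatch(eth_type=IPV4_ETHERTYPE)
    actions = [parser.OFPActionPushMpls(MPLS_ETHERTYPE),
               parser.OFPActionSetField(mpls_label=label),
               parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=20,
                                  match=match, instructions=inst))


def install_egress_pop(dp, label, out_port):
    """Egress LSR: pop the label and deliver the inner IPv4 packet."""
    parser, ofp = dp.ofproto_parser, dp.ofproto
    match = parser.OFPMatch(eth_type=MPLS_ETHERTYPE, mpls_label=label)
    actions = [parser.OFPActionPopMpls(IPV4_ETHERTYPE),
               parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=20,
                                  match=match, instructions=inst))
```

Whatever the controller, the application driving such calls has to know which label values and which ports to use on each LSR, which is precisely the detailed view and label control discussed above.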
Outsourcing problematics
Decoupling the control plane and the data plane of MPLS networks and outsourcing the control plane into a controller brings several benefits in terms of service management, service agility and QoS [START_REF] Ali | MPLS-TE and MPLS VPNS with Openflow[END_REF][START_REF] Das | MPLS with a simple OPEN control plane[END_REF]. The centralized control layer offers a service management interface available to the customer. Nevertheless, this outsourcing and openness can create several challenges. The MPLS backbone is an environment shared among the customers of an operator. To deploy a VPN network, as discussed above, the operator configures a set of devices located in the core and at the edge of the network. This equipment mostly provides several services to customers in parallel. The latter use the VPN connection as a reliable means of exchanging their confidential data. Outsourcing the control plane to an SDNC brings a lot of visibility on the traffic exchanged within the network. It is through this controller that a customer can create an on-demand service and manage this service dynamically. Tables 3.2 and 3.3 present in detail the information sent over the NBI to deploy a VPN service. These NBIs are the ones proposed by the two solutions of Sections 3.2.2.2 and 3.2.2.3. The granularity of this information gives the customer more freedom in the creation and management of his service. However, beyond this freedom, a customer having access to the NBI can modify not only the parameters of his own service (i.e. VPN) but also the parameters concerning the services of other customers. In order to control the customers' access to the services managed by the controller, while maintaining service management agility, we propose to introduce a service management framework on top of the SDNC. From a bottom-up perspective, this framework provides an NBI abstracting all the rich SDNC functions and control complexities discussed in Section 2.5.3. We strengthen this framework by addressing the question of the client's access to the managed resources and services. Indeed, this framework must be able to provide an NBI of variable granularity, through which the customer is able to manage all three types of services discussed in Section 2.5.2:
- Type-1 applications: the service abstraction model brought by the framework's NBI allows the customer-side application to configure a service with a minimum of information communicated between the application and the framework. The restricted access provided by the framework prevents unintentional or intentional data leaking and service misconfiguration.
- Type-2 applications: on the southern side, internal blocks of the framework receive incoming network events directly from the resources, or indirectly through the SDNC. On the northern side, these blocks open up an API to applications allowing them to subscribe to some metrics used for monitoring purposes. Based on the received network events, these metrics are computed by the framework's internal blocks and sent to the appropriate application.
- Type-3 applications: the controlled access to SDN-based functions ensured by the framework provides not only a service management API, but also a service control API, opened to the customer's application. This fine-grained control API gives customers low-level access to network resources via the framework. Using this API, customers receive the incoming network events sent by devices, based on which they reconfigure the service (a minimal sketch of such a control loop is given after this list).
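To illustrate the retroactive loop that a Type-3 application adds on top of the framework's NBI, the sketch below is a minimal, controller-agnostic control loop: it polls the events the application subscribed to, feeds them to a client-side decision algorithm, and pushes the resulting reconfiguration back through the same interface. Every endpoint, payload field and threshold here is hypothetical; a real framework would define its own paths, authentication and event format.

```python
import json
import time
import urllib.request

# Hypothetical NBI endpoint of the service management framework.
NBI = "http://framework.example.net/nbi"


def next_event():
    """Poll the (hypothetical) event feed the application subscribed to."""
    with urllib.request.urlopen(f"{NBI}/services/vpn-42/events") as resp:
        return json.load(resp)


def decide(event):
    """Client-side algorithm: map an observed event to a new configuration."""
    if event.get("metric") == "latency_ms" and event.get("value", 0) > 50:
        return {"path-preference": "low-latency"}
    return None


def reconfigure(new_config):
    """Push the reconfiguration back through the same NBI (a type-1 style call)."""
    req = urllib.request.Request(f"{NBI}/services/vpn-42",
                                 data=json.dumps(new_config).encode(),
                                 headers={"Content-Type": "application/json"},
                                 method="PUT")
    urllib.request.urlopen(req)


while True:                      # the retroactive loop of a type-3 application
    change = decide(next_event())
    if change:
        reconfigure(change)
    time.sleep(1)
```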
In order to provide a framework able to implement the aforementioned APIs, we need to analyze the service lifecycle in detail. This analysis gives rise to all the internal blocks of the framework and all the steps they may take, from presenting a high-level service and control API to deploying low-level resource allocation and configuration.
Chapter 4 Service lifecycle and Service Data Model
In order to propose different levels of abstraction on top of the service provider's platform, a service orchestrator should be integrated on top of the SDNC. This system allows a third-party actor, called user or customer, to participate in all or part of his network service lifecycle. Nowadays, orchestrating an infrastructure based on SDN technology is one of the SDN challenges. To our knowledge, this problem has been addressed once, by Tail-f, which proposes a partial proprietary solution [START_REF] Chappell | Creating the Programmable Network, The Business case for Netconf/YANG in network devices[END_REF]. In order to reduce the Operation Support System (OSS) cost and also the TTM of services, the Tail-f Network Control System (NCS) [START_REF]Tail-f Network Control System (NCS) -Datasheet[END_REF] introduces an abstraction layer on top of the NBI in order to implement different services, including layer 2 or layer 3 VPNs. It addresses an automated chain from the service request, on the one hand, to the deployment of the device configuration in the network, on the other hand. To transform the informal service model into a formal one, this solution uses the YANG data model [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF]. The service model is mapped into device configurations as a data model transformation. The proposed work does not, however, cover all management phases of the service lifecycle, especially service monitoring, maintenance, etc., and it does not study the possibility of opening up a control interface to a third-party actor. Due to the proprietary nature of this product it is not possible to precisely analyze its internal structure. We present in this chapter a comprehensive solution to this problem by identifying a reasonable set of capabilities of the NBI of the SDN together with the associated API. Our first contribution rests on a global analysis of an abstract model of the operator platform articulated to a generic but simple service lifecycle, described in Section 4.1, which takes into account the view of the user together with that of the operator. The second part of this chapter, Section 4.2, is dedicated to the service data model analysis, where we describe the data model(s) used in each service lifecycle phase, both on the client side and on the operator side.
Service Lifecycle
The ability to manage the lifecycle of a service is essential to implement it in an operator platform. Existing service lifecycle frameworks are oriented towards human-driven services. For example, if a client needs to introduce or change an existing service, the operator has to configure the service manually. This manual configuration may take hours or sometimes days. It may therefore significantly affect the operator's OpEx. It clearly appears that the operator has to rethink its service implementation in order to provision dynamically and to develop on-demand services. There are proposals aiming to enhance on-demand network resource provisioning.
For instance, the GYESERS project [START_REF] Demchenko | GYESERS Project, Service Delivery Framework and Services Lifecycle Management in on-demand services/resources provisioning[END_REF] proposed a complex service lifecycle model for on-demand service provisioning. This model includes five typical stages, namely service request/SLA negotiation, composition/reservation, deployment/registration and synchronization, operation (monitoring), and decommissioning. The main drawback of this model rests on its inherent complexity. We argue that it may be reduced by splitting the global service lifecycle into two complementary and manageable viewpoints: the client view and the operator view. Each of the two views captures only the information useful for the associated actor. The global view may however be obtained by composing the two partial views. In a fully virtualized network based on SDN, the SDNC is administratively managed by the Service Operator. The latter provides a programmable interface, called the NBI, on top of this SDNC, allowing the OSS and Service Client applications to configure on-demand services. In order to analyze the service lifecycle, and to propose a global model of the service lifecycle in this kind of network, an analysis based on the application classification is necessary. In Section 2.5.2 we made an intuitive classification of SDN applications. This classification allows us to analyze the service lifecycle on both the operator and client sides.
Client side Service Lifecycle
Based on the application classification discussed in Section 2.5.2, we analyze the client-side service lifecycle for the three main application types.
Client side Service Lifecycle managed by Type-1 applications
Type-1 applications consist of applications creating a network service using the NBI. This category neither monitors nor modifies the service based on incoming network events. The corresponding lifecycle contains two main steps:
- Service creation: the application specifies the service characteristics it needs, negotiates the associated SLA, available for a limited duration, and finally requests the creation of the service.
- Service retirement: the application retires the service at the end of the negotiated duration. This step marks the end of the service lifetime.
Client side Service Lifecycle managed by Type-2 applications
This category of applications takes advantage of events coming up from the NBI to monitor the service. It is worth noting that this service may be created by the same application which monitors it. This type of application adds an extra step to the client-side service lifecycle, which then contains three main steps:
- Service creation.
- Service monitoring: once created, the service may be used by the client for the negotiated duration. During this time some network and service parameters will be monitored thanks to the events and notifications sent by the SDNC to the application.
- Service retirement.
Client side Service Lifecycle managed by Type-3 applications
In a more complex case, an application may create the service through the NBI, monitor the service through this interface and, based on incoming events, reconfigure the network via the SDNC. This type of control adds a retroactive step to the client-side service lifecycle. This one is illustrated in Fig. 4
Global Client-side Service Lifecycle
A global client-side service lifecycle is illustrated in Fig. 4 It contains the steps already described (service creation and service monitoring), together with the following ones:
- Service modification and update: the management of the operator's network may lead to the update of the service. This update can be issued because of a problem occurring during the consumption of the service or because of a modification of the network infrastructure. This update may be minimal, such as modifying a rule in one of the underlying devices, or it may impact the previous steps, with consequences on the service creation and/or on the service consumption.
- Service retirement: cf. Section 4.1.1.1.
Operator Side Service Lifecycle
The Operator-side service lifecycle is illustrated in Fig. 4.5.
This service lifecycle consists of six main steps:
-Service request: Once a service creation or modification request arrives from the user's service portal (through the NBI), the request manager negotiates the SLA and a high-level service specification in order to implement it. It is worth noting that, before agreeing on the SLA, the operator should ensure that the existing resources can cope with the requested service at the time it will be deployed. In case of unavailability, the request is enqueued.
-Service decomposition and compilation: The high-level service specification is decomposed into elementary service models and compiled into a set of network resource configurations.
-Service configuration: Based on the previous set of network resource configurations, several instances of the corresponding virtual resources are created, initialized and reserved. The requested service can then be implemented on these virtual resources by deploying the network resource configurations generated by the compiler.
-Service maintenance, monitoring and operation: Once a service is implemented, its availability, performance and capacity should be maintained automatically. In parallel, a service log manager monitors the whole service lifecycle.
-Service update: During service exploitation, the network infrastructure may require changes due to execution problems, technical evolution requirements, etc. The resulting update may impact the service in different ways: it may be transparent to the service, or it may require re-initiating part of the first steps of the service lifecycle.
-Service retirement: The service configuration is retired from the infrastructure as soon as a retirement request arrives in the system. Service retirement issued by the operator is outside the scope of this work.
We argue that this operator-side service lifecycle is generic enough to manage the three types of applications discussed in Section 2.5.2.

The global view

The global service lifecycle is the combination of the two service lifecycles explained in Sections 4.1.1 and 4.1.2. Fig. 4.6 illustrates the interactions between them. During the service run time, the client and the operator interact with each other using the NBI. This interface interconnects the different phases of each side, as described below:
-Service Creation and Modification ↔ Service Request, Decomposition, Compilation and Configuration: the client-side service creation and specification phase leads to the first three phases of the operator-side lifecycle, namely service request, decomposition, compilation and configuration.
-Service Monitoring ↔ Service Maintenance, Monitoring and Operation: client-side service monitoring, which is executed during service consumption, runs in parallel with operator-side service maintenance, monitoring and operation.
-Service Update ↔ Service Update: the operator-side service maintenance, monitoring and operation phase may lead to the service update phase in the client-side service lifecycle.
-Service Retirement ↔ Service Retirement: at the end of the service life, the client-side service retirement phase is executed in parallel with the operator-side service retirement.
In the following, we describe the service model(s) used during each step of the operator-side service lifecycle discussed in Section 4.1.2:
-Service Request: to negotiate the service with the customer, the operator relies on the service layer model. This model is the same as the one used in the client-side service lifecycle. For example, for a negotiated VPN service, both the Service Request step and the client-side service lifecycle use the same service layer model. An example of this model is discussed in Section 4.2.1 [START_REF] Moberg | A two-layered data model approach for network services[END_REF].
-Service Decomposition and Compilation: this step receives the service layer model on the one hand and generates device configuration sets on the other hand. Compared to the proposed two-layered approach, this phase is equivalent to the intermediate layer transforming data models.
A service layer model can be a fusion of several service models that, for the sake of simplicity, are merged into a global model. During the decomposition step, this global model is broken down into elementary service models, which are used in the compilation step. They are finally transformed into sets of device models. The transformation of models can be done through two methods:
-Declarative method: a straightforward template that performs a one-to-one mapping of the parameters of a source data model onto a destination one. For example, a service model describing a VPN can be transformed into device configuration sets by a one-to-one mapping of the values given within the service model. In this case it is sufficient that the transformer retrieves the required values from the first model to construct the device model based on a given template.
-Imperative method: an algorithmic expression used to map one data model onto another. Usually the source model contains dynamic parameters, e.g. an unbounded list of interfaces. An example is a VPN service model in which each client site, i.e. each CE, has a varying number of uplinks (1..n) connected to a varying number of PEs (1..m). In this case the transformation is no longer a simple one-to-one mapping, but rather an algorithmic process (here a loop) that creates one device model per service model.
Each method, declarative or imperative, has its own advantages and drawbacks, and neither is clearly superior to the other [START_REF] Pichler | Imperative versus Declarative Process Modeling Languages: An Empirical Investigation[END_REF][START_REF] Fahland | Declarative versus Imperative Process Modeling Languages: The Issue of Understandability[END_REF]. We argue that the choice of the transformation method used in the compilation phase depends on the service model, its related device model and the granularity of the parameters within each model.
-Service configuration: to configure a resource, the device model generated by the transformation method of the previous step (i.e. compilation) is used. If this model is already expressed in the data model known by the network element, no further transformation is needed. Otherwise, another data transformation must be applied to turn the original device model into one compatible with the network element. It is worth noting that, since this transformation is a one-to-one mapping task, it can be done with the declarative method.
-Service maintenance, monitoring and operation: since the maintenance and operation process is performed directly on network elements, the data model used in this phase is the device model. The model used for the monitoring task, however, depends on the nature of the monitored resource. For example, if service engineers and operators need to monitor the status of a resource, they might use a monitoring method such as SNMP or BGP signaling-based monitoring [START_REF] Di Battista | Monitoring the status of MPLS VPN and VPLS based on BGP signaling information[END_REF], the result of which is expressed in the device model. Conversely, if a service customer needs to monitor its service, e.g. the latency between the two endpoints of a VPN connection, the monitoring information sent from the operator to the customer is transformed into a service data model. This bottom-up transformation can again be done with the declarative or the imperative method; a minimal sketch of both methods is given right below.
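The following sketch illustrates the two transformation methods under simplifying assumptions; it is not part of the implemented orchestrator, and the field names (customer_id, ce_list, lan_net, uplinks, pe_id) are hypothetical, merely mimicking the VPN example above.

# Illustrative sketch of the two data model transformation methods.
# All field names are hypothetical and only mimic the VPN example above.

def declarative_transform(service_model, template):
    # One-to-one mapping: each device model field is copied from a service model field.
    return {device_field: service_model[service_field]
            for device_field, service_field in template.items()}

def imperative_transform(service_model):
    # Algorithmic mapping: loop over a dynamic list of CEs and uplinks,
    # producing one device model per uplink.
    device_models = []
    for ce in service_model["ce_list"]:
        for uplink in ce["uplinks"]:          # 1..n uplinks towards 1..m PEs
            device_models.append({
                "pe_id": uplink["pe_id"],
                "pe_interface": uplink["pe_interface"],
                "vrf_name": service_model["customer_id"],
                "ce_network": ce["lan_net"],
            })
    return device_models

service_model = {
    "customer_id": "customer_1",
    "ce_list": [{"ce_id": "ce1", "lan_net": "192.168.1.0/24",
                 "uplinks": [{"pe_id": "PE1", "pe_interface": "ge-0/0/1"}]}],
}
print(declarative_transform(service_model, {"vrf_name": "customer_id"}))
print(imperative_transform(service_model))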
-Service update: updating a service consists in updating network element configurations; hence the data model used in this phase is a device data model. Nevertheless, this update may entail a modification of the service model presented to the customer. In that case, at the end of the update process, a new service model is generated based on the final state of the network elements; this new model results from a bottom-up data transformation performed with the declarative or imperative method.
-Service retirement: decommissioning a service covers all the tasks required to remove service-related configurations from network elements and, possibly, to remove the resources themselves. Device data models are used to remove device configurations, but the service model is also involved: the data model transformation performed in this phase depends entirely on the origin of the retirement request. If the retirement is requested by the customer, the request arrives from the client side and is expressed in a service model.

Conclusion

In this chapter we conducted an analysis of the service lifecycle in an SDN-based ecosystem. This analysis led us to two general service lifecycles: client-side and operator-side. On the first side, we discussed how an application implementing network services using an SDNC can contribute to the client-side service lifecycle. For each application category discussed in Section 2.5.2, we presented a client-side service lifecycle model and discussed the additional steps that each category may add to this model. Finally, a global client-side service lifecycle was presented; this global model contains all the steps needed to deploy each type of application. We also presented a global model of the operator-side service lifecycle. It represents the model that an operator may follow to manage a service from the service negotiation to the service retirement phases. In the second part of this chapter we discussed the data model used by each service lifecycle phase. Through an example, we explained in detail how a data model is transformed from a source model into a destination one. We argue that presenting a service lifecycle model allows, on the one hand, the implementation of a global SDN orchestration model managed by an operator; on the other hand, this model helps us understand the behavior of applications and will thus simplify the specification of the NBI in forthcoming studies. Presenting the data model also describes in detail the behavior of the management system at each service lifecycle step, and permits the definition of the operational blocks and their relations needed to implement the operator-side service lifecycle.

Chapter 5. An SDN-based Framework For Service Provisioning

In this chapter we present a framework involving a minimal set of functions required to support the service lifecycle described above.

Orchestrator-based SDN Framework

Service management processes, as illustrated in the previous example, can be divided into two more generic families: the first one manages all the steps executing service-based tasks, from service negotiation to service configuration and service monitoring, while the second one manages all resource-based operations. Together, these two families manage the whole operator-side service lifecycle (discussed in 4.1.2) and can be represented as the framework illustrated in Fig. 5.3.
The model is composed of two main orchestration layers:
-Service Orchestrator (SO)
-Resource Orchestrator (RO)
The "Service Orchestrator" is dedicated to service-side operations and conforms to the operator-side service lifecycle: service request, service decomposition and compilation, service configuration, service maintenance and monitoring, service update, and service retirement. The "Resource Orchestrator" manages resource-side operations:
-Resource Reservation
-Resource Monitoring
Service Orchestrator (SO): This orchestrator receives service orders and initiates the service lifecycle by decomposing complex, high-level service requests into elementary service models. These models make it possible to derive the type and size of the resources needed to implement the service. The SO requests the reservation of virtual resources from the lower layer and deploys the service configuration on these resources through an SDNC.
Resource Orchestrator (RO): This orchestrator, which manages physical resources, reserves and initiates virtual resources. It maintains and monitors the state of physical resources using the southbound interface.

Internal structure of the Service Orchestrator

As shown in Fig. 5.3, the first orchestrator, the SO, contains five main modules, including:
-SRM
-SDCM
-SCM
The SCM can be considered as the resource driver of the SO. This module is the interface between the orchestrator and the resources. Creating such a module simplifies the processes run at the upper layers of the orchestrator, where the service can be managed independently of the technologies, controllers and protocols implementing and controlling the resources. On the one hand, this module communicates with the different resources through its SBI; on the other hand, it exposes a universal resource model to the other SO modules, specifically to the SDCM. Configuring a service through the SCM involves two tasks: first creating the resource, if it does not exist (cf. arrow 4 of Fig. 5.6), then configuring that resource (cf. arrow 5 of Fig. 5.6). In our example, once the PE3 identifier and the required configuration are received from the SDCM, the SCM first fetches the management IP address of PE3 from its database. Second, if the requested vRouter is missing on the PE, it creates one (cf. arrow 4 of Fig. 5.6). Third, it configures that vRouter to fulfill the requested service. In order to create the required resource (i.e. the vRouter on PE3), the SCM sends a resource creation request to the RO (arrow 4 of Fig. 5.6). Once the virtual resource (the vRouter) is initiated, the RO acknowledges its creation by sending the management IP address of that resource to the SCM. The SCM then only needs to push the generated configuration to that vRouter using its management IP address (arrow 6 of Fig. 5.6). The configuration of the vRouter can be done via different methods. In our example the vRouter is an OpenFlow-enabled device programmable via the NBI of an SDNC. To configure the vRouter, the SCM uses its interface with the SDNC controlling this resource.

SCM - SDN Controller (SDNC) Interface

As explained above, the configuration of part or all of the virtual resources used to fulfill a service can be done through an SDNC. In Section 3.2.2.1 we analyzed the architecture of the OpenDaylight controller, which provides a rich set of modules for programming network elements. This controller exposes on its NBI Representational State Transfer (REST) APIs that allow programming flows through its internal Flow Programmer module; a hedged sketch of such a flow-programming call is given below.
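As an illustration only, the following sketch shows how the SCM could push an MPLS-encapsulation flow to a vRouter through such a REST API. The RESTCONF path and JSON layout follow the conventions of the OpenDaylight OpenFlow plugin but may differ between controller versions, and the controller address, DPID, ports and label value are hypothetical.

# Hedged sketch of a flow-programming call towards OpenDaylight.
# The URL and body follow common RESTCONF conventions of the OpenFlow plugin,
# but the exact paths/fields depend on the controller version; all values are hypothetical.
import json
import requests

CONTROLLER = "http://opendaylight.example:8181"      # hypothetical controller address
DPID = "openflow:1"                                   # DPID reported by the vRouter
URL = (CONTROLLER + "/restconf/config/opendaylight-inventory:nodes/node/"
       + DPID + "/table/0/flow/push-mpls-site-d")

flow = {"flow": [{
    "id": "push-mpls-site-d",
    "table_id": 0,
    "priority": 100,
    "match": {"in-port": "1"},                        # packets arriving from the CE side
    "instructions": {"instruction": [{
        "order": 0,
        "apply-actions": {"action": [
            {"order": 0, "push-mpls-action": {"ethernet-type": 34887}},   # 0x8847
            {"order": 1, "set-field": {"protocol-match-fields": {"mpls-label": 300}}},
            {"order": 2, "output-action": {"output-node-connector": "2"}}  # towards the P router
        ]}
    }]}
}]}

resp = requests.put(URL, data=json.dumps(flow), auth=("admin", "admin"),
                    headers={"Content-Type": "application/json"})
print(resp.status_code)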
These Flow Programmer APIs allow programming the behavior of a switch based on a "match" and "action" tuple. Among the actions that can be applied to a received packet, the Flow Programmer allows pushing, popping and swapping MPLS labels. In order to program the behavior of the initiated vRouter, we propose to use the API provided by the OpenDaylight Flow Programmer. The vRouter's role is to push an MPLS label onto packets going from Site D to the other sites (A and C), and to pop the MPLS labels from incoming packets sent by these remote sites. To program each flow, OpenDaylight requires the Datapath ID (DPID) of the vRouter, the inbound and outbound port numbers, the MPLS labels to be pushed, popped or swapped, and the IP addresses of the next hops to which packets should be sent. In the following we discuss how this information is gathered and sent to the SDNC.
DPID: At resource creation time, the vRouter is programmed to connect automatically to OpenDaylight. The connection establishment between these two entities is described in the OpenFlow specification: the vRouter sends its DPID to the controller in the OpenFlow features reply. This DPID, known by both the SCM and the SDNC, is then used as the unique identifier of this virtual resource.
Port numbers: The inbound and outbound port numbers are in practice the interface numbers of the vRouter created by the SO. To create a virtual resource, the SCM relies on a resource template describing the interface ordering of that resource. This template indicates which interface is used for management purposes, which interface is connected to the CE and which one is connected to the P router inside the MPLS network. It is stored in the SCM database, and the SCM uses it to generate the REST requests sent to the SDNC.
MPLS labels: MPLS labels are the other parameters needed to program the flows inside the vRouter. These labels are generated and managed by the SDCM, which ensures the consistency of labels inside the managed MPLS network. Labels are generated in this layer and sent to the SCM, which uses them in the service deployment step.
Next-hop IP address: When a packet enters the vRouter from the CE side, the MPLS label is pushed onto the packet, which is then sent to the next LSR. Since the MPLS network, including the LSRs, is managed and configured by the SO, the latter has an up-to-date view of the network topology. The IP address of the P router directly connected to the PE is part of the information that can be exported from the SO topology database managed by the SCM.
Once the vRouter is created and configured on PE3, the LSPs of the MPLS network must also be updated. At the end of the vRouter creation step, the customer owns three sites, each connected to a PE hosting a vRouter. The SCM configures on each vRouter (1 and 2) the label to be pushed onto every packet sent to Site D, and vice versa (cf. arrows 6, 8, 10 of Fig. 5.6). It also configures the P router directly connected to PE3 to take into account the label used by vRouter3 (cf. arrow 12 of Fig. 5.6).

Service Monitoring Manager (SMM)

In parallel with the three main modules described previously, the SO contains a monitoring system, called the SMM, which monitors vertically the functioning of all orchestrator modules, from the SRM down to the SCM and its SDNC. This module has two interfaces to the outside of the orchestrator.
On the one hand, it receives upcoming alarms and statistics from the lower orchestrator, RO, and on the other hand it communicates the service statistics to the external application via the NBI. Internal architecture of the Resource Orchestrator As it is mentioned in previous sections, 4.1.2 and 5.2.1.3, during the service configuration phase, the RO will be called to initiate resources required to implement that service. In the service configuration step, if a resource is missing, the SCM will request the RO to initiate the resource on the specified location. The initiated resource can be virtual or physical according to the operator politic and/or negotiated service contract. Existing cloud orchestration systems, such as OpenStack platform [START_REF] Sefraoui | OpenStack: Toward an Open-source Solution for Cloud Computing[END_REF], are good candidates to implement a RO. OpenStack is a modular cloud orchestrator that permits providing and managing a large range of virtual resources, from computing resource, using its Nova module, to L2/L3 LAN connection between the resources, using its Neutron module. The flexibility of this platform, the variety of supported hypervisors and its optimized resource management [START_REF] Huanle | An OpenStack-Based Resource Optimization Scheduling Framework[END_REF] can help us to automatically provision virtual resources, including virtual servers or virtual network units. We continue exploiting the proposed framework based on a RO implemented by the help of OpenStack. In order to implement and manage required resources needed to bring up a network service, an interface will be created between the SO and the RO where the SCM can communicate to the underlying OpenStack platform providing the virtual resource pool. This interface provides a resource management abstraction to SO. The resource request will be passed through various internal blocks of the OpenStack, such as Keystone that controls the access to the platform. As our study is mostly focused on service management, in this proposal we don't describe the functionality of each OpenStack module in details. In general, the internal architecture of the required RO is composed of two main modules, one used to provide virtual resources, a composition of Nova, Cinder, Glance and Swift modules of OpenStack, and another one used to monitor these virtual resources, thanks to the Ceilometer module of OpenStack. If the RO faces an issue it will inform the SO which is consuming the resource. The service run-time lifecycle and performance is monitored by the SO. When it faces an upcoming alarm sent by the RO or a service run-time problem occurring on virtual resources, it will either perform some task to resolve the problem autonomously or send an alarm to the service consumer application (service portal). Creating a virtual resource requires a set of information, software and hardware specifications, such as the version of the firmware installed inside that resource, number of physical interfaces, the capacity of its Random-Access Memory (RAM), and its startup configurations like the IP address of the resource. For example to deploy a vRouter, the SO needs a software image which installs the firmware of this vRouter. Like all computing resources, a virtual one also requires some amount of RAM and Hard Disk space to use. In OpenStack world, these requirements are gathered within a Flavor. Fig. 5.7 illustrates a REST call, sent from the SCM to the RO, requesting the creation of the vRouter. 
In this example, the SCM requests the creation of "vPE1", a vRouter, on a resource called "PE1", using the image "cisco_vrouter" and the flavor "1".

curl -X POST -H "X-Auth-Token:$1" -H "Content-Type: application/json" -d ' \
{ "server": { \
    "name": "vPE1", \
    "imageRef": "cisco_vrouter", \
    "flavorRef": "1", \
    "availability-zone": "SP::PE1", \
    "key_name": "OrchKeyPair" \
} } ' http://resourceorchestrator:8774/v2/admin/servers | python -m json.tool

FIGURE 5.7: REST call allowing to reserve a resource

Framework interfaces

The composition of this framework requires the creation of three interfaces (cf. Fig. 5.3). The first one, the NBI, provides an abstracted service model, enriched with some value-added services, to the third-party application or service portal. The second one, the SBI, interconnects the SO with the resource layer through the SDNC; this interface permits the SCM to configure and control virtual or physical resources. The inter-orchestrator (middle) interface is the third one, introduced for the first time in this framework. It interconnects the SO with the RO(s). The modularity created by this interface makes it possible to implement a distributed orchestration architecture in which one or several SO(s) control and communicate with one or several RO(s).

Implementation

In order to describe the internal architecture of the framework, we implement the different layers of the Service Orchestrator through the MPLS VPN deployment example.

Hardware architecture

Fig. 5.8 shows the physical architecture of our implementation. It is mainly composed of three servers, each implementing one of the main blocks:
-Server1 hosts the Mininet platform [START_REF] De Oliveira | Using Mininet for emulation and prototyping Software-Defined Networks[END_REF]. For the sake of simplicity, and because of limited resources, the infrastructure of our implementation is emulated with Mininet, which provides all the resources, routers and hosts, needed to deploy the target architecture.
-Server2 hosts the OpenDaylight SDN controller [START_REF]The OpenDaylight SDN Platform[END_REF]. For this implementation we use the Carbon version of OpenDaylight. Through its SBI, this controller manages the resources emulated by the Mininet platform using the OpenFlow protocol.
For this implementation we study the case where all three customer sites are already connected to the core network and the physical connection between the CE and PE routers is established.

Software architecture

Given that our analysis focuses on the architecture of the SO, in this implementation we study the case where the required resource already exists. In this case the deployment of the service relies on the SO and its related SDNC. Fig. 5.10 shows the internal architecture and the class diagram of the implemented SO. The orchestrator follows an object-oriented design developed in Python 2.7. In our implementation, each SO layer is developed in a separate package:
Service Request Manager (SRM): contains several classes, including Service_request, Customer and Service_model. On the one hand it implements a REST API used by the customer; on the other hand it manages the services available to the customer and the service requests received from the customer. For this, it uses two other classes, each controlling the resources managed in this layer.
The first one, the Customer class, manages the customer, its subscribed services and the services available to it. The second one, the Service_model class, manages the customer-facing service models; this model is used to represent the service to the customer. In a first step, this module retrieves from the Topology module the list of PEs connected to each remote site. Using the Dijkstra engine integrated in the Topology module, it computes the shortest path from each PE to the other sites. Then, using the labels generated by the Label_manager and the device model templates managed by the Flow_manager, it generates the list of device models to be deployed on the underlying network devices to create the required LSPs. In our implementation we use a device model database containing all the models needed to create an MPLS network on an OpenFlow-based infrastructure. This database is managed by the Flow_manager module; each of its entries is a flow template.

{ "service_type": "mpls_vpn", "customer_id": "customer_1", "properties": { "ce_list": [ { "ce_id": "ce1", "lan_net": "19

Conclusion

In this chapter, we proposed an SDN framework derived from the operator-side service lifecycle discussed in 4.1.2. This framework, structured in a modular way, encapsulates the SDNC with two orchestrators, the SO and the RO, dedicated respectively to the management of services and of resources. The proposed framework is externally bounded by the NBI and the SBI, and internally clarifies the border between the two orchestrators by identifying an internal interface between them, called the middle interface, which provides a virtual resource abstraction layer on top of the RO. Our approach lays the foundation for a rigorous definition of the SDN architecture. It is important to note the difference between the SO and its complementary system, the RO. The RO provisions, maintains and monitors the physical devices hosting several virtual resources; it has no view of the running configuration of each virtual resource. Unlike the RO, the SO manages the internal behavior of each resource and is also responsible for interconnecting several virtual resources to build the required service.

Chapter 6. Bring Your Own Control (BYOC)

The service compilation and monitoring tasks performed in the operator-side service lifecycle are potentially interesting candidates, parts of which may be delegated to the GC. Such an outsourcing moreover leads to enriching, in some ways, the APIs described in Fig. 6.2.

Applying the BYOC concept to Type 1 services

Configuring a service in the SO starts after the service compilation phase of the operator-side service lifecycle (Section 4.1.2). This phase translates the abstract network models into detailed network configurations thanks to the integrated network topology and state databases. In order to apply the BYOC concept, all or part of the service compilation phase may be outsourced to the application side, represented by the GC. For example, the resource configuration set of a requested VPN service, discussed in Section 5.1, can be generated by a GC. This delegation requires an interface between the SDCM and the GC. We suggest enriching the first API with dedicated primitives allowing the GC to carry out the delegated part of the compilation process (cf. the primitive "Outsourced (Service Compilation)" in Fig. 6.3); a rough sketch of such an exchange is given below.
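As a rough illustration of this primitive, the exchange between the SDCM and a GC could resemble the following sketch; the endpoint name and payload fields are hypothetical and are not part of a defined API.

# Hypothetical sketch of the "Outsourced (Service Compilation)" exchange.
# The GC endpoint and the payload fields are illustrative only.
import json
import requests

GC_COMPILE_ENDPOINT = "https://guest-controller.example/compile"   # hypothetical

def outsource_compilation(service_model, exposed_topology):
    # Send the abstract service model to the GC together with the partial
    # topology view the operator agrees to expose, and retrieve the device
    # models to be handed to the SCM for deployment.
    payload = {"service_model": service_model, "topology_view": exposed_topology}
    resp = requests.post(GC_COMPILE_ENDPOINT, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()
    return resp.json()["device_models"]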
It is worth pointing out that the compilation process assigned to the GC could be partial because the operator may want to maintain the confidentiality of sensitive information, as for example the topology of its infrastructure. Applying the BYOC concept to Type 2 services In this case the application may configure a service and monitor it via the NBI. This type involves the compilation phase, discussed earlier, and the monitoring one. Outsourcing the monitoring task from the Controller to the GC, thanks to the BYOC concept, requires an asynchronous API that permits to transfer the real-time network events to GC during the monitoring phase. The control application implemented in the GC observes the network state thanks to the real-time events sent from the Controller. A recent work [START_REF] Aflatoonian | An asynchronous push/pull communication solution for Northbound Interface of SDN based on XMPP[END_REF] proposed an XMPP-based push/pull solution to implement an NBI that permits to communicate the networks real-time events to the application for a monitoring purpose. The outsourced monitoring is located in the second API of Fig. 6.2 and could be expressed by refining some existing primitives of the API (cf. "Outsourced (Service Monitoring Req./Resp.)" of Fig. 6.3). Applying the BYOC concept to Type 3 services This type concerns the application that configures a service (Type 1) and monitors it (Type 2), according to which it may modify the network configuration and re-initiate the partial compilation process (Type 1). The second API of Fig. 6.2 should be sufficient to implement such type of service even if it may necessitate non trivial refinements or extensions in order to be able to collect the information needed by GC. The delegation of the control induced by this kind of GC comes exactly from the computation of the new configuration together with its re-injection in the network, through the SDNC, in order to modify it. Northbound Interface permitting the deployment of a BYOC service Requirements for specification of the NBI The GC is connected to the SO through the NBI. This is where the service operator communicates with the service customer and sometimes couples with the client side applications, orchestrators, and GC(s). In order to accomplish these functionalities certain packages should be implemented. These packages maintain two categories of tasks: 1) Service creation, configuration and modification, and 2) Service monitoring and BYOC service control. Synchronous vs Asynchronous interactions The former uses a synchronous interaction that implements a simple request/reply communication that permits the client-side application to send service requests and modifications, while the latter uses an asynchronous interaction where a notification will be pushed to the subscribed service application. The asynchronous nature of this package makes it useful for sending control messages to the GC. From its SBI, the SO tracks the network events sent by resources. Based on the service profile related to the resources, it sends events to concerning modules implemented either inside the SO or within an external GC. Push/Pull paradigm to structure the interactions The communication between the GC and the SO is based on Push-and-Pull (PaP) algorithm [START_REF] Bhide | Adaptive Push-Pull: Disseminating Dynamic Web Data[END_REF] that is basically used for the HTTP browsing reasons. 
In this proposal we adapt this algorithm to define the communication method of the NBI, which relies on a publish/subscribe messaging paradigm. The GC subscribes to the SO. To manage BYOC-type services, Decision Engine (DE) and Service Dispatcher (SD) modules are implemented within the SO. The DE receives the messages sent by network elements and decides, based on their content, whether to process a message inside the SO or to forward it to the GC. Messages that need to be analyzed by a GC are sent by the DE to the SD, which distributes them to every GC that has subscribed to the corresponding service.
a: The service customer requests a service from the SRM through the service portal. In addition to the service request confirmation, the system sends the subscription details describing the way the service is managed, internal or BYOC.
b: Using the subscription details, the user connects to the SD unit and subscribes to the relevant service.
c: When a control message, e.g. an OpenFlow PacketIn message, is sent to the DE, the DE creates a notification and sends it to the SD.
d: The SD unit pushes the event to all subscribers of that specific service.
The WebSocket initiative [START_REF] Fette | The WebSocket Protocol[END_REF] may eventually be of interest, but this solution is still under development. As mentioned in the work of Franklin and Zdonik [START_REF] Franklin | Data in Your Face: Push Technology in Perspective[END_REF], push systems are in practice implemented with the help of a periodic pull, which may cause a significant load on the network. Alternative solutions such as Asynchronous JavaScript and XML (AJAX) also rely on client-initiated messages. We argue that XMPP [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF] is a good candidate: thanks to its maturity and simplicity, it can cope with all the previous requirements.

XMPP As An Alternative Solution

XMPP [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF], also known as Jabber, was originally developed as an Instant Messaging (IM) protocol by the Jabber community. This protocol, formalized by the IETF, uses an XML streaming technology to exchange XML elements, called stanzas, between any two entities across the network, each identified by a unique Jabber ID (JID). The JID format is composed of three elements, "node@domain/resource", where the "node" can be a username, the "domain" is a server and the "resource" can be a device identifier. The XMPP Standards Foundation extends the capabilities of the protocol with a collection of XMPP Extension Protocols (XEPs) [START_REF]XMPP Extensions[END_REF]; XEP-0072 [START_REF]XEP-0072: SOAP Over XMPP[END_REF], for example, defines methods for transporting SOAP messages over XMPP. Thanks to its flexibility, XMPP is used in a wide range of domains, from simple applications such as instant messaging to larger ones such as remote computing and cloud computing [START_REF] Hornsby | From instant messaging to cloud computing, an XMPP review[END_REF]. The work [START_REF] Wagener | XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services[END_REF] shows how XMPP is a compelling solution for cloud services and how its push mechanism eliminates unnecessary polling. XMPP provides a push mechanism in which nodes can receive messages and notifications as soon as they occur on the server; a minimal sketch of a GC receiving such pushed events is given below.
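As a minimal sketch of this push mechanism, the following Guest Controller skeleton uses the sleekxmpp library; the JIDs, credentials and the way subscriptions are encoded in the message body are assumptions made for illustration.

# Minimal Guest Controller sketch based on sleekxmpp; JIDs, credentials and the
# subscription payload are hypothetical and only illustrate the push mechanism.
import sleekxmpp

class GuestController(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password, dispatcher_jid, service_id):
        super(GuestController, self).__init__(jid, password)
        self.dispatcher_jid = dispatcher_jid
        self.service_id = service_id
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_event)

    def on_start(self, event):
        self.send_presence()
        self.get_roster()
        # Subscribe to the Service Dispatcher (SD) for a given service.
        self.send_message(mto=self.dispatcher_jid,
                          mbody='{"subscribe": "%s"}' % self.service_id)

    def on_event(self, msg):
        # Events (e.g. PacketIn notifications) are pushed by the SD as they occur.
        if msg["type"] in ("chat", "normal"):
            print("event from SO: %s" % msg["body"])

gc = GuestController("gc@operator.example/guest", "secret",
                     "sd@operator.example", "http-filtering-service")
if gc.connect():
    gc.process(block=True)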
This asynchronous nature eliminates the need for periodic pull messages. XMPP thus makes it possible to implement the two crucial packages listed at the beginning of this section: the packages handling 1) service creation, configuration and modification, and 2) service monitoring and BYOC service control. NBI security is also addressed in this proposal. The XMPP specifications define security functions as core parts of the protocol [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF], and all XMPP libraries support these functionalities by default. XMPP provides secure communication through an encrypted channel (Transport Layer Security, TLS) and restricts client access via the Simple Authentication and Security Layer (SASL), so that XMPP servers can be configured to accept only authenticated, encrypted connections. All this makes XMPP well suited for building a secured NBI for deploying a BYOC service.

NBI Data Model

In order to hide the complexity of the service implementation, services can be represented as a simple resource model described in a data modeling language. The YANG data modeling language [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF] is a good candidate for this purpose. A YANG data model can be translated into an equivalent XML syntax called YANG Independent Notation (YIN) which, on the one hand, allows the use of a rich set of XML-based tools and, on the other hand, can easily be transported through the XMPP-based NBI.

Simulation results

In order to evaluate this proposal, we assessed the performance of the XMPP NBI implementation in terms of delay and overhead by comparing a simple GC using an XMPP-based NBI with the same GC using a RESTful one. When a system is meant to operate in near real time, the delay is the first parameter to minimize. In a multi-tenant environment, the system load is another important parameter to take into account. To measure these parameters and compare them in the XMPP case versus the REST one, we implemented a simple GC that uses the NBI to monitor the packets belonging to a specific service, in our case an HTTP filtering service. The underlying network is emulated with the Mininet project [START_REF] De Oliveira | Using Mininet for emulation and prototyping Software-Defined Networks[END_REF]. We implemented two NBIs, an XMPP-based one and a RESTful one, which the GC accessed in parallel. We use the term "event" to describe the control messages sent from the SO to the GC. With the XMPP-based NBI, the event is pushed in near real time thanks to the XMPP protocol. With the RESTful NBI, the event message has to be stored in a temporary buffer until the GC pulls it with a REST request; in this case, to approximate a real-time process and reduce the delay, REST requests are sent at short time intervals. With the XMPP-based NBI, the event is delivered with a delay of 0.28 ms, and the NBI overhead is 530 bytes, the size of the XMPP message needed to carry the event. With the RESTful NBI, the GC has to poll periodically to retrieve this information, with a request/response exchange of at least 293 bytes. In order to reduce the delay, the time interval between requests has to be scaled down, and these periodic request/response messages create a large overhead on the NBI. The corresponding figure illustrates how this overhead grows as the polling interval shrinks.
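As an order-of-magnitude illustration based on these measurements (the polling period used here is a hypothetical value): a RESTful GC polling every 10 ms in order to approach the 0.28 ms push delay exchanges at least 100 × 293 = 29 300 bytes per second even when no event occurs, whereas the XMPP-based NBI only transmits its 530 bytes when an event is actually pushed.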
Conclusion

In the first part of this chapter we introduced BYOC as a new concept providing a convenient framework structuring the openness of SDN on its northbound side. From the lifecycle characterizing the services deployed in an SDN, we derived the parts of services whose control may be delegated by the operator to an external GC through dedicated APIs located in the NBI. We presented the EaYB business model, through which the operator monetizes the openness of its SDN platform thanks to the BYOC concept. Several use cases that could benefit from the BYOC concept were briefly presented. In the second part we determined the basic requirements for specifying an NBI that tightly couples the previously presented SDN framework with the GC. We proposed an XMPP-based NBI conforming to these requirements and allowing the deployment of BYOC services. Despite the numerous advantages of the XMPP-based NBI, its main limitation concerns the transfer of large service descriptions. These are restricted by the "maximum stanza size" value, which limits the size of the XMPP messages processed and accepted by the server. This value can, however, be configured when deploying the XMPP server.

This dissertation set out to investigate the role that SDN plays in various aspects of network service control and management, and to use an SDN-based framework as a service management system. In this final chapter, we review the research contributions of this dissertation and discuss directions for future research.

Contributions

The following are the main research contributions of this dissertation.
-A double-sided service lifecycle and data model (Chapter 4). At the beginning of this work, SDN-based service management was still an open question. Our analysis distinguishes three types of service customers: the first type is the customer who simply creates and consumes a service; the second type is the customer who monitors his service; and the third one is the customer who, using the management interface, receives service parameters based on which he reconfigures or updates that service. Based on this analysis, the client-side service lifecycle can be adapted: we analyzed the phases that each service type may add to the lifecycle. On the other side, the operator-side service lifecycle analysis provides a lifecycle model representing all the phases an operator has to go through to deploy, configure and maintain a service. This double-sided analysis makes it possible to determine the actions that the customer and the operator can each take on a service, the common object between them. We then presented the data model of each lifecycle side based on a two-layered data model approach. In this approach, a service is modeled by two data models, the service model and the device model, and an elementary model, called the transformation, defines how one of these two models is transformed into the other.
-Bring Your Own Control (BYOC) (Chapter 6). We introduced BYOC as a concept allowing the delegation, through the NBI, of the control of all or part of a service to an external controller, called the Guest Controller (GC). The latter may be managed by the same customer requesting and consuming the service, or by a third-party operator. Opening a control interface on top of the SDN platform requires specific support at the NBI level; we discussed the requirements of an NBI allowing the BYOC API to be opened.
Based on these requirements, we proposed the use of XMPP as the protocol for deploying such an API.

Future research

The framework and its multi-level service provisioning interface introduced in this dissertation provide a new service type, called BYOC, for future research. While this work has demonstrated the potential of opening tuned control access to a service through the dynamic IPS service of Chapter 7, many opportunities for extending the scope of this thesis remain. In this section we discuss some of these opportunities.

A detailed study of the theoretical and technical approach of BYOC

Opening up the control interface to a GC through the BYOC concept may create new revenue sources. Indeed, BYOC not only allows the service customer to implement its own control algorithm and fully manage its service, it also allows the operator to monetize the openness of its SDN-based system. We presented the Earn as You Bring (EaYB) business model, which allows the operator to resell to a customer a service controlled by a third-party GC [START_REF] Aflatoonian | BYOC: Bring Your Own Control a new concept to monetize SDN's openness[END_REF]. Opening the control platform and integrating an external controller in a service production chain may, however, create security and complexity problems. One of the fundamental issues concerns the impact of the BYOC concept on the performance of the network controller. In fact, externalizing the control engine of a service to a GC may introduce a significant delay in the decision step of the controller, a delay that directly affects the QoS. The second issue concerns the confidentiality of the information made available to the GC. By opening its control interface, the operator provides the GC with information that may be confidential. To avoid this type of security problem, a data access control mechanism must be put in place, through which the operator controls all the data exchanged between the controller and the GC while preserving the flexibility of the BYOC model [START_REF] Jiang | A Secure Multi-Tenant Framework for SDN[END_REF]. The analysis of the advantages of the BYOC model, and of the complexity and security issues it may bring to the service management process, can be the subject of future work. This analysis requires a more sophisticated study of the concept, of the potential business models it can introduce (e.g. EaYB), of the methods and protocols used to implement the northbound interface and to control access to the resources exposed to the GC, and of the real impact of this type of service on service performance.

BYOC as a key enabler of flexible NFV service chaining

An NFV Service Chain (SC) defines a set of Service Functions (SFs) and the order in which downlink and uplink traffic should traverse them. Chaining network elements to create a service is not a new subject: legacy network services are made of several network functions hardwired back-to-back. Such solutions, however, remain difficult to deploy and expensive to change. As software-centric networking technologies such as SDN and NFV brought the promise of programmability and flexibility to the network, flexible service chaining became one of the academic challenges. Flexible service chaining consists in choosing the relevant SC through the analysis of traffic.
Several initiatives propose an architecture for the creation of Service Function Chains (SFC) [START_REF] Onf | L4-L7 Service Function Chaining Solution Architecture[END_REF][START_REF] Halpern | Service Function Chaining (SFC) Architecture[END_REF][START_REF]ETSI. Network Functions Virtualisation (NFV); Architectural Framework. TS ETSI GS NFV 002[END_REF]. Among these solutions, the IETF [START_REF] Halpern | Service Function Chaining (SFC) Architecture[END_REF] and the ONF [START_REF] Onf | L4-L7 Service Function Chaining Solution Architecture[END_REF] propose to use a traffic classifier at the ingress point of the chain.

BYOC as a key concept leading to 5G dynamic network slicing

Fifth-generation (5G) networks need to support new demands from a wide variety of service groups, from e-health to broadcast services [START_REF]5G white paper[END_REF]. In order to cover all these domains, 5G networks need to support diverse requirements in terms of network availability, throughput, capacity and latency [START_REF] Salah | 5g service requirements and operational use cases: Analysis and metis ii vision[END_REF]. To deliver services to such a wide range of domains and to meet these various requirements, network slicing has been introduced in 5G networks [START_REF] Ngmn Alliance | Description of network slicing concept[END_REF][START_REF] Galis | Autonomic Slice Networking-Requirements and Reference Model[END_REF][START_REF] Jiang | Network slicing management & prioritization in 5G mobile systems[END_REF]. Network slicing allows operators to establish different capabilities for each service group and to serve multiple tenants in parallel. SDN will play an important role in the shift to dynamic network slicing [START_REF] Ordonez-Lucena | Network Slicing for 5G with SDN/NFV: Concepts, Architectures, and Challenges[END_REF][110][START_REF] Hakiri | Leveraging SDN for the 5G networks: trends, prospects and challenges[END_REF]. The decoupling of the control and forwarding planes leads to the separation of software from hardware, a concept that allows the infrastructure to be shared between different tenants, each using one or several slices of the network. In [112], the "dynamic programmability and control" brought by SDN is presented as one of the key principles guiding dynamic network slicing. In this work the authors argue that "the dynamic programming of network slices can be accomplished either by custom programs or within an automation framework driven by analytics and machine learning." Applying the BYOC concept to 5G networks leads to externalizing the control of one or several slices to a GC owned or managed by a customer, an Over-The-Top (OTT) provider, or an OSS. We argue that this openness is fully in line with the dynamic programmability and control principle of 5G networks presented in [112]. The innovative algorithms implemented within the GC controlling a slice of the network enable promising value-added services and business models. However, this externalization raises management and orchestration issues, as discussed in [START_REF] Ordonez-Lucena | Network Slicing for 5G with SDN/NFV: Concepts, Architectures, and Challenges[END_REF].
Abstract

Over the past decades, Service Providers (SPs) have gone through several generations of technologies redefining networks and requiring new business models. The ongoing network transformation brings the opportunity for service innovation while reducing costs and mitigating vendor lock-in. Digitalization and recent virtualization are changing service management methods: traditional network services are shifting towards new on-demand network services, which allow customers to deploy and manage their services independently and optimally through a well-defined interface opened onto the SP's platform. To offer this freedom to its customers, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that this platform can be provided by Software-Defined Networking (SDN) technology. We first characterize the perimeter of this class of new services. We identify the minimal management constraints that such services should meet, and we integrate them into an abstract model structuring their lifecycle. This lifecycle involves two loosely coupled views, one specific to the customer and the other to the SP. This double-sided service lifecycle is finally refined with a data model completing each of its steps. The SDN architecture does not support all the stages of this lifecycle. We extend it through an original Framework allowing the management of all the steps identified in the lifecycle. This Framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface. Its implementation requires an encapsulation of the SDN controller. The example of the MPLS VPN serves as a guideline to illustrate our approach. A PoC based on the OpenDaylight controller targeting the main parts of the Framework is proposed. We propose to leverage our Framework by introducing a new and original control model called BYOC (Bring Your Own Control), which formalizes, according to various modalities, the capability of outsourcing an on-demand service by delegating part of its control to an external third party. An outsourced on-demand service is divided into a customer part and an SP part. The latter exposes to the former APIs which allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based Northbound Interface (NBI) allowing a secured, BYOC-enabled API to be opened up. The asynchronous nature of this protocol, together with its integrated security functions, eases the outsourcing of control in a multi-tenant SDN framework. We illustrate the feasibility of our approach through a BYOC-based Intrusion Prevention System (IPS) service example.
Keywords: Software Defined Networking, Northbound Interface, API, Bring Your Own Control, Outsourcing, Multi-tenancy

Over the past decades, Service Providers (SPs) have had to manage several generations of technologies redefining networks and requiring new business models. The financial balance of an SP depends mainly on the capabilities of its network, which is valued for its reliability, its availability and its ability to deliver new services. The growth in network access demands leads service providers to look for cost-effective ways of answering them while reducing network complexity and cost and accelerating service innovation. An operator's network is built on carefully developed, tested and configured equipment. Because of the critical stakes attached to this network, operators limit its modification as much as possible. Hardware elements, protocols and services require several years of standardization before being integrated into equipment by vendors. This hardware lock-in reduces the ability of service providers to innovate, integrate and develop new services. Network transformation offers the possibility of service innovation while reducing costs and mitigating the restrictions imposed by equipment vendors. Transformation means that the exploitation of network capabilities can be optimized through the power of applications, ultimately giving the service provider's network the dimension of a digital service delivery platform. The recent emergence of Software Defined Networking (SDN) technology, together with the Network Function Virtualisation (NFV) model, makes it possible to envisage an acceleration of this network transformation. The promise of these approaches lies in network flexibility and agility while creating cost-effective solutions. The SDN concept introduces the possibility of decoupling the control and forwarding functions of network equipment by placing the former in a central unit called the controller. This separation makes it possible to control the network from a centralized application layer, which simplifies network control and management tasks. Moreover, the programmability of the controller accelerates the transformation of service providers' networks. The proposed framework offers several levels of abstraction to the customer, each providing part of the capabilities needed for an on-demand service. Decoupling the control plane and the data plane of MPLS networks and locating the former in a controller brings several advantages in terms of service management, service agility and QoS control. The centralized control layer offers a service management interface available to the customer.
Nevertheless, this localization and this openness can create several challenges. The MPLS backbone is an environment shared between the customers of an operator. To deploy a VPN, the operator configures a set of devices located in the core and at the edge of the network. These devices thus provide several services in parallel to customers who use the VPN connection as a means of reliably exchanging their confidential data. Outsourcing the control plane to an SDN controller (SDNC) brings a great deal of visibility into the traffic exchanged within the network. It is through the northbound interface (NBI) of this controller that a customer can create an on-demand service and manage it dynamically. The granularity of this information gives the customer more freedom in the creation and management of its service. The client-side service lifecycle managed by this type of application contains two main steps. Service management processes can be divided into two families: the first manages all service-related steps, from service negotiation to service configuration and monitoring, and the second manages all resource-based operations; together, these two families manage the whole operator-side service lifecycle. This framework is composed of two main orchestration layers:
-Service Orchestrator (SO)
-Resource Orchestrator (RO)
The "Service Orchestrator" is dedicated to service-side operations and conforms to the operator-side service lifecycle:
-Service request
-Service decomposition and compilation
-Service configuration
-Service maintenance and monitoring
-Service update
-Service retirement
This orchestrator receives service orders and initiates the service lifecycle by decomposing complex, high-level service requests into elementary service models.

FIGURE 2.4: Simplified view of the SDN architecture
FIGURE 2.5: OpenFlow Protocol in practice
FIGURE 2.9: Controllers, their core modules, NBI and applications
FIGURE 2.10: The MANO architecture proposed by ETSI (source [62])
FIGURE 3.1: Label Switch Routers (LSRs)
FIGURE 3.2: Topology of a simple MPLS network
FIGURE 3.5: OpenContrail control plane architecture (source [69])
Each VM contains one or several Virtual Network Interface Cards (vNICs), and each vNIC is connected to a vRouter tap interface. In this architecture, the link connecting the VM to the tap interface is equivalent to the CE-PE link of a VPN service. This interface is dynamically created as soon as the VM is spawned. In the OpenContrail architecture, XMPP performs the same function as MP-BGP in signaling overlay networks. Once a VM is spawned, the vRouter assigns an MPLS label to the tap interface connected to that VM. Next, it advertises the network prefix and the label to the control node using an XMPP Publish Request message. This message, going from the vRouter to the control node, is equivalent to a BGP update from both a semantic and a structural point of view. The control node acts like a Route Reflector (RR) that centralizes route signaling and sends routes from one vRouter to another through an XMPP Update Notification. The proposed OpenContrail architecture and its complementary blocks provide a turnkey solution suitable for public and private clouds. However, this solution mostly covers data-center-oriented use cases based on specific forwarding devices called vRouters. The XMPP-based interface used by the latter creates a "technological dependency" and reduces the openness of the solution, since XMPP is not a common interface usable by other existing SDN controllers.
Configuring and controlling MPLS networks via SDN controllers is one of these challenges. Nowadays, SDN controllers propose to externalize the MPLS control plane inside modules, some of which are implemented within the controller or the application layer. The work of [Ali et al., MPLS-TE and MPLS VPNs with OpenFlow] proposes the implementation of MPLS Traffic Engineering (MPLS-TE) and MPLS-based VPNs using OpenFlow and NOX. The authors discuss how the implementation of the MPLS control plane becomes simple thanks to the consistent and up-to-date topology map of the controller. This externalization is done through network applications implemented on top of the SDN controllers, such as Traffic Engineering (TE), Routing, VPN, Discovery, and Label Distribution. This work is an initial step toward SDN-based MPLS networks, but the internal architecture of the SDN controller and the APIs allowing an MPLS network to be configured are not explained in detail. In order to analyze the internal architecture of controllers supporting the deployment of MPLS networks, and to study the SDN APIs allowing the underlying network to be configured, we focus our study on one of the most developed open-source SDN controllers, OpenDaylight.
MPLS networks in the OpenDaylight controller: with its large development community and its various projects, the OpenDaylight controller is one of the most popular controllers in the academic and open-source SDN world. OpenDaylight is an open-source project under the Linux Foundation based on a microservices architecture. In this architecture, each core function of the controller is a microservice which can be activated or deactivated dynamically. OpenDaylight supports a large number of network protocols beyond OpenFlow, such as NETCONF, OVSDB, BGP, and SNMP. [Figure 3.6: OpenDaylight architecture (source [32]).] [...] maintains the state of the Forwarding Information Base (FIB) that associates routes and next hops for each VRF. This information is sent to the OVS by OpenFlow.
The VPN Service project provides the NBI APIs for deploying an L3 VPN in a Data Center (DC) cloud environment. [Figure 3.8: VPN configuration using the OpenDaylight VPN Service project.]
Studying the service lifecycle and service data model following these two views simplifies the service abstraction design. The first viewpoint allows us to identify the APIs structuring the NBI and shared by both actors (operator and service consumer). Fig. 4.1 illustrates the client-side service lifecycle managed by this type of application, which contains two main steps: - Service creation: the application specifies the service characteristics it needs, negotiates the associated SLA, which will be available for a limited duration, and finally requests the creation of a new service. In the remainder of the text we will mark it [Service creation]. - Service retirement: the application retires the service at the end of the negotiated duration. This step defines the end of the service life. In the remainder of the text we will mark it [Service retirement]. [Figure 4.1: Client-side service lifecycle of Type-1 applications.] Fig. 4.2 illustrates the supplementary step added by this type of application to the client-side service lifecycle. This lifecycle contains three main steps: - Service creation: [Service creation], cf. Section 4.1.1.1. - Service monitoring: the service is monitored thanks to the events and statistics sent from the SDNC to the application. In the remainder of the text we will mark it [Service monitoring]. - Service retirement: [Service retirement], cf. Section 4.1.1.1.
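To make the [Service creation] step concrete, the short sketch below shows how a Type-1 client application could submit an L3 VPN creation request to an OpenDaylight-style RESTCONF NBI. This is only an illustrative assumption of what such a call could look like: the host, endpoint path, payload keys and credentials are invented for the example and are not the documented API of the VPN Service project.

import requests

# Hypothetical RESTCONF endpoint of the controller's VPN Service NBI (assumed, for illustration only).
ODL_URL = "http://controller.example:8181/restconf/config/l3vpn:vpn-instances"

vpn_request = {
    "vpn-instance": [{
        "vpn-instance-name": "customer-a-vpn",          # VRF name
        "route-distinguisher": "65000:100",              # RD
        "vpnTargets": {
            "vpnTarget": [{"vrfRTValue": "65000:100", "vrfRTType": "both"}]  # import/export RT
        },
    }]
}

# [Service creation]: a single synchronous request/response exchange on the NBI.
resp = requests.put(
    ODL_URL + "/vpn-instance/customer-a-vpn",
    json=vpn_request,
    auth=("admin", "admin"),                             # placeholder credentials
    headers={"Content-Type": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print("VPN service created:", resp.status_code)

The [Service retirement] step would follow the same pattern with an HTTP DELETE on the same resource.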
Consequently, the service model to device model transformation is a top-down transformation. Otherwise, if the service retirement is triggered by the service operator, a new service model must be presented to the customer; this requires a bottom-up model transformation performed with one of the methods explained above. [Table 4.1: Service lifecycle phases and their related data models and transformation methods (SM = service model, DM = device model, D = declarative, I = imperative).]
The proposed framework gathers the functions needed to manage any network service conforming to the service lifecycle model presented in the previous chapter. We organize this set of functions into two orchestrators: one dedicated exclusively to the management of resources, the resource orchestrator, and the other grouping the remaining functions, the service orchestrator. The general framework structuring the internal architecture of the SDN platform is presented in Section 5.2 and illustrated with an example. This framework is externally bounded by the NBI and the SBI, and internally clarifies the border between the two orchestrators by identifying an internal interface between them, called the middle interface. [...] The SDNC NBI is then used to program these resources with the newly generated instructions (arrows 6, 10 and 12 of Fig. 5.2). Finally, at the end of the service deployment, the client is informed about the service implementation through the SRM (arrows "Service Creation Resp." of Fig. 5.2). Fig. 5.11 shows the negotiated MPLS VPN service model requested by the customer. In this model the customer requests the creation of a VPN connection between three remote sites, each one connected to a CE (ce1, ce2, and ce3). Fig. 5.12 shows the simplified algorithm implemented within the MPLS_vpn_transformer (an illustrative sketch of this kind of transformation is given after this passage). [Figure 5.12: Simplified algorithm of the implemented MPLS VPN transformer.] Finally, we describe in Section 5.3 the implementation of the main components of the proposed framework based on the OpenDaylight controller and the Mininet platform. In this prototype we study the service data model transformation, discussed in Section 4.2, through a simple MPLS VPN service deployment.
The NBI refers to the software interfaces between the controller and the applications running atop it. These are abstracted through the application layer, consisting of a set of applications and management systems acting upon the network behavior at the top of the SDN stack through the NBI. The centralized nature of this architecture brings large benefits to the network management domain, including third-party access to network programmability. Network applications, on the highest layer of the architecture, achieve the desired network behavior without knowledge of the detailed physical network configuration.
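The following minimal sketch illustrates the kind of top-down transformation performed by the MPLS_vpn_transformer: a negotiated service model listing the customer sites is expanded into one device (VRF) configuration per PE. The attribute names and the RD/RT allocation policy are assumptions chosen for illustration, not the exact models used in the prototype.

# Negotiated service model (service layer): what the customer sees.
service_model = {
    "vpn-name": "customer-a-vpn",
    "as-number": 65000,
    "sites": [
        {"ce": "ce1", "pe": "pe1", "lan-prefix": "10.1.0.0/24"},
        {"ce": "ce2", "pe": "pe2", "lan-prefix": "10.2.0.0/24"},
        {"ce": "ce3", "pe": "pe3", "lan-prefix": "10.3.0.0/24"},
    ],
}

def transform(service, vpn_id=100):
    """Top-down service-to-device transformation: build one VRF configuration per PE."""
    rd = f"{service['as-number']}:{vpn_id}"
    rt = rd  # simplistic policy: a single import/export RT shared by all sites
    device_models = {}
    for site in service["sites"]:
        vrf = device_models.setdefault(site["pe"], {
            "vrf-name": service["vpn-name"],
            "rd": rd,
            "import-rt": [rt],
            "export-rt": [rt],
            "interfaces": [],
        })
        vrf["interfaces"].append({"ce": site["ce"], "lan-prefix": site["lan-prefix"]})
    return device_models

for pe, cfg in transform(service_model).items():
    print(pe, cfg)

The inverse (bottom-up) transformation mentioned above would read such per-PE configurations back and rebuild a single service-layer view for the customer.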
The implementation of the NBI relies on the level of network abstraction to be provided to the application and on the type of control that the application brings to the controller, called the SO in our work. The NBI appears as a natural administrative border between the SDN orchestrator, managed by an operator, and its potential clients residing in the application layer. We argue that giving the operator the capability to master SDN's openness on its northbound side should be largely profitable to both the operator and its clients. We introduce such a capability through the concept of BYOC, Bring Your Own Control, which consists in delegating all or part of the network control or management role to a third-party application called the Guest Controller (GC), owned by an external client. An overall structure of this concept is presented in Fig. 6.1, which shows the logical position of the Bring Your Own Control (BYOC) application in the traditional SDN architecture, spanning part of the Control Layer and part of the Application Layer. Figure 6.4 illustrates the communication between the components of the system in detail.
[Figure 7.6: A model of the Decision Base implemented within the SO. Figure 7.7: Decision Engine task flowchart.]
The service model is a general and simplified model of the service presented to the service customer, while the device model is the technical definition of the device configuration generated from the negotiated service model. The service object, shared between the operator and the customer, is described in the service model. Consequently, the client-side service lifecycle uses the service model, and all phases of that lifecycle are based on this model. The service model goes down the operator-side lifecycle and is transformed into one or several device models. In Section 4.2 we discuss the model used or generated by each operator-side service lifecycle phase. In this section we also discussed the type of transformation each step may perform to convert a service model into a device model. - A service management framework based on the SDN paradigm (Chapter 5): the service lifecycle analysis gives us a tool to determine all the activities an operator should carry out to manage a service. In Chapter 5, based on the operator-side service lifecycle, we propose a framework through which the service model presented to the customer is transformed into device models deployed on resources. The architecture of this framework is based on a double-layered system managing the service lifecycle through two orchestrators: the service orchestrator and the resource orchestrator. The first gathers all the functions allowing the operator to manage a service vertically, and the second manages the resources needed by the first to deploy a service. - Bring Your Own Control (BYOC) service (Chapter 6): the proposed framework gives rise to a system for deploying and managing services and opens an interface on the customers' side. In this chapter we present a new service control model, called Bring Your Own Control (BYOC), that follows the Type 3 application model discussed in Section 2.5.2. [...] the SC, which classifies traffic flows based on policies. This classification makes it possible to assign a path ID to the flow, used to forward it on a specific path called the Service Function Path (SFP).
In the ONF proposal the SDNC has a central role: it sets up SFPs by programming the Service Function Forwarders (SFFs) to steer the flow through the sequence of SF instances. It also locates and programs the flow classifier through the SBI, allowing a flow to be classified. Applying the BYOC concept to the approach proposed by the ONF consists in opening a control interface between the SDNC and a GC that implements all the functions needed to classify the flow and to reconfigure the SFF and the flow classifier based on new flows arriving at the classifier. Delegating the control of the SFC to the customer gives the customer more flexibility, visibility and freedom to create a flexible SFC based on its customized path computation algorithms and its applications' requirements. On the other hand, a BYOC-based SFC allows the Service Provider to lighten the service OpEx.
[...] a natural administrative border between the SDN orchestrator, managed by an operator, and its potential clients residing in the application layer. Giving the operator the capability to master SDN's openness on its northbound side should be largely profitable to both the operator and its clients. We introduce such a capability through the concept of BYOC, Bring Your Own Control, which consists in delegating all or part of the network control and/or management to a third-party application called the Guest Controller (GC), owned by an external client. [...] and which allows the client-side application to send service requests and modifications, while the second uses an asynchronous interaction in which a notification is sent to the subscribed service application. The asynchronous nature of this library makes it useful for sending control messages to the GC. The communication between the GC and the SO [...] The importance of a service provisioning system rests on its NBI, which connects the service portal to the service provisioning system (SO) in an SDN environment. This interface provides an abstraction of the service layer and the essential functions to create, modify and destroy a service and, as described above, it takes into account [...] is based on the Push-and-Pull (PaP) algorithm mainly used in web applications. In this proposal, we adapt this algorithm to determine the communication method of the NBI, which will use the publish/subscribe messaging paradigm. The NBI refers to the software interfaces between the controller and its applications. These are abstracted through the application layer, consisting of a set of applications and management systems acting on the network behavior at the top of the SDN stack through the NBI. The centralized nature of this architecture brings great benefits to the network management domain. Network applications, on the upper layer of the architecture, achieve the desired network behavior without knowing the detailed configuration of the physical network. The implementation of the NBI relies on the level of network abstraction to be provided to the application and on the type of control that the application brings to the controller, called the SO in our work. The NBI appears as [...] the outsourced control units called GCs. This interface is a shared access point between different clients, each controlling specific services with a subscription to certain events and notifications.
It is therefore important that this shared interface implement an isolated environment providing multi-tenant access. This access should be controlled using an integrated authentication and authorization system. In our work, we introduce an NBI based on the XMPP protocol. This protocol was originally developed by the community as an instant messaging (IM) protocol. It uses a streaming technology to exchange XML elements, called stanzas, between two entities of the network, each identified by a unique JID identifier. The main reason for selecting this protocol to implement the NBI of the service provisioning system rests on its asynchronous interaction model which, thanks to its built-in push system, allows a BYOC service to be implemented.
[...] service lifecycle representing all the phases an operator must go through to deploy, configure and maintain a service. This two-sided analysis makes it possible to determine the actions that each service client and operator can perform on a service, which is the common object between a client and an operator. We then presented the data model of each lifecycle based on a two-layered data model approach. In this approach, a service can be modeled in two data models, service and device, and an elementary model, called the transformation, defines how one of these two models can be transformed into the other. The service model is a general and simplified model of the service presented to the service customer, and the device model is the technical definition of the device configuration generated on the basis of the negotiated service model. The service object shared between the operator and the customer is described in the service model.
In order to define an SDN-based service provisioning framework allowing the control and application layers to be delimited, a service lifecycle analysis had to take place. We organized the service lifecycle analysis according to two points of view: client and operator. The first view concerns the client-side service lifecycle, which deals with the different phases in which a service customer (or client) can be during the service lifecycle. This analysis is based on the classification of applications and services that we made previously. According to this classification, a service customer can use the service management interface to manage three types of services. The first is the case where the customer requests and configures a service. The second type is the customer who monitors its service, and the third is the customer who, using the management interface, receives certain service parameters on the basis of which it reconfigures or updates that service. Based on this analysis, the client-side service lifecycle can be modified, and we analyzed all the phases that each type of service could add to the service lifecycle. On the other hand, the analysis of the operator-side service lifecycle presents a lifecycle model [...] Consequently, the client-side service lifecycle uses the service model and all phases of the lifecycle are based on this model. The service model goes down through the operator-side lifecycle and is transformed into one or several resource models.
The service lifecycle analysis gives us a tool to determine all the activities an operator must perform to manage a service. Based on the operator-side service lifecycle, we propose a framework through which a service model presented to the customer is transformed into resource models deployed on resources. The architecture of this framework rests on a two-layer system managing the service lifecycle through two orchestrators: the service orchestrator and the resource orchestrator. The first gathers all the functions allowing the operator to manage a service vertically, and the second manages the resources needed by the first to deploy a service. The proposed framework gives rise to a system for deploying and managing services. It opens an interface on the clients' side. We present a new service control model, called Bring Your Own Control (BYOC), which follows the Type 3 application model. We introduce BYOC as a concept allowing the control of all or part of a service to be delegated, through the NBI, to an external controller called the Guest Controller (GC). The latter can be managed by the same client requesting and consuming the service or by a third-party operator. Opening a control interface on the northbound side of the SDN platform requires certain specifications at the NBI level. In the remainder of our work we addressed the NBI requirements allowing the BYOC API to be opened. Based on these requirements, we proposed the use of XMPP as the protocol for deploying such an API. The analysis of the advantages of the BYOC concept, and of the complexity and security issues that BYOC may bring to the service management process, can be the subject of future work. This analysis requires a more sophisticated study of the concept, of the potential business model it may introduce (e.g. Earn as You Bring, EaYB), of the methods and protocols used to implement the northbound interface and to control access to the resources exposed to the GC, and of the real impact of this type of service on service performance. Opening a BYOC-type control interface makes it possible to create new service models not only in the SDN domain but also in the NFV and 5G domains.
- A service management framework based on the SDN paradigm: the service lifecycle analysis gives us a tool to determine all the activities an operator should carry out to manage a service. Based on the operator-side service lifecycle, we propose a framework through which a service model presented to the customer is transformed into device models deployed on resources. This framework is organized into two orchestrator systems, called respectively the Service Orchestrator and the Resource Orchestrator, interconnected by an internal interface. Our approach is illustrated through the analysis of the MPLS VPN service, and a Proof of Concept (POC) of our framework based on the OpenDaylight controller is proposed. - Bring Your Own Control (BYOC) service model: [...] we illustrate our approach through the outsourcing of an Intrusion Prevention System (IPS) service. We exploit the proposed framework by introducing a new and original service control model called Bring Your Own Control (BYOC). It allows the customer or a third-party operator to participate in the service lifecycle following various modalities.
We analyse the characteristics of the interfaces allowing a BYOC service to be deployed, and we [...]
[Figure 2.6: OpenFlow switch components (source [23]) — flow tables, group table, ports, datapath, control channel and pipeline, accessed through the OpenFlow protocol.]
[Table 2.1: SDN controllers and their NBI.]
This question has been raised several times and a common conclusion is that northbound APIs are indeed important, but it is too early to define a single standard at this time [38, 39, 40].
[...] "Net Manager" and "Host Manager". The first one, Flow Manager, controls and routes flows based on a specific load-balancing algorithm implemented in this module. It implements the necessary controller core functions and Layer 2 protocols such as the Dynamic Host Configuration Protocol (DHCP), the Address Resolution Protocol (ARP) and the Spanning Tree Protocol (STP). The second module, Net Manager, keeps track of the network topology, link usage and per-link packet latency. The third module, Host Manager, monitors the state of each [...]
[Table 3.1: MPLS VPN configuration parameters. Table 3.3: MPLS VPN configuration parameters accessible via the OpenDaylight VPN Service project — LAN IP address, RT, RD, AS, VRF name, routing protocols, VPN ID, MPLS labels.]
This model contains all the previous steps needed to manage the three types of applications discussed earlier. We introduce to this [...] [Figure: global client-side service lifecycle — service creation, service monitoring, service modification and update, service retirement.]
In REST, each resource is identified by a unique Uniform Resource Locator (URL). Resources are the application's state and functionality, which are represented by a uniform interface used to transfer state between the client and the server. Unlike most web services architectures, it is not necessary to use XML as a data interchange format in REST. The implementation of REST is standard-less and the format of the exchanged information can be XML, JavaScript Object Notation (JSON), [...] The simplicity, performance and scalability of REST are the reasons for its popularity in the SDN controller world. REST is easier to use and is more flexible.
In order to interact with the web services, no expensive tools are required. Compared to our requirements explained in Section 6.2.1, the fundamental limitation of this method rests on the absence of asynchronous capabilities and of secured multi-tenant access management. Traditional web services solutions, such as the Simple Object Access Protocol (SOAP), have previously been used to specify NBIs but were quickly abandoned in favor of the RESTful approach. [...] Comma-Separated Values (CSV), plain text, Rich Site Summary (RSS) or even HyperText Markup Language (HTML); in other words, REST is agnostic with respect to the data format.
Fig. 6.6 shows the overhead of the NBI obtained during this test. The NBI overhead remains constant for the XMPP-based case and varies for the RESTful one. The overhead of the RESTful NBI in the simulated real-time case (less than 1 ms of delay) is about 3 MB. To reduce this charge and achieve the same overhead as the XMPP-based NBI, we need to increase the polling interval up to 200 ms. This time interval has a direct effect on the event transfer delay. [Figure 6.6: NBI overhead (bytes) versus polling interval (ms), comparing REST and XMPP.]
There were several initiatives to define the perimeter of [...] their perimeter, and aiming to define an NBI was almost impossible. The ONF had just started the NBI group activities aiming to define an NBI answering the requirements of most applications. However, this work was far from complete, because defining a standard NBI, that is, an application interface, requires a careful analysis of several implementations and the feedback gained from all those implementations. In order to define a reference SDN-based service provisioning framework allowing the edge between the control and application layers to be defined, a service lifecycle analysis had to take place. First, in Section 4.1, we presented the service lifecycle analysis from two points of view: client and operator. The first view, the client-side service lifecycle, discusses the different phases in which a service customer (or client) can be during the service lifecycle. This analysis is based on the application and service classification that we previously made in Section 2.5.2. According to this classification, a service customer can use the service management interface to manage three types of services. The first one is the case where the customer requests and configures a service. [...] Across the SDN architecture layers, several SDN controllers were in the design and development phase, and the SDN controllers and frameworks that had been developed were each deployed for specific research topics. Some SDN-based services were deployed by the internal functions of the SDN controller and some were controlled by applications developed on top of the controller, programming the network via the controller's NBI. Due to the nature of ongoing projects, and the fact that there was no clear definition of SDN controller core functions and northbound applications, defining the border between these two layers, i.e. the SDN controller and the SDN applications, helping to delimit [...]
Over the last decades, service providers (SPs) have had to manage several generations of technologies redefining networks and requiring new business models.
This continuous evolution of the network offers the SP the opportunity to innovate in new services while reducing costs and limiting its dependence on equipment vendors. The recent emergence of the virtualization paradigm deeply modifies the methods used to manage network services. The latter are evolving towards the integration of an "on-demand" capability, whose particularity is to allow the SP's customers to deploy and manage them autonomously and optimally. To offer such operational flexibility, the SP must be able to rely on a management platform allowing dynamic and programmable control of the network. We show in this thesis that such a platform can be provided thanks to SDN (Software-Defined Networking) technology.
We first propose a characterization of the class of on-demand network services. The weakest management constraints that these services must satisfy are identified and integrated into an abstract model of their lifecycle. This model determines two loosely coupled views, one specific to the client and the other to the SP. This lifecycle is complemented by a data model that specifies each of its steps. The SDN architecture does not support all the steps of the above lifecycle. We introduce an original framework that encapsulates the SDN controller and allows all the steps of the lifecycle to be managed. This framework is organized around a service orchestrator and a resource orchestrator communicating through an internal interface. The example of the MPLS VPN serves as a guiding thread to illustrate our approach. A PoC based on the OpenDaylight controller targeting the main parts of the framework is proposed. In addition to maintaining the agility of service management, we propose to introduce a service management framework beyond the SDNC. This framework provides an overall view of all functions and commands. We strengthen this framework by adding the question of client access to the managed resources and services. Indeed, this framework must be able to provide variable granularity, capable of handling all types of services: - Type 1 applications: the abstract service model provided by the framework's NBI allows the client-side application to configure a service with a minimum of information exchanged between the application and the framework. The restricted access provided by the framework prevents unintentional or intentional data leaks and service misconfiguration. - Type 2 applications: on the southbound side, the internal blocks of the framework receive network events coming directly from the resources, or indirectly via the SDNC. On the northbound side, these blocks open an API to the applications allowing them to subscribe to certain metrics used for monitoring purposes. Based on the events reported by the resources, these metrics are computed by internal blocks of the framework and are sent to the appropriate application. - Type 3 applications: the controlled access to SDN-based functions provided by the framework offers not only a service management API, but also a service control interface open to the client application. The fine-grained control API allows clients to have low-level access to network resources through the framework.
Using this API, clients receive the network events sent by the equipment, from which they reconfigure the service. In order to provide a framework capable of implementing the aforementioned APIs, we must analyze the service lifecycle in detail. This analysis leads to the identification of all the internal blocks of the framework and of their internal articulations, allowing both the presentation of a service and control API and the deployment, allocation and configuration of resources.
Service lifecycle and service data model. In order to reduce the complexity of lifecycle management, we divide the global service lifecycle into two complementary points of view: the client view and the operator view. Each of the two views captures only the information useful to the associated actor. The global view can nevertheless be obtained by composing the two partial views. Based on the classification of applications addressed in our studies, we analyze the client-side service lifecycle for the three main types of applications. Type 1 applications consist of applications creating a network service using the NBI. This category neither monitors nor modifies the service according to network events.
BYOC should clearly make it possible to reduce the processing load of the controller. Indeed, existing SDN architectures and proposals centralize most of the network control and decision logic in a single entity. This entity must bear a significant load by providing a large number of services all deployed within the same entity. Such complexity is clearly a problem that BYOC can help solve by outsourcing part of the control to a third-party application. Preserving the confidentiality of the service client application is another important point brought by BYOC. In fact, centralizing network control in one system and passing all data through this controller can create confidentiality issues that may prevent the end user, which we call the SC, from using the SDNC. Last but not least, BYOC can help the operator substantially refine its SDN-based business model by delegating almost "à la carte" control through dedicated APIs. Such an approach can be exploited intelligently according to the new "Earn as you bring" (EaYB) paradigm that we present and describe below. Indeed, an external client owning a sophisticated proprietary algorithm may want to commercialize the associated specialized processing to other clients via the SDN operator, which could act as a broker of this type of capability. It should be emphasized that these advantages of BYOC may be partly offset by the non-trivial task of verifying the validity of the decisions made by the outsourced intelligence, which must at least comply with the various policies implemented by the operator in the controller. This point, which deserves more investigation, could be the subject of future research. Outsourcing part of the management and control tasks modifies the service lifecycle model. It indeed amounts to transferring to the client side parts of certain tasks initially belonging to the operator.
A careful analysis allows us to identify the compilation and monitoring tasks, carried out in the operator-side service lifecycle, as potentially interesting candidates, parts of which can be delegated to the GC. The GC is connected to the SO through the NBI. This is where the service operator communicates with the service customer and sometimes with the client-side applications, the orchestrators and the GCs. In order to realize these functionalities, certain libraries should be implemented. The latter support two categories of tasks: 1) service creation, configuration and modification, and 2) service monitoring and BYOC service control. The first uses a synchronous interaction that implements a simple request/response communication [...] This aspect is not mentioned in this figure because it falls outside the scope of the service lifecycle.
To formalize the data model on each layer and the transformation allowing one layer to be mapped onto the other, Moberg et al. proposed to use the YANG data model as the formal modeling language. Using unified YANG modeling for both the service and device layers was also previously proposed by Wallin et al. The three elements required to model a service (i.e. the service layer, the device layer and the transformation model) are illustrated in that work, where the authors specify IP VPN services with this approach. At the service layer, the model consists of a formal description of the VPN services presented to the customer, derived from a list of VPN service parameters including the BGP AS number, the VPN name and a list of VPN endpoints (CE and/or PE). At the second layer, the device model is the set of device configurations corresponding to a VPN service; it is defined by all the configurations applied to the PEs connected to the requested endpoints. This information includes the PE IP address, RT, RD, BGP AS number, etc. Finally, for the third element, the transformation template mapping one layer onto the other, the authors propose the use of a declarative model. In their example the template is based on the Extensible Markup Language (XML) and generates a device model according to the service model parameters.
Applying the two-layered model approach to the service lifecycle: the proposed model based on the YANG data modeling language brings dynamicity and agility to the service management system. Its modular aspect makes it possible to reduce the cost of creating and modifying services. In this section we apply this model to the proposed service lifecycle, discussed in Section 4.1. This analysis aims to introduce a model allowing the service lifecycle phases and their respective data models to be formalized. An example of this analysis is presented in Table 4.1 at the end of this section. Applying the two-layered model to the client-side service lifecycle: the client-side service representation is a minimal model containing informal information about the customer's service. All steps of the client-side service lifecycle rely on the negotiated service model, i.e.
the service layer model of the two-layered approach (Section 4.2.1), which is a representation of the service and its components. From the operator's point of view, the data model used by this service lifecycle rests on the service layer model. Applying the two-layered model to the operator-side service lifecycle: the integration of new services and the update of the service delivery platform involve the creation and update of the data models used by the client-side service lifecycle. Contrary to the client side, the operator-side service lifecycle relies on several data models.
Illustrating service deployment example: the operator-side service lifecycle is presented in Section 4.1.2. This model represents all the processes an operator may take into account to manage a service. We introduce a service and resource management platform which encapsulates an SDNC and provides, through other functional modules, the capabilities to implement each step of the service lifecycle presented before. Fig. 5.1 illustrates this platform with the involved modules, together with the specific data required and generated by each module. It shows the diversity of information needed to manage a service automatically. We will detail the different modules of this platform in the next section. We prefer now to illustrate this model by describing the main processes through the example of a VPN service connecting two remote sites of a client connected to physical routers: PE1 and PE2. In MPLS networks, each CE is connected to a VRF instance hosted in a PE. In our example we call these instances (i.e. VRFs) respectively vRouter1 and vRouter2. The first step of the service lifecycle, "Service Creation", gives rise in the nominal case to a call flow whose details are presented in Fig. 5.2. In the first step (arrow 1 of Fig. 5.2) [...]
In the last chapter we proposed a new service implementing BYOC-based services. In Chapter 4 we analyzed the lifecycle of a service from the viewpoints of its two actors, the client and the operator. Dividing the service lifecycle into two parts refines our analysis and helped us to present an SDN framework in Chapter 5, through which a negotiated service model can be vertically implemented. This framework is derived from the operator-side service lifecycle steps and permits not only the implementation and control of a service, but also the management of its lifecycle. The second part of the service lifecycle, the client side, presents all the steps that each application type, presented in Section 2.5.2, may take to deploy, monitor and reconfigure a service through the SDNC. In this chapter we briefly showed how the previously presented framework permits the deployment of a BYOC-type service. We also presented an XMPP-based NBI allowing the interface to be opened to the GC.
IPS Control Plane as a Service — reference architecture: the architecture of the proposed service is based on the Intrusion Prevention System (IPS) architecture divided into two entities. The first one is the IDS-end (Intrusion Detection System end), which is implemented at key points of the network and observes real-time traffic. The second one, called the Security Manager (SM), is a central management system that, thanks to its [...] This database is comparable to the database normally used within firewalls, where there are actions like ACCEPT, REJECT, and DROP.
The difference between these and the DB presented for BYOC lies in the fields "IPProto" and "Action", where we record the type of message, log or alert (in the IPProto field), and the ID of the GC, gcId (in the Action field). The values stored in this database are configured by the network administrator (operator) that manages the entire infrastructure. Service Dispatcher (SD) and NBI: the SD is the module directly accessible by GCs. It identifies GCs using the identifier of each one (gcId), registered in the Action field of the DB. We propose here to use the XMPP protocol to implement the interface between the SD and the GCs, where each endpoint is identified by a JID. Detailed components of the Guest Controller (GC): Fig. 7.8 shows the detailed components of the SM implemented in the GC. As stated previously, the SD sends the log and alert messages arriving from IDS-ends to the appropriate GC. These messages contain specific values (143, 144) in their IPProto field. Upon receiving a message, the SM needs to know the nature of this message, whether it is a log or an alert. For this we propose to implement a Security Proxy (SP). By examining the IPProto field, the SP decides whether a message relates to a log or an alert. Applying the GC decision to the infrastructure: once a decision is made by the GC, it sends a service update message to the SDCM. This decision may update a series of devices. In our example, to block an attacking traffic flow, the decision simply updates the OpenFlow switch installed in front of the IDS-end. This new configuration, deployed on the switch, allows the GC to block the inbound traffic entering the customer's sites (interface I.1 in Figure 7.5). The update message sent from the GC contains a service data model equivalent to the model presented in the service creation phase. Thanks to this homogeneity of models, the BYOC service update becomes transparent for the SO and the update process is carried out through the existing blocks of the SO. Distributed IPS control plane: opening a control interface on the IDS-end equipment through the SO allows the inner modules of the SM to be broken down between several GCs. Fig. 7.9 illustrates this example in detail. In this example, an attack signature database is shared between multiple SMs.
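A minimal sketch of the GC-side logic described above is given below: the Security Proxy classifies incoming messages by their IPProto value, and an alert triggers a service-update request asking the SO to install a drop rule in front of the IDS-end. The IPProto-to-type mapping (143 = log, 144 = alert), the message fields and the SO update endpoint are illustrative assumptions; the text only states that the values 143 and 144 are used.

import json
import requests

# Assumed mapping of the IPProto marker to the message type (not specified in the text).
MSG_TYPES = {143: "log", 144: "alert"}

# Hypothetical SO endpoint receiving BYOC service-update requests.
SO_UPDATE_URL = "http://so.example:8181/byoc/ips/service-update"

def classify(message):
    """Security Proxy role: decide whether a pushed message is a log or an alert."""
    return MSG_TYPES.get(message.get("ip_proto"), "unknown")

def handle_message(message):
    """GC decision logic: archive logs, turn alerts into a blocking service update."""
    if classify(message) != "alert":
        return None  # logs would be stored for later analysis (omitted here)
    update = {  # service update asking the SO to drop the attacker's traffic
        "service-id": message["service_id"],
        "action": "drop",
        "match": {"ipv4-source": message["attacker_ip"]},
        "target-switch": message["ids_end_switch"],
    }
    return requests.post(SO_UPDATE_URL, data=json.dumps(update),
                         headers={"Content-Type": "application/json"}, timeout=5)

if __name__ == "__main__":
    alert = {"ip_proto": 144, "service_id": "ips-42",
             "attacker_ip": "203.0.113.7", "ids_end_switch": "openflow:1"}
    try:
        handle_message(alert)
    except requests.RequestException as exc:
        print("SO not reachable in this offline sketch:", exc)

In a full implementation the messages would arrive asynchronously over the XMPP-based NBI (each GC addressed by its JID) rather than being constructed locally as in this example.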
183,007
[ "1043610" ]
[ "482801", "491313" ]
01758354
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://pastel.hal.science/tel-01758354/file/65301_TANNE_2017_archivage.pdf
Laura De Lorenzis, Claudia Comi, Jose Adachi [...] the opportunity to meet the R&D team whose [...]
Abstract. The failure of a structure is often caused by the propagation of cracks in the initially sound material that composes it. A thorough knowledge of fracture mechanics is essential for the engineer. It makes it possible, in particular, to prevent cracking mechanisms in order to guarantee the integrity of civil structures, or, on the contrary, to develop them, as for instance in the oil industry. From the modeling point of view these problems are similar, complex and difficult. Thus, it is fundamental to be able to predict where and when cracks propagate. This thesis is restricted to the study of brittle and ductile cracks in homogeneous materials under quasi-static loading. We adopt the macroscopic point of view, that is, the crack is a response of the structure to an excessive loading and is characterized by a surface of discontinuity of the displacement field. The most commonly accepted theory for modeling cracks is Griffith's. It predicts crack initiation when the energy release rate equals the material toughness along a pre-established path. This type of criterion requires evaluating the variation of the potential energy of the structure at equilibrium for an increment of crack length. The very essence of Griffith's theory is a competition between the surface energy and the potential energy of the structure. However, this model is not suited to weak notch singularities, i.e. a notch which does not degenerate into a pre-crack. To remedy this shortcoming, critical-stress criteria have been developed for regular geometries. Unfortunately they cannot correctly predict crack initiation, since the stress is infinite at the notch tip. A second limitation of Griffith's theory is the size effect. To illustrate this point, consider a unit-size structure cut by a crack of length a. The critical loading of this structure evolves as 1/√a; consequently the admissible loading becomes infinite when the defect size tends to zero. This makes no physical sense and contradicts experiments. It is known that this limitation stems from the lack of a critical stress (or characteristic length) in the model. To overcome this defect, Dugdale and Barenblatt proposed, in their models, to take into account cohesive stresses on the crack lips in order to eliminate the stress singularity at the notch tip.
More recently, variational phase-field models, also known as gradient damage models [Francfort & Marigo; Bourdin et al.], appeared in the early 2000s. These models overcome the issues related to the crack path and are known to converge to Griffith's model when the regularization parameter tends to 0. Moreover, numerical results show that it is possible to nucleate a crack without a singularity thanks to the presence of a critical stress. Are these phase-field models of fracture able to overcome the limitations of Griffith's model?
Concerning crack paths, phase-field models have proved to be remarkably effective in predicting fracture networks during thermal shocks [Sicsic et al.; Bourdin et al.]. In this thesis, the results obtained show that gradient damage models are effective in predicting mode I crack nucleation and in accounting for the size effect. Naturally these models respect the initiation criterion of Griffith's theory and are extended to hydraulic fracturing, as illustrated in the second part of this thesis. However, they cannot account for ductile fracture as such. A coupling with perfect plasticity models is necessary in order to obtain ductile fracture mechanisms similar to those observed for metals. The manuscript is organized as follows. In the first chapter, a broad introduction is dedicated to the variational approach to fracture, going from Griffith to the modern phase-field approach and recalling its main properties. The second chapter studies crack nucleation in geometries for which no exact solution exists. U- and V-notches show that the critical loading evolves continuously from the critical-stress criterion to the toughness criterion with the strength of the notch singularity. The problem of an elliptical cavity in an elongated or infinite domain is also studied. The third chapter focuses on hydraulic fracturing, taking into account the influence of a perfect fluid on the crack lips. The numerical results show that stimulation by fluid injection into a network of parallel cracks of equal length leads to the propagation of only one of the cracks of the network. It turns out that this configuration respects the principle of least energy. The fourth chapter focuses solely on the perfect plasticity model, going from the classical approach to the variational approach. A numerical implementation using alternate minimization of the energy is described and verified in a simple von Mises case. The last chapter couples gradient damage models with perfect plasticity models. The numerical simulations show that it is possible to obtain brittle or ductile cracks by varying a single parameter only. Moreover, these simulations qualitatively capture the phenomenon of crack nucleation and propagation along shear bands.
Introduction. Structural failure is commonly due to the propagation of fractures in a sound material. A better understanding of defect mechanics is fundamental for engineers to prevent cracks and preserve the integrity of civil structures, or to control them as desired, for instance in the energy industry. From the modeling point of view these problems are similar, complex and still face many challenges. Common issues are determining when and where cracks will propagate. In this work, the study is restricted to brittle and ductile fractures in homogeneous materials for rate-independent evolution problems in continuum mechanics.
We adopt the macroscopic point of view, so that the propagation of a macroscopic fracture represents the response of the structure to a given loading. A fracture à la Griffith is a surface of discontinuity for the displacement field along which the stress vanishes. In this widely used theory, the fracture initiates along an a priori known path when the energy release rate becomes critical; this limit is given by the material toughness. This criterion requires one to quantify the first derivative of the potential energy with respect to the crack length for a structure at equilibrium. Many years of investigations focused on notch tips to predict when the fracture initiates, resulting in a growing body of literature on computed stress intensity factors. Griffith's criterion is in essence a competition between the surface energy and the recoverable bulk energy. Indeed, a crack increment reduces the potential energy of the structure, and this reduction is compensated by the creation of surface energy. However, such a fracture criterion is not appropriate to account for weak singularities, i.e. a notch angle which does not degenerate into a crack. Conversely, many criteria based on a critical stress are adapted to smooth domains, but fail near stress singularities. Indeed, a nucleation criterion based solely on the pointwise maximum stress is unable to handle crack formation at the singularity point, where σ → ∞. A second limitation of Griffith's theory is size effects. To illustrate this, consider a structure of unit size cut by a pre-existing crack of length a. The critical loading evolves as ∼ 1/√a; consequently the maximum admissible loading is unbounded as the defect size goes to zero. Again, this is physically impossible and inconsistent with experimental observations. It is well accepted that this discrepancy is due to the lack of a critical stress (or of a critical length scale) in Griffith's theory. To overcome these issues, Dugdale and Barenblatt, pioneers of cohesive and ductile fracture theory, proposed to remove the stress singularity at the tip by accounting for cohesive stresses on the fracture lips. Recently, variational phase-field models [Bourdin et al.] have been shown to converge to a variational Griffith-like model in the vanishing limit of their regularization parameter. They were conceived to handle the issue of the crack path. Furthermore, it has been observed that they can lead to numerical solutions exhibiting crack nucleation without singularities. Naturally, these models raise some interesting questions: can Griffith's limitations be overcome by these phase-field models? Concerning the crack path, phase-field models have proved accurate in predicting fracture propagation under thermal shocks [Sicsic et al.; Bourdin et al.]. In this dissertation, numerical examples illustrate that Griffith limitations, such as nucleation and size effects, can be overcome by the phase-field models referred to as gradient damage models in Chapter 2. Naturally, these models preserve Griffith's propagation criterion, as shown in the extended models for hydraulic fracturing provided in Chapter 3.
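For reference, the size effect mentioned above can be made explicit with the textbook Griffith relations for a crack of length 2a in an infinite plate under remote tension σ (plane stress). These classical formulas are recalled here for convenience and are not part of the original text:

G = -\frac{\partial \mathcal{P}}{\partial a}\bigg|_{\text{equilibrium}}, \qquad \text{propagation when } G = G_c,
\qquad
G = \frac{K_I^2}{E} = \frac{\pi \sigma^2 a}{E}
\;\;\Longrightarrow\;\;
\sigma_c = \sqrt{\frac{E\, G_c}{\pi a}} \;\propto\; \frac{1}{\sqrt{a}} .

The critical stress σ_c thus diverges as the defect size a vanishes, which is the unphysical scaling discussed above.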
Of course Griffith's theory is unable to deal with ductile fracture, but in Chapter 5 we show that by coupling perfect plasticity with gradient damage models we are able to capture some features of ductile fracture, namely the phenomenology of nucleation and propagation. The dissertation is organized as follows. In Chapter 1, a broad introduction to phase-field models of brittle fracture is given: we start from Griffith, move to the modern phase-field approach, and recall some of its properties. Chapter 2 studies crack nucleation in commonly encountered geometries for which closed-form solutions are not available. We use U- and V-notches to show that the nucleation load varies smoothly from that predicted by a strength criterion to that of a toughness criterion when the strength of the stress concentration or singularity varies. We present validation and verification of the numerical simulations for both types of geometries. We then consider the problem of an elliptic cavity in an infinite or elongated domain to show that variational phase-field models properly account for structural and material size effects. Chapter 3 focuses on crack propagation in hydraulic fracturing: we extend the variational phase-field models to account for fluid pressure on the crack lips. We recover the closed-form solution for a perfect fluid injected into a single fracture; for stability reasons, in this example we control the total amount of injected fluid. We then consider the stimulation of a network of parallel fractures. The numerical results show that only a single crack grows, and that this configuration is always the better energy minimizer compared with a multi-fracking scenario where all fractures propagate. This loss of symmetry in the crack patterns illustrates the variational structure and the global minimization principle of the phase-field model. A third example deals with fracture stability in a pressure-driven laboratory test for rocks. The idea is to capture the different stability regimes predicted by linear elastic fracture mechanics in order to properly design the experiment, and to test whether the phase-field models capture the fracture stability transition (from stable to unstable). Chapter 4 is concerned with the variational perfect plasticity model, its implementation and its verification. We start by recalling the main ingredients of the classical approach to perfect elasto-plasticity and then recast it into the variational structure. The algorithmic strategy is then exposed together with a verification example. The strength of the proposed algorithm is to solve perfect elasto-plastic materials by prescribing the yield surfaces without dealing with non-differentiability issues. Chapter 5 studies ductile fracture; the proposed model couples the gradient damage models and the perfect plasticity models exposed independently in Chapters 1 and 4. Numerical simulations show that the transition from brittle to ductile fracture is recovered by changing only one parameter. The ductile fracture phenomenology, such as crack initiation at the center of the specimen and propagation along shear bands, is studied in plane-strain specimens and in round bars in three dimensions. The main research contributions are in Chapters 2, 3 and 5. My apologies to the reader perusing the whole dissertation, which contains repetitive elements due to the self-consistency and independent construction of the chapters.
v Chapter 1 Variational phase-field models of brittle fracture In Griffith's theory, a crack in brittle materials is a surface of discontinuity for the displacement field with vanishing stress along the fracture. Assuming an a priori known crack path, the fracture propagates when the first derivative of the potential energy with respect to the crack length at the equilibrium becomes critical. This limit called the fracture toughness is a material property. The genius of Griffith was to link the crack length to the surface energy, so the crack propagation condition becomes a competition between the surface energy and the recoverable bulk energy. By essence this criterion is variational and can be recast into a minimality principle. The idea of Francfort and Marigo in variational approach to fracture [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF] is to keep Griffith's view and extend to any possible crack geometry and complex time evolutions. However cracks remain unknown and a special method needs to be crafted. The approach is to approximate the fracture by a damage field with a non zero thickness. In this region the material stiffness is deteriorated leading to decrease the sustainable stresses. This stress-softening material model is ill-posed mathematically [START_REF] Comi | On localisation in ductile-brittle materials under compressive loadings[END_REF] due to a missing term limiting the damage localization thickness size. Indeed, since the surface energy is proportional to the damage thickness size, we can construct a broken bar without paying any surface energy, i.e. by decaying the damaged area. To overcome this aforementioned issue, the idea is to regularize the surface energy. The adopted regularization takes its roots in Ambrosio and Tortorelli's [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] functionals inspired by Mumford-Shah's work [START_REF] Mumford | Optimal approximation by piecewise smooth functions and associated variational problem[END_REF] in image segmentation. Gradient damage models is closely related to Ambrosio and Tortorelli's functionals and have been adapted to brittle fracture. The introduction of a gradient damage term comes up with a regularized parameter. This parameter denoted is also called internal length and governs the damage thickness. Following Pham and Marigo [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF], the damage evolution problem is built on three principles, the damage irreversibility, stability and balance of the total energy. The beauty of the model is that the unknown discrete crack evolution is approximated by a regularized functional evolution which is intimately related to Griffith by its variational structure and its asymptotic behavior. This chapter is devoted to a large introduction of gradient damage models which 1 Chapter 1. Variational phase-field models of brittle fracture constitute a basis of numerical simulations performed in subsequent chapters. 
The presentation is largely inspired by previous works of Bourdin-Maurini-Marigo-Francfort and many others. In the sequel, section 1.1 starts with the Griffith point of view and recasts the fracture evolution into a variational problem. By relaxing the pre-supposed crack path constraint in Griffith's theory, the Francfort and Marigo's variational approach to fracture models is retrieved. We refer the reader to [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] for a complete exposition of the theory. Following the spirit of the variations principle, gradient damage models are introduced and constitute the basis of numerical simulations performed. Section 1.2 focuses on the application to a relevant one-dimensional problem which shows up multiple properties, such as, nucleation, critical admissible stress, size effects and optimal damage profile investigated previously by [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF]. To pass from a damage model to Griffith-like models, connections need to be highlighted, i.e letting the internal length to zero. Hence, section 1.3 is devoted to the Γ-convergence in one-dimensional setting, to show that gradient damage models behave asymptotically like Griffith. Finally, the implementation of such models is exposed in section 1.4 . Gradient damage models From Griffith model to its minimality principle The Griffith model can be settled as follow, consider a perfectly brittle-elastic material with A the Hooke's law tensor and G c the critical energy release rate occupying a region Ω ⊂ R n in the reference configuration. The domain is partially cut by a fracture set Γ of length l, which grows along an a priori path Γ. Along the fracture, no cohesive effects or contact lips are considered here, thus, it stands for stress free on Γ(l). The sound region Ω \ Γ is subject to a time dependent boundary displacement ū(t) on a Dirichlet part of its boundary ∂ D Ω and time stress dependent g(t) = σ •ν on the remainder ∂ N Ω = ∂Ω\∂ D Ω, where ν denotes the appropriate normal vector. Also, for the sake of simplicity, body force is neglected. The infinitesimal total deformation e(u) is the symmetrical part of the spatial gradient of the displacement field u such that e(u) = ∇u + ∇ T u 2 . In linear elasticity the free energy is a differentiable convex state function given by ψ e(u) = 1 2 Ae(u) : e(u) . Thereby, the stress-strain relation naturally follows σ = ∂ψ(e) ∂e = Ae(u). By the quasi-static assumption made, the cracked solid is, at each time, in elastic equilibrium with the loads that it supports at that time. The problem is finding the unknown displacement u = u(t, l) for a given t and l = l(t) that satisfies the following constitutive equations, 1.1. Gradient damage models          div σ =0 in Ω \ Γ(l) u =ū(t) on ∂ D Ω \ Γ(l) σ • ν =g(t) on ∂ N Ω σ • ν =0 on Γ(l) (1.1) At the time t and for l(t) let the kinematic field u(t, l) be at the equilibrium such that it solves (1.1). Hence, the potential energy can be computed and is composed of the elastic energy and the external work force, such that, P(t, l) = Ω\Γ(l) 1 2 Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 where dH n-1 denotes the Hausdorff n-1 -dimensional measure, i.e. its aggregate length in two dimensions or surface area in three dimensions. 
The evolution of the crack is given by Griffith's criterion: Definition 1 (Crack evolution by Griffith's criterion) i. Consider that the crack can only grow, this is the irreversibility condition, l(t) ≥ 0. ii. The stability condition says that the energy release rate G is bounded from above by its critical value G c , G(t, l) = - ∂P(t, l) ∂l ≤ G c . iii. The energy balance guarantee that the energy release rate is critical when the crack grows, G(t, l) -G c l = 0 Griffith says in his paper [START_REF] Griffith | The phenomena of rupture and flow in solids[END_REF], the "theorem of minimum potential energy" may be extended so as to of predicting the breaking loads of elastic solids, if account is taken of the increase of surface energy which occurs during the formation of cracks. Following Griffith, let us demonstrate that crack evolution criteria are optimality conditions of a total energy to minimize. Provided some regularity on P(t, l) and l(t), let formally the minimization problem be: for any loading time t such that the displacement u is at the Chapter 1. Variational phase-field models of brittle fracture equilibrium, find the crack length l which minimizes the total energy composed of the potential energy and the surface energy subject to irreversibility, min l≥l(t) P(t, l) + G c l (1.2) An optimal solution of the above constraint problem must satisfy the KKT1 conditions. A common methods consist in computing the Lagrangian, given by, L(t, l, λ) := P(t, l) + G c l + λ(l(t)l) (1.3) where λ denotes the Lagrange multiplier. Then, apply the necessary conditions, Substitute the Lagrange multiplier λ given by the stationarity into the dual feasibility and complementary slackness condition to recover the irreversibility, stability and energy balance of Griffith criterion. Futhermore, let the crack length l and the displacement u be an internal variables of a variational problem. Note that the displacement does not depend on l anymore. Provided a smooth enough displacement field and evolution of t → l(t) to ensure that calculations make sense, the evolution problem can be written as a minimality principle, such as, Definition 2 (Fracture evolution by minimality principle) Find stable evolutions of l(t), u(t) satisfying at all t: i. Initial conditions l(t 0 ) = l 0 and u(t 0 , l 0 ) = u 0 ii. l(t), u(t) is a minimizer of the total energy, E(t, l, u) = Ω\Γ(l) 1 2 Ae(u) : e(u)dx - ∂ N Ω g(t) • u dH n-1 + G c l (1.5) amongst all l ≥ l(t) and u ∈ C t := u ∈ H 1 (Ω \ Γ(l)) : u = ū(t) on ∂ D Ω \ Γ(l) . 1.1. Gradient damage models iii. The energy balance, E(t, l, u) = E(t 0 , l 0 , u 0 ) + t t 0 ∂ D Ω (σ • ν) • u dH n-1 - ∂ N Ω ġ(t) • u dH n-1 ds (1.6) One observes that stability and irreversibility have been substituted by minimality, and the energy balance takes a variational form. To justify this choice, we show first that irreversibility, stability and kinematic equilibrium are equivalent to the first order optimality conditions of E(t, l, u) for u and l separately. Then, followed by the equivalence of the energy balance adopted in the evolution by minimality principle and within Griffith criterion. Proof. For a fixed l, u is a local minimizer of E(t, l, u), if for all v ∈ H 1 0 (Ω \ Γ(l)), for some h > 0 small enough, such that u + hv ∈ C t , E(t, l, u + hv) = E(t, l, u) + hE (t, l, u) • v + o(h) ≥ E(t, l, u) (1.7) thus, E (t, l, u) • v ≥ 0 (1.8) where E (t, l, u) denotes the first Gateaux derivative of E at u in the direction v. 
By standard arguments of calculus of variations, one obtains, E (t, l, u) • v = Ω\Γ(l) 1 2 Ae(u) : e(v)dx - ∂ N Ω g(t) • v dH n-1 (1.9) Integrating the term in e(v) by parts over Ω \ Γ(l), and considering both faces of Γ(l) with opposites normals, one gets, E (t, l, u) • v = - Ω\Γ(l) div Ae(u) • v dx + ∂Ω Ae(u) • ν • v dH n-1 - Γ(l) Ae(u) • ν • v dH n-1 - ∂ N Ω g(t) • v dH n-1 (1. E (t, l, u) • v = - Ω\Γ(l) div Ae(u) • v dx + ∂ N Ω Ae(u) • ν -g(t) • v dH n-1 - Γ(l) Ae(u) • ν • v dH n-1 (1.11) Chapter 1. Variational phase-field models of brittle fracture Taking v = -v ∈ H 1 0 (Ω\Γ(l)) , the optimality condition leads to E (t, l, u)•v = 0. Formally by a localization argument taking v such that it is concentrated around boundary and zero almost everywhere, we obtain that all integrals must vanish for any v. Since the stress-strain relation is given by σ = Ae(u), we recover the equilibrium constitutive equations,          div Ae(u) = 0 in Ω \ Γ(l) u = ū(t) on ∂ D Ω \ Γ(l) Ae(u) • ν = g(t) on ∂ N Ω Ae(u) • ν = 0 on Γ(l) (1.12) Now consider u is given. For any l > 0 for some h > 0 small enough, such that l + h l ≥ l(t), the derivative of E(t, l, u) at l in the direction l is, E (t, l, u) • l ≥ 0 ∂P(t, l, u) ∂l + G c ≥ 0 (1.13) this becomes an equality, G(t, l, u) = G c when the fracture propagates. To complete the equivalence between minimality evolution principle and Griffith, let us verify the energy balance. Provided a smooth evolution of l, the time derivative of the right hand side equation (1.6) is, dE(t, l, u) dt = ∂ D Ω (σ • ν) • u dH n-1 - ∂ N Ω ġ(t) • u dH n-1 (1.14) and the explicit left hand side, dE(t, l, u) dt = E (t, l, u) • u + E (t, l, u) • l - ∂ N Ω ġ(t) • u dH n-1 . (1.15) The Gateaux derivative with respect to u have been calculated above, so E (t, l, u) • u stands for, E (t, l, u) • u = - Ω\Γ div(Ae(u)) • u dx + ∂ D Ω Ae(u) • ν • u dH n-1 + ∂ N Ω Ae(u) • ν • u dH n-1 - ∂ N Ω g(t) • u dH n-1 . (1.16) Since u respects the equilibrium and the admissibility u = u on ∂ D Ω, all kinematic contributions to the elastic body vanish and the energy balance condition becomes, E (t, l, u) • l = 0 ⇔ ∂P ∂l + G c l = 0 (1.17) Gradient damage models At this stage minimality principle is equivalent to Griffith criterion for smooth evolution of l(t). Let's give a graphical interpretation of that. Consider a domain partially cut by a pre-fracture of length l 0 subject to a monotonic increasing displacement load, such that, ū(t) = tū on ∂ D Ω and stress free on the remainder boundary part. Hence, the elastic energy is ψ e(tu) = t 2 2 Ae(u) : e(u) and the irreversibility is l ≥ l 0 . The fracture stability is given by t 2 ∂P (1, l) ∂l + G c ≥ 0 and for any loading t > 0, the energy release rate for a unit loading is bounded by G(1, l) ≤ G c /t 2 . Forbidden region The fracture evolution is smooth if G(1, l) is strictly decreasing in l, i.e. P(1, l) is strictly convex as illustrated on the Figure 1.1(left). Thus, stationarity and local minimality are equivalent. Let's imagine that material properties are not constant in the structure, simply consider the Young's modulus varying in the structure such that G(1, l) has a concave part, see Figure 1.1(right). Since G(1, l) is a deceasing function, the fracture grows smoothly by local minimality argument until being stuck in the local well for any loadings which is physically inconsistent. 
Conversely, considering global minimization allows up to a loading point, the nucleation of a crack in the material, leading to a jump of the fracture evolution. Extension to Francfort-Marigo's model In the previous analysis, the minimality principle adopted was a local minimization argument because it considers small perturbations of the energy. This requires a topology, which includes a concept of distance defining small transformations, whereas for global minimization principle it is topology-independent. Without going too deeply into details, arguments in favor of global minimizers are described below. Griffith's theory does not Chapter 1. Variational phase-field models of brittle fracture hold for a domain with a weak singularity. By weak singularity, we consider any free stress acute angle that does not degenerate into a crack (as opposed to strong singularity). For this problem, by using local minimization, stationary points lead to the elastic solution. The reason for this are that the concept of energy release rate is not defined for a weak singularity and there is no sustainable stress limit over which the crack initiates. Hence, to overcome the discrepancy due to the lack of a critical stress in Griffith's theory, double criterion have been developed to predict fracture initiation in notched specimen, more details are provided in Chapter 2. Conversely, global minimization principle has a finite admissible stress allowing cracks nucleation, thus cracks can jump from a state to another, passing through energy barriers. For physical reasons, one can blame global minimizers to not enforce continuity of the displacement and damage field with respect to time. Nevertheless, it provides a framework in order to derive the fracture model as a limit of the variational damage evolution presented in section 1.3. This is quite technical but global minimizers from the damage model converge in the sens of Γ-convergence to global minimizers of the fracture model. Finally, under the assumptions of a pre-existing fracture and strict convexity of the potential energy, global or local minimization are equivalent and follow Griffith. In order to obtain the extended model of Francfort-Marigo variational approach to fracture [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Marigo | Initiation of cracks in griffith's theory: An argument of continuity in favor of global minimization[END_REF][START_REF] Bourdin | Variational Models and Methods in Solid and Fluid Mechanics, chapter Fracture[END_REF] one has to keep the rate independent variational principle and the Griffith fracture energy, relax the constrain on the pre-supposed crack path by extending to all possible crack geometries Γ and consider the global minimization of the following total energy E(u, Γ) := Ω\Γ 1 2 Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c H n-1 (Γ) (1.18) associated to cracks evolution problem given by, Definition 3 (Crack evolution by global minimizers) u(t), Γ t satisfies the variational evolution associated to the energy E(u, Γ) if the following three conditions hold: i. t → Γ t is increasing in time, i.e Γ t ⊇ Γ s for all t 0 ≤ s ≤ t ≤ T . ii. for any configuration (v, Γ) such that v = g(t) on ∂ D Ω \ Γ t and Γ ⊇ Γ t , E v, Γ ≥ E u(t), Γ t (1.19) iii. 
for all t, E u(t), Γ t = E u(t 0 ), Γ t 0 + t t 0 ∂ D Ω (σ • ν) • u(t) dH n-1 - ∂ N Ω ġ(t) • u dH n-1 ds (1.20) Gradient damage models It is convenient, to define the weak energy by extending the set of admissibility function to an appropriate space allowing discontinuous displacement field, but preserving "good" properties. SBD(Ω) = u ∈ SBV (Ω); Du = ∇u + (u + -u -) • ν dH n-1 J(u) (1.21) where, Du denotes the distributional derivative, J(u) is the jump set of u. Following De Giorgi in [START_REF] De Giorgi | Existence theorem for a minimum problem with free discontinuity set[END_REF], the minimization problem is reformulated in a weak energy form functional of SBV , such as, min u∈SBV (Ω) Ω 1 2 Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c H n-1 J(u) (1.22) For existence of solution in the discrete time evolution and time continuous refer to [START_REF] Francfort | Existence and convergence for quasi-static evolution in brittle fracture[END_REF][START_REF] Babadjian | Existence of strong solutions for quasi-static evolution in brittle fracture[END_REF]. The weak energy formulation will be recalled in section 1.3 for the Γ-convergence in one dimension. Gradient damage models to brittle fracture Because the crack path remains unknown a special method needs to be crafted. The approach is to consider damage as an approximation of the fracture with a finite thickness where material properties are modulated continuously. Hence, let the damage α being an internal variable which evolves between two extreme states, up to a rescaling α can be bounded between 0 and 1, where α = 0 is the sound state material and α = 1 refers to the broken part. Intermediate values of the damage can be seen as "micro cracking", a partial disaggregation of the Young's modulus. A possible choice is to let the damage variable α making an isotropic deterioration of the Hooke's tensor, i.e. a(α)A where a(α) is a stiffness function. Naturally the recoverable energy density becomes, ψ(α, e) = 1 2 a(α)Ae(u) : e(u), with the elementary property that ψ(α, e) is monotonically decreasing in α for any fixed u. The difficulty lies in the choice of a correct energy dissipation functional. At this stage of the presentation a choice would be to continue by following Marigo-Pham [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF][START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF][START_REF] Pham | Stability of homogeneous states with gradient damage models: Size effects and shape effects in the three-dimensional setting[END_REF] for a full and self consistent construction of the model. Their main steps are, assume a dissipation potential k(α), apply the Drucker-Ilushin postulate, then, introduce a gradient damage term to get a potential dissipation of the form k(α, ∇α). Instead, we will continue by following the historical ideas which arose from the image processing field with Mumford-Shah [START_REF] Mumford | Optimal approximation by piecewise smooth functions and associated variational problem[END_REF] where continuous functional was proposed to find the contour of the image in the picture by taking into account strong variations of pixels intensity across boundaries. 
Later, Ambrosio-Tortorelli [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] proposed the following functional, which contains the main ingredients of the regularized damage models,
\[
\int_\Omega \frac{1}{2}(1-\alpha)^2 |\nabla u|^2 \,\mathrm{d}x \;+\; \int_\Omega \left(\alpha^2 + \ell^2 |\nabla \alpha|^2\right) \mathrm{d}x
\]
where ℓ > 0 is a regularization parameter called the internal length. One recognizes the second term as the dissipation potential, composed of two parts: a local term depending only on the damage state, and a gradient damage term which penalizes sharp localizations of the damage. The regularization parameter enters through the gradient damage term and has the dimension of a length. Following [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF], we define the regularized total energy of the gradient damage model for a variety of local dissipation and stiffness functions, denoted w(α) and a(α), not only w(α) = α² and a(α) = (1 - α)², by
\[
E_\ell(u,\alpha) = \int_\Omega \frac{1}{2} a(\alpha)\, A e(u):e(u)\,\mathrm{d}x \;-\; \int_{\partial_N \Omega} g(t)\cdot u \,\mathrm{d}\mathcal{H}^{n-1} \;+\; \frac{G_c}{4 c_w}\int_\Omega \left(\frac{w(\alpha)}{\ell} + \ell |\nabla\alpha|^2\right) \mathrm{d}x
\tag{1.23}
\]
where G_c is the critical energy release rate and c_w = \int_0^1 \sqrt{w(\alpha)}\,\mathrm{d}\alpha, with w(α) and a(α) satisfying the following elementary properties:
1. The local dissipation potential w(α) is strictly monotonically increasing in α. For a sound material no dissipation occurs, hence w(0) = 0; for a broken material the dissipation must be finite, and up to a rescaling we set w(1) = 1.
2. The elastic energy is monotonically decreasing in α for any fixed u. An undamaged material should retain its elastic properties and no elastic energy can be stored in a fully damaged material, so the stiffness function a(α) is decreasing with a(0) = 1 and a(1) = 0.
3. For numerical optimization reasons one can assume that a(α) and w(α) are continuous and convex.
A large variety of models with different material responses can be constructed simply by choosing different functions a(α) and w(α). A non-exhaustive list of functions used in the literature is provided in Table 1.1. Among the many models in use, we will mainly focus on AT_1 and sometimes refer to AT_2 for the numerical simulations. Now let us focus on the damage evolution of E_ℓ(u, α) defined in (1.23). First, remark that to obtain a finite energy the damage gradient must belong to L²(Ω). Consequently, the trace of the damage can be defined on the boundary, so damage values can be prescribed there. Accordingly, let the sets of admissible displacements and admissible damage fields C_t and D, equipped with their natural H¹ norm, be
\[
\mathcal{C}_t = \left\{ u \in H^1(\Omega) : u = \bar u(t) \text{ on } \partial_D\Omega \right\}, \qquad
\mathcal{D} = \left\{ \alpha \in H^1(\Omega) : 0 \le \alpha \le 1, \ \forall x \in \Omega \right\}.
\]
The evolution problem is formally similar to the one stated in Definition 2 and reads as follows.

Name  | a(α)                                              | w(α)
AT_2  | (1-α)²                                            | α²
AT_1  | (1-α)²                                            | α
LS_k  | (1 - w(α)) / (1 + (c₁ - 1) w(α))                  | 1 - (1-α)²
KKL   | 4(1-α)³ - 3(1-α)⁴                                 | α²(1-α)²/4
Bor   | c₁[(1-α)³ - (1-α)²] + 3(1-α)² - 2(1-α)³           | α²
SKBN  | (1-c₁)[1 - exp(-c₂(1-α)^{c₃})] / [1 - exp(-c₂)]   | α

Table 1.1: Variety of possible damage models, where c₁, c₂, c₃ are constants.
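As a quick numerical companion to Table 1.1, the normalization constant c_w = ∫₀¹ √(w(α)) dα entering (1.23) can be checked symbolically for the two models mainly used in this work; the short sympy sketch below is added purely for illustration and is not part of the original implementation.

```python
import sympy as sp

alpha = sp.symbols('alpha', nonnegative=True)

# Local dissipation potentials of the AT1 and AT2 models (Table 1.1)
w = {'AT1': alpha, 'AT2': alpha**2}

# Normalization constant c_w = int_0^1 sqrt(w(alpha)) d(alpha) used in (1.23)
for name, w_alpha in w.items():
    c_w = sp.integrate(sp.sqrt(w_alpha), (alpha, 0, 1))
    print(name, 'c_w =', c_w)   # expected: AT1 -> 2/3, AT2 -> 1/2
```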
AT 2 introduced by Ambrosio Tortorelli and used by Bourdin [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF], AT 1 model initially introduced by Pham-Amor [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF], LS k in Alessi-Marigo [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF], KKL for Karma-Kessler-Levine used in dynamics [START_REF] Karma | Phase-field model of mode III dynamic fracture[END_REF], Bor for Borden in [START_REF] Borden | A phase-field description of dynamic brittle fracture[END_REF], SKBN for Sargadoa-Keilegavlena-Berrea-Nordbottena in [START_REF] Sargado | High-accuracy phase-field models for brittle fracture based on a new family of degradation functions[END_REF]. Definition 4 (Damage evolution by minimality principle) For all t find (u, α) ∈ (C t , D) that satisfies the damage variational evolution: i. Initial condition α t 0 = α 0 and u t 0 = u 0 ii. (u, α) is a minimizer of the total energy, E (u, α) E (u, α) = Ω 1 2 a(α)Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c 4c w Ω w(α) + |∇α| 2 dx (1.24) amongst all α ≥ α(t) iii. Energy balance, E (u t , α t ) = E (u 0 , α 0 ) + t t 0 ∂ D Ω (σ • ν) • u dH n-1 - ∂ N Ω ġ(t) • u dH n-1 ds. (1.25) This damage evolution is written in a weak form in order to obtain the damage criterion in a strong formulation, we have to explicit the first order necessary optimality conditions of the constraint minimization of E for (u, α) given by, E (u, α)(v, β) ≥ 0 ∀(v, β) ∈ H 1 0 (Ω) × D (1.26) Chapter 1. Variational phase-field models of brittle fracture Using calculus of variation argument, one gets, E (u, α)(v, β) = Ω a(α)Ae(u) : e(v) dx - ∂ N Ω (g(t) • ν) • v dH n-1 + Ω 1 2 a (α)Ae(u) : e(u)β dx + G c 4c w Ω w (α) β + 2 ∇α • ∇β dx. (1.27) Integrating by parts the first term in e(v) and the last term in ∇α • ∇β, the expression leads to, E (u, α)(v, β) = - Ω div a(α)Ae(u) • v dx + ∂ N Ω [a(α)Ae(u) -g(t)] • ν • v dH n-1 + Ω 1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α β dx + G c 4c w ∂Ω 2 ∇α • ν β dH n-1 . (1.28) This holds for all β ≥ 0 and for all v ∈ H 1 0 (Ω), thus, one can take β = 0 and v = -v. Necessary, the first two integrals are equal to zero. Again, we recover the kinematic equilibrium with the provided boundary condition since σ = a(α)Ae(u),      div a(α)Ae(u) = 0 in Ω a(α)Ae(u) = g(t) on ∂ N Ω u = ū(t) on ∂ D Ω (1.29) The damage criteria and its associated boundary conditions arise for any β ≥ 0 and by taking v = 0 in (1.28), we obtain that the third and fourth integrals are non negative.      1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α ≥ 0 in Ω ∇α • ν ≥ 0 on ∂Ω (1.30) The damage satisfies criticality when (1.30) becomes an equality. Before continuing with the energy balance expression, let us focus a moment on the damage criterion. Notice that it is composed of an homogeneous part depending in w (α) and a localized contribution in ∆α. Assume the structure being at an homogeneous damage state, such that α is constant everywhere, hence the laplacian damage term vanishes. In that case, the elastic domain in a strain space is given by, 1.1. 
Gradient damage models Ae(u) : e(u) ≤ G c 2c w w (α) -a (α) (1.31) and in stress space, by, .32) this last expression requires to be bounded such that the structure has a maximum admissible stress, A -1 σ : σ ≤ G c 2c w w (α)a(α) 2 -a (α) (1 max α w (α) c (α) < C (1.33) where c(α) = 1/a(α) is the compliance function. If α → w (α)/c (α) is increasing the material response will be strain-hardening. For a decreasing function it is a stress-softening behavior. This leads to, w (α)a (α) > w (α)a (α) (Strain-hardening) w (α)c (α) < w (α)c (α) (Stress-softening) (1.34) Those conditions restrict proper choice for w(α) and a(α). Let us turn our attention back to find the strong formulation of the problem using the energy balance. Assuming a smooth evolution of damage in time and space, the time derivative of the energy is given by, dE (u, α) dt = E (u, α)( u, α) - ∂ N Ω ( ġ(t) • ν) dH n-1 (1.35) The first term has already been calculated by replacing (v, β) with ( u, α) in (1.27), so that, dE (u, α) dt = - Ω div a(α)Ae(u) • u dx + ∂ N Ω [a(α)Ae(u) -g(t)] • ν • u dH n-1 + ∂ D Ω a(α)Ae(u) • ν • u dH n-1 - ∂ N Ω ġ(t) • ν • u dH n-1 + Ω 1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α α dx + G c 4c w ∂Ω 2 ∇α • ν α dH n-1 (1.36) Chapter 1. Variational phase-field models of brittle fracture The first line vanishes with the equilibrium and boundary conditions, the second line is equal to the right hand side of the energy balance definition (1.25). Since the irreversibility α ≥ 0 and the damage criterion (1.30) hold, the integral is non negative, therefore the energy balance condition gives,      1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α • α = 0 in Ω (∇α • ν) • α = 0 on ∂Ω (1.37) Notice that the first condition in (1.37) is similar to the energy balance of Griffith, in the sense that the damage criterion is satisfied when damage evolves. Finally, the evolution problem is given by the damage criterion (1.30), the energy balance (1.37) and the kinematic admissibility (1.29). The next section is devoted to the construction of the optimal damage profile by applying the damage criterion to a one-dimensional traction bar problem for a given . Then, defined the critical energy release rate as the energy required to break a bar and to create an optimal damage profile. Application to a bar in traction The one-dimension problem The aim of this section is to apply the gradient damage model to a one-dimensional bar in traction. Relevant results are obtained with this example such as, the role of critical admissible stress, the process of damage nucleation due to stress-softening, the creation of an optimal damage profile for a given and the role of gradient damage terms which ban spacial jumps of the damage. In the sequel, we follow Pham-Marigo [START_REF] Pham | Construction et analyse de modèles d'endommagement à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF]] by considering a one-dimensional evolution problem of a homogeneous bar of length 2L stretched by a time controlled displacement at boundaries and no damage value is prescribed at the extremities, such that, the admissible displacement and damage sets are respectively, C t := {u : u(-L) = -tL, u(L) = tL}, D := {α : 0 ≤ α ≤ 1 in [0, L]} (1.38) with the initial condition u 0 (x) = 0 and α 0 (x) = 0. 
Since no external force is applied, the total energy of the bar is given by, E (u, α) = L -L 1 2 a(α)Eu 2 dx + G c 4c w L -L w(α) + |α | 2 dx (1.39) where E is the Young's modulus, > 0 and (•) = ∂(•)/∂x. For convenience, let the compliance being the inverse of the stiffness such that c(α) = a -1 (α). Assume that α is at least continuously differentiable, but a special treatment would be required for α = 1 1.2. Application to a bar in traction which is out of the scope in this example. The pair (u t , α t ) ∈ C t × D is a solution of the evolution problem if the following conditions holds: 1. The equilibrium, σ t (x) = 0, σ t (x) = a(α t (x))Eu t (x), u t (-L) = -tL and u t (L) = tL The stress is constant along the bar. Hence it is only a function of time, such that, 2tLE = σ t L -L c α t (x) dx (1.40) Once the damage field is known. The equation (1.40) gives the stress-displacement response. 2. The irreversibility, αt (x) ≥ 0 (1.41) 3. The damage criterion in the bulk, - c (α t (x)) 2E σ 2 t + G c 4c w w (α t (x)) -2 α t (x) ≥ 0 (1.42) 4. The energy balance in the bulk, - c (α t (x)) 2E σ 2 t + G c 4c w w (α t (x)) -2 α t (x) αt (x) = 0 (1.43) 5. The damage criterion at the boundary, α t (-L) ≥ 0 and α t (L) ≤ 0 (1.44) 6. The energy balance at the boundary, α t (±L) αt (±L) = 0 (1.45) For smooth or brutal damage evolutions the first order stability enforce α t (±L) = 0 to respect E (u, α) = 0. Thus the damage boundary condition is replaced by α t (±L) = 0 when damage evolves. All equations are settled to solve the evolution problem. Subsequently, we study a uniform damage in the bar and then focus on the localized damage solution. The homogeneous damage profile Consider a case of a uniform damage in the bar α t (x) = α t , which is called the homogeneous solution. We will see that the damage response depends on the evolution of α → w (α)/c (α), i.e. for stress-hardening (increasing function) the damage evolves uniformly in the bar, and localizes for stress-softening configuration. Now suppose that the damage does not evolve and remains equal to its initial value, α t = α 0 = 0. Then using the damage criterion in the bulk (1.42) the admissible stress must satisfy, σ 2 t ≤ 2EG c 4c w w (0) c (0) (1.46) and the response remains elastic until the loading time t e , such that, t 2 ≤ - G c 2E c w w (0) a (0) = t 2 e (1.47) Suppose the damage evolves uniformly belongs to the bar, using the energy balance (1.43) and the damage criterion (1.42) we have, σ 2 t ≤ 2EG c 4c w w (α t ) c (α t ) , σ 2 t - 2EG c 4c w w (α t ) c (α t ) αt = 0 (1.48) The homogeneous damage evolution is possible only if α t → w (α)/c (α) is growing, this is the stress-hardening condition. Since αt > 0, the evolution of the stress is given by, σ 2 t = 2EG c 4c w w (α t ) c (α t ) ≤ max 0<α<1 2EG c 4c w w (α t ) c (α t ) = σ 2 c (1.49) where σ c is the maximum admissible stress for the homogeneous solution. One can define the maximum damage state α c obtained when σ t = σ c . This stage is stable until the loading time t c , t 2 ≤ - G c 2E c w w (α c ) a (α c ) = t 2 c (1.50) Since w (α)/c (α) is bounded and > 0, a fundamental property of gradient damage model is there exists a maximum value of the stress called critical stress, which allows crack to nucleate using the minimality principle. The localized damage profile The homogeneous solution is no longer stable if the damage α t → w (α)/c (α) is decreasing after α c . 
To prove that, consider any damage state such that, α t (x) > α c and the stress-softening property, leading to, 1.2. Application to a bar in traction 0 ≤ 2EG c 4c w w (α t (x)) c (α t (x)) ≤ 2EG c 4c w w (α c ) c (α c ) = σ 2 c (1.51) By integrating the damage criterion (1.42) over (-L, L) and using (1.44), we have, σ 2 t 2E L -L c (α t (x)) dx ≤ G c 4c w L -L w (α t (x)) dx + 2 α t (L) -α t (-L) ≤ G c 4c w L -L w (α t (x)) dx (1.52) then, put (1.51) into (1.52) to conclude that σ t ≤ σ c and use the equilibrium (1.40) to obtain σ t ≥ 0. Therefore using (1.52) we get that α t (x) ≥ 0, consequently the damage is no longer uniform when stress decreases 0 ≤ σ t ≤ σ c . Assume α t (x) is monotonic over (-L, x 0 ) with α t (-L) = α c and the damage is maximum at x 0 , such that, α t (x 0 ) = max x α t (x) > α c . Multiplying the equation (1.42) by α t (x), an integrating over [-L, x) for x < x 0 we get, α 2 t (x) = - 2c w σ 2 t EG c c(α t (x)) -c(α c ) + w(α t (x)) -w(α c ) (1.53) Plugging this above equation into the total energy restricted to the (-L, x 0 ) part, E (u t (x), α t (x)) (-L,x 0 ) = x 0 -L σ 2 t 2a(α t (x))E dx + G c 4c w x 0 -L w(α t (x)) + α 2 t (x) dx = x 0 -L σ 2 t 2a(α c )E dx + G c 4c w x 0 -L 2w(α) -w(α c ) dx (1.54) Note that the energy does not depend on α anymore, we just have two terms: the elastic energy and the surface energy which depends on state variation of w(α). The structure is broken when the damage is fully localized α(x 0 ) = 1. From the equilibrium (1.40), the ratio stress over stiffness function is bounded such that |σ t c(α)| < C, thus, |σ 2 t c(1)| → 0 and (1.53) becomes, α 2 t (x) = w(α t (x)) -w(α c ) , ∀x ∈ (-L, x 0 ) Remark that, the derivative of the damage and u across the point x 0 where α = 1 is finite. By letting the variable β = α t (x), the total energy of the partial bar (-L, x 0 ) is Chapter 1. Variational phase-field models of brittle fracture E (u t (x), α t (x)) (-L,x 0 ) = lim x→x 0 G c 4c w x -L 2w(α t (x)) -w(α c ) dx = lim β→1 G c 4c w β αc 2w(α) -w(α c ) β dβ = lim β→1 G c 4c w β αc 2w(β) -w(α c ) w(β) -w(α c ) dβ = lim β→1 G c 4c w β αc 2 w(β) -w(α c ) + w(α c ) w(β) -w(α c ) dβ = G c 2c w k(α c ) (1.55) with, k(α c ) := 1 αc w(β) -w(α c ) dβ + w(α c ) D 4 , where D is the damage profile size between the homogeneous and fully localized state, given by, D = L -L dx α (x) = 1 αc 2 w(β) -w(α c ) dβ. (1.56) Note that the right side of the bar (x 0 , L) contribute to the exact same total energy than the left one (-L, x 0 ). Different damage response is observed depending on the choice of w(α) and a(α). The model AT 1 for instance has an elastic part, thus α c = 0 and the energy release during the breaking process of a 1d bar is equal to G c . Models with an homogeneous response before localization, AT 2 for example, overshoot G c due to the homogeneous damage profile. A way to overcome this issue, is to consider that partial damage do not contribute to the dissipation energy, it can be relaxed after localization by removing the irreversibility. Another way is to reevaluate c w such as, c w = k(α c ). Limit of the damage energy From inception to completion gradient damage models follows the variational structure of Francfort-Marigo's [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] approach seen as an extension of Griffith, but connections between both need to be highlighted. Passing from damage to fracture, i.e. 
letting → 0 requires ingredients adapted from Ambrosio Tortorelli [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] on convergence of global minimizers of the total energy. A framework to study connections between damage and fracture variational models is that of Γ-convergence which we briefly introduce below. We refer the reader to [START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF][START_REF] Braides | Gamma-convergence for Beginners[END_REF][START_REF] Dal | An introduction to Γ-convergence[END_REF] for a complete exposition of the underlying theory. Limit of the damage energy In the sequel, we restrict the study to a 1d case structure of interval Ω ⊂ R whose size is large compare to the internal length and with a unit Young's modulus. We prescribe a boundary displacement ū on a part ∂ D Ω and stress free on the remaining part ∂ N Ω := ∂Ω \ ∂ D Ω. We set aside the issue of damage boundary conditions for now and we define the weak fracture energy, E(u, α, Ω) = F(u, Ω) if u ∈ SBV (Ω) +∞ otherwise (1.57) and F(u, Ω) := 1 2 Ω (u ) 2 dx + G c #(J(u)) (1.58) where #(J(u)) denotes the cardinality of jumps in the set of u. Derived from E(u, α, Ω) its associated regularized fracture energy is, E (u, α, Ω) = F (u, α, Ω) if u ∈ W 1,2 (Ω), α ∈ W 1,2 (Ω; [0, 1]) +∞ otherwise (1.59) and F (u, α, Ω) := 1 2 Ω a(α)(u ) 2 dx + G c 4c w Ω w(α) + α 2 dx (1.60) To prove that up to a subsequence minimizers for E converge to global minimizers of E we need the fundamental theorem of the Γ-convergence given in the Appendix A. We first show the compactness of the sequence of minimizers of E , then the Γconvergence of E to E. Before we begin, let the truncation and optimal damage profile lemma be, Lemma 1 Let u (resp. (u, α)) be a kinematically admissible global minimizer of F (resp. F ). Then u L ∞ (Ω) ≤ ū L ∞ (Ω) Proof. Let M = ū L ∞ , and u * = inf {sup{-M, u}, M }. Then F(u * ) ≤ F(u) with equality if u = u * . Lemma 2 Let α be the optimal profile of S (α ) := I w(α ) + (α ) 2 dx where I ⊂ R, then S (α ) = 4c w . Proof. In order to construct α we solve the optimal profile problem: Let γ be the solution of the following problem: find γ ∈ C 1 [-δ, x 0 ) such that γ(-δ) = 0 and lim x→x 0 γ(x) = ϑ, and which is a minimum for the function, F (γ) = x 0 -δ f (γ(x), γ (x), x)dx (1.61) where f (γ(x), γ (x), x) := w(γ(x)) + γ 2 (x) (1.62) Note that the first derivative of f is continuous. We will apply the first necessary optimality condition to solve the optimization problem described above, if γ is an extremum of F , then it satisfies the Euler-Lagrange equation, 2γ = w (γ) 2 and γ (-δ) = 0 (1.63) Note that w (γ) ≥ 0 implies γ convex, thus γ is monotonic in [-δ, x 0 ). 
Multiplying by γ and integrating form -δ to x, we obtain, γ 2 (x) -γ 2 (-δ) = w(γ(x)) -w(γ(-δ)) 2 (1.64) Since γ (-δ) = 0 and w(γ(-δ)) = 0, one gets, γ (x) = w(γ(x)) 2 (1.65) Let us define, α (x) = γ (|x -x 0 |) then, α (x) := γ (|x -x 0 |) if |x -x 0 | ≤ δ 0 otherwise (1.66) Note that α is continuous at x 0 and values ϑ, we have that, S (α ) = I w(α ) + (α ) 2 dx = 2 x 0 -δ w(γ ) + (γ ) 2 dx (1.67) Plug (1.65) into the last integral term, and change the variables β = γ (x), it turns into S (α ) = 2 x 0 -δ w(γ ) + (γ ) 2 dx = 2 γ(x 0 ) γ(-δ) w(β) β dβ = 4 ϑ 0 w(β) dβ (1.68) The fully damage profile is obtained once ϑ → 1, we get, This will be usefull for the recovery sequence in higer dimensions. S (α ) = lim Compactness Theorem 1 Let (x) := x 0 w(s)ds, and assume that there exists C > 0 such that 1 -(s) ≤ C a(s) for any 0 ≤ s ≤ 1. Let (u , α ) be a kinematic admissible global minimizer of E . Then, there exists a subsequence (still denoted by (u , α ) ), and a function u ∈ SBV (Ω) such that u → u in L 2 (Ω) and α → 0 a.e. in Ω as → 0 Proof. Note that the technical hypothesis is probably not optimal but sufficient to account for the AT 1 and AT 2 functionals. Testing α = 0 and an arbitrary kinematically admissible displacement field ũ, we get that, E (u , α ) ≤ E (ũ, 0) ≤ 1 2 Ω |ũ | 2 dx ≤ C (1.69) So that E (u , α ) is uniformly bounded by some C > 0. Also, this implies that w(α ) → 0 almost everywhere in Ω, and from properties of w, that α → 0 almost everywhere in Ω.Using the inequality a 2 + b 2 ≥ 2|ab| on the surface energy part, we have that, Ω 2 w(α )|α | dx ≤ Ω w(α ) + (α ) 2 dx ≤ C (1.70) In order to obtain the compactness of the sequence u , let v := (1 -(α )) u and using the truncation Lemma 1, v is uniformly bounded in L ∞ (Ω). Then, v = (1 -(α ))u -(α )α u ≤ (1 -(α ))|u | + w(α )|α ||u | ≤ a(α )|u | + w(α )|α ||u | (1.71) From the uniform bound on E (u , α ), we get that the first term is bounded in L 2 (Ω), while (1.70) and the truncation Lemma 1 show that the second term is bounded in L 1 (Ω) thus in L 2 (Ω). Finally, i. v is uniformly bounded in L ∞ (Ω) ii. v is uniformly bounded in L 2 (Ω) Chapter 1. Variational phase-field models of brittle fracture iii. J(v ) = ∅ invoking the Ambrosio's compactness theorem in SBV (in the Appendix A), we get that there exists v ∈ SBV (Ω) such that v → v strongly in L 2 (Ω). To conclude, since u = v (1-(α )) and α → 0 almost everywhere, we have, u → u in L 2 (Ω) Remark the proof above applies unchanged to the higher dimension case. Gamma-convergence in 1d The second part of the fundamental theorem of Γ-convergence requires that E Γ-converges to E. The definition of the Γ-convergence is in the Appendix A. The first condition means that E provides an asymptotic common lower bound for the E . The second condition means that this lower bound is optimal. The Γ-convergence is performed in 1d setting and is decomposed in two steps as follow: first prove the lower inequality, then construct the recovery sequence. Lower semi-continuity inequality in 1d We want to show that for any u ∈ SBV (Ω), and any (u , α ) such that u → u and α → 0 almost everywhere in Ω, we have, lim inf →0 E (u , α , Ω) ≥ 1 2 Ω (u ) 2 dx + G c #(J(u)) (1.72) Proof. 
Consider any interval I ⊂ Ω ⊂ R, such that, lim inf →0 E (u , α , I) ≥ 1 2 I (u ) 2 dx if u ∈ W 1,2 (I) (1.73) and, lim inf →0 E (u , α , I) ≥ G c otherwise (1.74) If lim inf →0 E (u , α , I) = ∞, both statements are trivial, so we can assume that there exist 0 ≤ C < ∞ such that, lim inf →0 E (u , α , I) ≤ C (1.75) We focus on (1.73) first, and assume that u ∈ W 1,2 (I). From (1.75) we deduce that w(α ) → 0 almost everywhere in I. Consequently, α → 0 almost everywhere in I. By Egoroff's theorem, for any > 0 there exists I ⊂ I such that |I | < and such that α → 0 uniformly on I \ I . For any δ > 0, thus we have, 1.3. Limit of the damage energy 1 -δ ≤ a(α ) on I \ I , for all and small enough, so that, I\I (1 -δ) (u ) 2 dx ≤ I\I a(α ) (u ) 2 dx ≤ I a(α ) (u ) 2 dx (1.76) Since u → u in W 1,2 (I) , and taking the lim inf on both sides, one gets, (1 -δ) 2 I\I (u ) 2 dx ≤ lim inf →0 1 2 I a(α ) (u ) 2 dx (1.77) we obtain the desired inequality (1.73) by letting → 0 and δ → 0. To prove the second assertion (1.74), we first show that lim →0 sup x∈I α = 1, proceeding by contradiction. Suppose there exists δ > 0 such that α < 1δ on I. Then, I a(1 -δ) (u ) 2 dx ≤ I a(α ) (u ) 2 dx Taking the lim inf on both sides and using (1.75) , we get that, lim inf →0 I (u ) 2 dx ≤ C a(1 -δ) So u is uniformly bounded in W 1,2 (I), and therefore u ∈ W 1,2 (I), which contradicts our hypothesis. Reasoning as before, we have that α → 0 almost everywhere in I. Proceeding the same way on the interval (b , c ), one gets that, lim inf →0 G c 4c w I w(α ) + (α ) 2 dx ≥ G c which is (1.74). In order to obtain (1.72), we apply (1.74) on arbitrary small intervals centered around each points in the jump set of u and (1.73) on each remaining intervals in I. Recovery sequence for the Γ-limit in 1d The construction of the recovery sequence is more instructive. Given (u, α) we need to buid a sequence (u , α ) such that lim sup F (u , α ) ≤ F(u, α). Proof. If F (u, α) = ∞, we can simply take u = u and α = α, so that we can safely assume that F(u, α) < ∞. As in the lower inequality, we consider the area near discontinuity points of u and away from them separately. Let (u, α) be given, consider an open interval I ⊂ R and a point x 0 ∈ J(u) ∩ I. Without loss of generality, we can assume that x 0 = 0 and I = (-δ, δ) for some δ > 0 . The construction of the recovery sequence is composed of two parts, first the recovery sequence for the damage, then one for the displacement. The optimal damage profile obtained in the Lemma 2, directly gives, lim sup →0 G c 4c w δ -δ w(α ) + (α ) 2 dx ≤ G c , (1.81) this is the recovery sequence for the damage. Now, let's focus on the recovery sequence for the bulk term. We define b and u (x) :=    x b u(x) if -b ≤ x ≤ b u(x) otherwise (1.82) Since a(α ) ≤ 1, we get that, -b -δ a(α ) (u ) 2 dx ≤ -b -δ (u ) 2 dx (1. a(α ) (u ) 2 dx ≤ b -b (u ) 2 dx ≤ b -b u b + xu b 2 dx ≤ 2 b -b u b 2 dx + 2 b -b xu b 2 dx ≤ 2 b 2 b -b |u| 2 dx + 2 b -b (u ) 2 dx (1.85) Since |u| ≤ M , the first term vanish when b → 0. Combining (1.83),(1.85) and (1.84). Then, taking the lim sup on both sides and using I |u | 2 dx < ∞, we get that, lim sup →0 1 2 δ -δ a(α ) (u ) 2 dx ≤ 1 2 δ -δ (u ) 2 dx (1.86) Finally combining (1.81) and (1.86), one obtains lim sup →0 δ -δ 1 2 a(α ) (u ) 2 + G c 4c w δ -δ w(α ) + (α ) 2 dx ≤ 1 2 δ -δ (u ) 2 dx + G c (1.87) For the final construction of the recovery sequence, notice that we are free to assume that #(J(u)) is finite and chose δ ≤ inf{|x ix j |/2 s.t. x i , x j ∈ J(u), x i = x j }. 
For each x i ∈ J(u), we define I i = (x iδ, x i + δ) and use the construction above on each I i whereas on I \ I i we chose u = u and α linear and continuous at the end points of the I i . With this construction, is easy to see that α → 1 uniformly in I \ I i and that, lim sup →0 I\ I i 1 2 a(α )(u ) 2 dx ≤ I (u ) 2 dx, (1.88) and, lim sup →0 I\ I i w(α ) + (α ) 2 dx = 0 (1.89) Altogether, we obtain the upper estimate for the Γ-limit for pairs (u, 1) of finite energy, i.e. lim sup →0 F (u , α ) ≤ F (u , 1) (1.90) Extension to higher dimensions To extend the Γ-limit to higher dimensions the lower inequality part is technical and is not developed here. But, the idea is to use Fubini's theorem, to build higher dimension by taking 1d slices of the domain, and use the lower continuity on each section see [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF]. The recovery sequence is more intuitive, a possible construction is to consider a smooth Γ ⊂ Ω and compute the distance to the crack J(u), such that, d(x) = dist(x, J(u)) (1.91) and let the volume of the region bounded by p-level set of d, such that, s(y) = |{x ∈ R n ; d(x) ≤ y}| (1.92) Figure 1.2: Iso distance to the crack J(u) for the level set b and δ Following [START_REF] Evans | Measure theory and fine properties of functions[END_REF][START_REF] Evans | On the partial regularity of energy-minimizing, areapreserving maps[END_REF], the co-area formula from Federer [START_REF] Federer | Geometric measure theory[END_REF] is, Ω f (x) ∇g(x) dx = +∞ -∞ g -1 (y) f (x)dH n-1 (x) dy (1.93) In particular, taking g(x) = d(x) which is 1-Lipschitz, i.e. ∇d(x) = 1 almost everywhere. We get surface s(y), s(y) = s(y) ∇d(x) dx = y 0 H n-1 ({x; d(x) = t})dt (1.94) and s (y) = H n-1 ({x; d(x) = y}) (1.95) In particular, s (0) = lim y→0 s(y) y = 2H n-1 (J(u)) (1.96) Limit of the damage energy Consider the damage, α (d(x)) :=      1 if d(x) ≤ b γ (d(x)) if b ≤ d(x) ≤ δ 0 otherwise (1.97) The surface energy term is, Ω w(α ) + |∇α | 2 dx = 1 d(x)≤b dx + b ≤d(x)≤δ w(α (d(x))) + |∇α (d(x))| 2 dx (1.98) The first integral term, is the surface bounded by the iso-contour distant b from the crack, i.e s(b ) = d(x)≤b dx = b 0 H n-1 ({x; d(x) = y}) dy (1. ≤ δ/ 0 w(α (x )) + α (x ) 2 s (x ) dx (1.102) Passing the limit → 0 and using the Remark 1 on the optimal profile invariance, we get, lim sup →0 G c 4c w Ω w(α (x)) + |∇α (x)| 2 dx ≤ G c H n-1 (J(u)) (1.103) For the bulk term, consider the displacement, u (x) :=    d(x) b u(x) if d(x) ≤ b u(x) otherwise (1.104) Similarly to the 1d, one gets, lim sup →0 Ω 1 2 a(α )(∇u ) 2 dx ≤ Ω 1 2 (∇u ) 2 dx (1.105) Therefore, lim sup →0 Ω 1 2 a(α )(∇u ) 2 dx + G c 4c w Ω w(α + |∇α | 2 dx ≤ Ω 1 2 (∇u ) 2 dx + G c H n-1 (J(u)) (1.106) Numerical implementation In a view to numerically implement gradient damage models, it is common to consider time and space discretization. Let's first focus on the time-discrete evolution, by considering a time interval [0, T ] subdivided into (N + 1) steps, such that, 0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T . At any step i, the sets of admissible displacement and damage fields C i and D i are, For any i find (u i , α i ) ∈ (C i , D i ) that satisfies the discrete evolution by local minimizer if the following hold: i. Initial condition α t 0 = α 0 and u t 0 = u 0 ii. 
For some C i := u ∈ H 1 (Ω) : u = ūi on ∂ D Ω D i := β ∈ H 1 (Ω) : α i-1 (x) ≤ β ≤ 1, ∀x ∈ Ω , (1.107 h i > 0, find (u i , α i ) ∈ (C i , D i ), such that, (v, β) -(u i , α i ) ≤ h i , E (u i , α i ) ≤ E (v, β) (1.108) where, E (u, α) = Ω 1 2 a(α)Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c 4c w Ω w(α) + |∇α| 2 dx (1.109) One observes that our time-discretization evolution do not enforce energy balance. Since a(α) and w(α) are convex, the total energy E (u, α) is separately convex with respect to u and α, but that is not convex. Hence, a proposed alternate minimization algorithm guarantees to converge to a critical point of the energy satisfying the irreversibility condition [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Burke | An adaptive finite element approximation of a variational model of brittle fracture[END_REF]. The idea is for each time-step t i , we minimize the problem with respect to any kinematic admissible u for a given α, then, fixed u and minimize E (u, α) with respect to α subject to the irreversibility α i ≥ α i-1 , repeat the procedure until the variation of the damage is small. This gives the following algorithm see Algorithm 1, where δ α is a fixed tolerance parameter. For the space discretization of E (u, α), we use the finite element methods considering linear Lagrange elements for u and α. To solve the elastic problem preconditioned conjugate gradient solvers is employed, and the constraint minimization with respect to the damage is implemented using the variational inequality solvers provided by PETSc [START_REF] Balay | PETSc Web page[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF]. All computations were performed using the open source mef902 . Due to the non-convexity of E , solution satisfying irreversibility and stationarity might not be unique. For remainder solutions, a study selection can be performed. For instance looking at solutions which satisfy the energy balance, or selecting displacement and damage fields which are continuous in time. Another way is to compare results with all previous one in order to avoid local minimizers solution (see [START_REF] Bourdin | The Variational Approach to Fracture[END_REF][START_REF] Bourdin | The variational formulation of brittle fracture: numerical implementation and extensions[END_REF] for more details on the backtracking idea). This method will select global minimizers from the set of solutions. 1: Let j = 0 and α 0 := α i-1 2: repeat 3: Compute the equilibrium, u j+1 := argmin u∈C i E (u, α j ) 4: Compute the damage, α j+1 := argmin α∈D i α≥α i-1 E (u j+1 , α) 5: j := j + 1 6: until α j -α j-1 L ∞ ≤ δ α 7: Set, u i := u j and α i := α j Conclusion The strength of the phase fields models to brittle fracture is the variational structure of the model conceived as an approximation of Griffith and its evolution based on three principles: irreversibility of the damage, stability and energy balance of the total energy. A fundamental property of the model is the maximum admissible stress illustrated in the one dimensional example. This also constrained the damage thickness size, since it governs . Numerically the fracture path is obtained by alternate searching of the damage trajectory which decreases the total energy and the elastic solution of the problem. 
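To make the alternate minimization strategy of Algorithm 1 more concrete, the following is a small, self-contained numerical sketch for the one-dimensional traction bar of Section 1.2 with the AT2 model (a(α) = (1-α)², w(α) = α², c_w = 1/2). It is only an illustration written for this presentation, under simplifying assumptions (piecewise-constant strain and damage per element, scipy's bound-constrained L-BFGS-B solver for the damage step, irreversibility enforced as a lower bound); it is not the mef90/PETSc implementation used for the simulations reported in this work.

```python
import numpy as np
from scipy.optimize import minimize

# 1D traction bar, AT2 model: a(alpha) = (1-alpha)^2, w(alpha) = alpha^2, c_w = 1/2.
L, N = 1.0, 100                 # bar length and number of elements
h = L / N
E, Gc, ell = 1.0, 1.0, 0.05     # Young's modulus, toughness, internal length (arbitrary units)
k_res, c_w = 1e-6, 0.5          # residual stiffness (conditioning) and AT2 normalization

def elastic_step(alpha, t):
    """Equilibrium at fixed damage: in 1D the stress is uniform along the bar,
    so sigma * sum(h / (a E)) equals the imposed elongation t * L."""
    a = (1.0 - alpha) ** 2 + k_res
    sigma = t * L / np.sum(h / (a * E))
    strain = sigma / (a * E)          # element-wise strain
    return strain, sigma

def damage_energy(alpha, strain):
    """Total energy at fixed strain field, and its gradient with respect to alpha."""
    a = (1.0 - alpha) ** 2 + k_res
    d_alpha = np.diff(alpha)
    energy = (0.5 * np.sum(a * E * strain ** 2) * h
              + Gc / (4 * c_w) * (np.sum(alpha ** 2) * h / ell
                                  + ell * np.sum(d_alpha ** 2) / h))
    grad = (-(1.0 - alpha) * E * strain ** 2 * h
            + Gc / (4 * c_w) * 2.0 * alpha * h / ell)
    grad[:-1] += Gc / (4 * c_w) * ell * (-2.0 * d_alpha / h)
    grad[1:]  += Gc / (4 * c_w) * ell * ( 2.0 * d_alpha / h)
    return energy, grad

def damage_step(strain, alpha_lb, alpha_start):
    """Constrained minimization w.r.t. alpha; irreversibility enters as a lower bound."""
    res = minimize(damage_energy, alpha_start, args=(strain,), jac=True,
                   bounds=[(lb, 1.0) for lb in alpha_lb], method='L-BFGS-B')
    return res.x

# Quasi-static loading with alternate minimization (Algorithm 1)
alpha_old = np.zeros(N)
for t in np.linspace(0.0, 1.2, 25):
    alpha = alpha_old.copy()
    for _ in range(200):
        strain, sigma = elastic_step(alpha, t)
        alpha_new = damage_step(strain, alpha_old, alpha)
        if np.max(np.abs(alpha_new - alpha)) < 1e-4:
            alpha = alpha_new
            break
        alpha = alpha_new
    alpha_old = alpha
    print(f"t = {t:.3f}   sigma = {sigma:.4f}   max(alpha) = {alpha.max():.3f}")

# For comparison, the homogeneous critical stress of AT2 derived in Section 1.2 is
# sigma_c = (3*sqrt(3)/16) * sqrt(E*Gc/ell), i.e. roughly 0.325*sqrt(E*Gc/ell).
```

At each loading step the displacement problem is solved in closed form (the stress is uniform along the bar), then the energy is minimized with respect to the damage at fixed strain, and the two steps are repeated until the damage field stagnates, exactly as in Algorithm 1.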
Appendix A Theorem 2 (Ambrosio's compactness and lower semicontinuity on SBV) Let (f n ) n be a sequence of functions in SBV (Ω) such that there exists non-negative constants C 1 , C 2 and C 3 such that, i. f n is uniformly bounded in L ∞ (Ω) ii. ∇f n is uniformly bounded in L q (Ω, R n ) with q > 1 iii. H n-1 (J(f n )) is uniformly bounded Then, there exists f ∈ SBV (Ω) and a subsequence f k(n) such that, i. f k(n) → f strongly in L p (Ω), for all p < ∞ ii. ∇f k(n) → ∇f weakly in L q (Ω; R n ) iii. H n-1 (J(f )) ≤ lim inf n H n-1 (J(f n )) Theorem 3 (Fundamental theorem of Γ-convergence) If E Γ -converges to E, u is a minimizer of E , and (u ) is compact in X, then there exists u ∈ X such that u is a minimizer of E, u → u, and E (u ) → E(u). Definition 6 (Γ-convergence) Let E : X → R and E : X → R, where X is a topological space. Then E Γ converges to E if the following two conditions hold for any u ∈ X i. Lower semi continuity inequality: for every equence (u ) ∈ Xsuch that u → u E(u) ≤ lim inf →0 E (u ), ii. Existence of a recovery sequence: there exists a sequence (u ) ∈ X with u → u such that lim sup →0 E (u ) ≤ E(u). Chapter 2 Crack nucleation in variational phase-field models of brittle fracture Despite its many successes, Griffith's theory of brittle fracture [START_REF] Griffith | The phenomena of rupture and flow in solids[END_REF] and its heir, Linear Elastic Fracture Mechanics (LEFM), still faces many challenges. In order to identify a crack path, additional branching criteria whose choice is still unsettled have to be considered. Accounting for scale effects in LEFM is also challenging, as illustrated by the following example: consider a reference structure of unit size rescaled by a factor L. The critical loading at the onset of fracture scales then as 1/ √ L, leading to a infinite nucleation load as the structure size approaches 0, which is inconsistent with experimental observation for small structures [START_REF] Bažant | Scaling of quasibrittle fracture: asymptotic analysis[END_REF][START_REF] Issa | Size effects in concrete fracture: Part I, experimental setup and observations[END_REF][START_REF] Chudnovsky | Slow crack growth, its modeling and crack-layer approach: A review[END_REF]. It is well accepted that this discrepancy is due to the lack of a critical stress (or a critical lengthscale) in Griffith's theory. Yet, augmenting LEFM to account for a critical stress is very challenging. In essence, the idea of material strength is incompatible with the concept of elastic energy release rate near stress singularity, the pillar of Griffith-like theories, as it would imply crack nucleation under an infinitesimal loading. Furthermore, a nucleation criterion based solely on pointwise maximum stress will be unable to handle crack formation in a body subject to a uniform stress distribution. Many approaches have been proposed to provide models capable of addressing the aforementioned issues. 
Some propose to stray from Griffith's fundamental hypotheses by incorporating cohesive fracture energies [START_REF] Ortiz | Finite-deformation irreversible cohesive elements for three-dimensional crack-propagation analysis[END_REF][START_REF] Del Piero | A diffuse cohesive energy approach to fracture and plasticity: the one-dimensional case[END_REF][START_REF] De Borst | Cohesive-zone models, higher-order continuum theories and reliability methods for computational failure analysis[END_REF][START_REF] Charlotte | Initiation of cracks with cohesive force models: a variational approach[END_REF] or material non-linearities [START_REF] Gou | Modeling fracture in the context of a strain-limiting theory of elasticity: A single plane-strain crack[END_REF]. Others have proposed dual criteria involving both the elastic energy release rate and the material strength, such as [START_REF] Leguillon | Strength or toughness? A criterion for crack onset at a notch[END_REF], for instance. Models based on the peridynamics theory [START_REF] Silling | Reformulation of elasticity theory for discontinuities and long-range forces[END_REF] may offer an alternative way to handle these issues, but to our knowledge they still fall short of providing robust quantitative predictions at the structural scale. Francfort and Marigo [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF] set out to devise a formulation of brittle fracture based solely on Griffith's idea of competition between elastic and fracture energy, yet capable of handling the issues of crack path and crack nucleation. However, as already pointed out in [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], their model inherits a fundamental limitation of Griffith's theory and LEFM: the lack of an internal length scale and of a maximum allowable stress. Amongst the many numerical methods originally devised for the implementation of the Francfort-Marigo model [START_REF] Bourdin | Implementation of an adaptive finite-element approximation of the Mumford-Shah functional[END_REF][START_REF] Negri | Numerical minimization of the Mumford-Shah functional[END_REF][START_REF] Fraternali | Free discontinuity finite element models in two-dimensions for inplane crack problems[END_REF][START_REF] Schmidt | Eigenfracture: An eigendeformation approach to variational fracture[END_REF], Ambrosio-Tortorelli regularizations [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], originally introduced in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF], have become ubiquitous. They are known nowadays as phase-field models of fracture, and share several common points with approaches derived from Ginzburg-Landau models for phase transitions [START_REF] Karma | Phase-field model of mode III dynamic fracture[END_REF].
They have been applied to a wide variety of fracture problems including fracture of ferro-magnetic and piezo-electric materials [START_REF] Abdollahi | Phase-field modeling of crack propagation in piezoelectric and ferroelectric materials with different electromechanical crack conditions[END_REF][START_REF] Wilson | A phase-field model for fracture in piezoelectric ceramics[END_REF], thermal and drying cracks [START_REF] Maurini | Crack patterns obtained by unidirectional drying of a colloidal suspension in a capillary tube: experiments and numerical simulations using a two-dimensional variational approach[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF], or hydraulic fracturing [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF][START_REF] Wheeler | An augmented-lagrangian method for the phase-field approach for pressurized fractures[END_REF][START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF][START_REF] Wilson | Phase-field modeling of hydraulic fracture[END_REF] to name a few. They have been expended to account for dynamic effects [START_REF] Larsen | Existence of solutions to a regularized model of dynamic fracture[END_REF][START_REF] Bourdin | A time-discrete model for dynamic fracture based on crack regularization[END_REF][START_REF] Borden | A phase-field description of dynamic brittle fracture[END_REF][START_REF] Hofacker | A phase field model of dynamic fracture: Robust field updates for the analysis of complex crack patterns[END_REF], ductile behavior [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Miehe | Phase field modeling of fracture in multi-physics problems. Part II. coupled brittle-to-ductile failure criteria and crack propagation in thermo-elastic-plastic solids[END_REF][START_REF] Ambati | Phase-field modeling of ductile fracture[END_REF], cohesive effects [START_REF] Crismale | Viscous approximation of quasistatic evolutions for a coupled elastoplastic-damage model[END_REF][START_REF] Conti | Phase field approximation of cohesive fracture models[END_REF][START_REF] Freddi | Numerical insight of a variational smeared approach to cohesive fracture[END_REF], large deformations [START_REF] Ambati | A phase-field model for ductile fracture at finite strains and its experimental verification[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. a variational gradient-extended plasticity-damage theory[END_REF][START_REF] Borden | A phasefield formulation for fracture in ductile materials: Finite deformation balance law derivation, plastic degradation, and stress triaxiality effects[END_REF], or anisotropy [START_REF] Li | Phase-field modeling and simulation of fracture in brittle materials with strongly anisotropic surface energy[END_REF], for instance. Although phase-field models were originally conceived as approximations of Francfort and Marigo's variational approach to fracture in the vanishing limit of their regularization parameter, a growing body of literature is concerned with their links with gradient damage models [START_REF] Frémond | Damage, gradient of damage and principle of virtual power[END_REF][START_REF] Lorentz | Analysis of non-local models through energetic formulations[END_REF]. 
In this setting, the regularization parameter is kept fixed and interpreted as a material's internal length [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF][START_REF] Freddi | Regularized variational theories of fracture: A unified approach[END_REF][START_REF] Del | A variational approach to fracture and other inelastic phenomena[END_REF]. In particular, [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF] proposed an evolution principle for an Ambrosio-Tortorelli-like energy based on irreversibility, stability and energy balance. This approach, which we refer to as variational phase-field models, introduces a critical stress proportional to 1/√ℓ. As observed in [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF][START_REF] Nguyen | On the choice of parameters in the phase field method for simulating crack initiation with experimental validation[END_REF], it can potentially reconcile stress and toughness criteria for crack nucleation, recover pertinent size effects at small and large length scales, and provide a robust and relatively simple approach to model crack propagation in complex two- and three-dimensional settings. However, the few studies providing experimental verification [START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF][START_REF] Nguyen | On the choice of parameters in the phase field method for simulating crack initiation with experimental validation[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF] are still insufficient to fully support this conjecture. The goal of this chapter is precisely to provide such evidence, focusing on nucleation and size effects for mode-I cracks. We provide quantitative comparisons of nucleation loads near stress concentrations and singularities with published experimental results for a range of materials. We show that variational phase-field models can reconcile strength and toughness thresholds and account for scale effects at the structural and the material length scales. In passing, we leverage the predictive power of our approach to propose a new way to measure a material's tensile strength from the nucleation load of a crack near a stress concentration or a weak singularity. In this study, we focus solely on the identification of the critical stress at the first crack nucleation event and are not concerned with the post-critical fracture behavior. The chapter is organized as follows: in Section 2.1, we introduce variational phase-field models and recall some of their properties. Section 2.2 focuses on the links between stress singularities or concentrations and crack nucleation in these models. We provide validation and verification results for nucleation induced by stress singularities using V-shaped notches, and by stress concentrations using U-shaped notches. Section 2.3 is concerned with shape and size effects.
We investigate the role of the internal length on nucleation near a defect, focusing on an elliptical cavity and a mode-I crack, and discussing scale effects at the material and structural length scales. Chapter 2. Crack nucleation in variational phase-field models of brittle fracture Variational phase-field models We start by recalling some important properties of variational phase-field models, focussing first on their construction as approximation method of Francfort and Marigo's variational approach to fracture, then on their alternative formulation and interpretation as gradient-damage models. Regularization of the Francfort-Marigo fracture energy Consider a perfectly brittle material with Hooke's law A and critical elastic energy release rate G c occupying a region Ω ⊂ R n , subject to a time dependent boundary displacement ū(t) on a part ∂ D Ω of its boundary and stress-free on the remainder ∂ N Ω. In the variational approach to fracture, the quasi-static equilibrium displacement u i and crack set Γ i at a given discrete time step t i are given by the minimization problem (see also [START_REF] Bourdin | The variational approach to fracture[END_REF]) (u i , Γ i ) = argmin u=ū i on ∂ D Ω Γ⊃Γ i-1 E(u, Γ) := Ω\Γ 1 2 Ae(u) • e(u) dx + G c H n-1 (Γ ∩ Ω \ ∂ N Ω), (2.1) where H n-1 (Γ) denotes the Hausdorff n -1-dimensional measure of the unknown crack Γ, i.e. its aggregate length in two dimensions or surface area in three dimensions, and e(u) := 1 2 (∇u + ∇ T u) denotes the symmetrized gradient of u. Because in (2.1) the crack geometry Γ is unknown, special numerical methods had to be crafted. Various approaches based for instance on adaptive or discontinuous finite elements were introduced [START_REF] Bourdin | Implementation of an adaptive finite-element approximation of the Mumford-Shah functional[END_REF][START_REF] Giacomini | A discontinuous finite element approximation of quasi-static growth of brittle fractures[END_REF][START_REF] Fraternali | Free discontinuity finite element models in two-dimensions for inplane crack problems[END_REF]. Variational phase-field methods, take their roots in Ambrosio and Tortorelli's regularization of the Mumford-Shah problem in image processing [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], adapted to brittle fracture in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF]. In this framework, a regularized energy E depending on a regularization length > 0 and a "phase-field" variable α taking its values in [0, 1] is introduced. A broad class of such functionals was introduced in [START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF]. They are E (u, α) = Ω a(α) + η 2 Ae(u) • e(u) dx + G c 4c w Ω w(α) + |∇α| 2 dx, (2.2) where a and w are continuous monotonic functions such that a(0) = 1, a(1) = 0, w(0) = 0, and w(1) = 1, η = o( ), and c w := 1 0 w(s) ds is a normalization parameter. The approximation of E by E takes place with the framework of Γ-convergence (see [START_REF] Maso | An introduction to Γ-convergence[END_REF][START_REF] Braides | Gamma-convergence for Beginners[END_REF] for instance). More precisely, if E Γ-converges to E, then the global minimizers of E converge to that of E. 
The Γ-convergence of a broad class of energies, including the ones above was achieved with various degrees of refinement going from static scalar elasticity to time discrete and time continuous quasi-static evolution linearized elasticity, and their finite element discretization [START_REF] Bellettini | Discrete approximation of a free discontinuity problem[END_REF][START_REF] Bourdin | Image segmentation with a finite element method[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF][START_REF] Giacomini | A discontinuous finite element approximation of quasi-static growth of brittle fractures[END_REF][START_REF] Chambolle | An approximation result for special functions with bounded variations[END_REF][START_REF] Chambolle | Addendum to "An Approximation Result for Special Functions with Bounded Deformation[END_REF][START_REF] Giacomini | Ambrosio-Tortorelli approximation of quasi-static evolution of brittle fractures[END_REF][START_REF] Burke | An adaptive finite element approximation of a variational model of brittle fracture[END_REF][START_REF] Burke | An adaptive finite element approximation of a generalized Ambrosio-Tortorelli functional[END_REF][START_REF] Iurlano | A density result for gsbd and its application to the approximation of brittle fracture energies[END_REF]. Throughout this chapter, we focus on two specific models: E (u, α) = Ω (1 -α) 2 + η 2 Ae(u) • e(u) dx + G c 2 Ω α 2 + |∇α| 2 dx, (AT 2 ) 2.1. Variational phase-field models introduced in [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] for the Mumford-Shah problem and in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] for brittle fracture, and E (u, α) = Ω (1 -α) 2 + η 2 Ae(u) • e(u) dx + 3G c 8 Ω α + |∇α| 2 dx (AT 1 ) used in [START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. The "surfing" problem introduced in [START_REF] Hossain | Effective toughness of heterogeneous media[END_REF] consists in applying a translating boundary displacement on ∂Ω given by ū(x, y) = ūI (x-V t, y), where ūI denotes the asymptotic farfield displacement field associated with a mode-I crack along the x-axis with tip at (0, 0), V is a prescribed loading "velocity", and t a loading parameter ("time"). ] with an initial crack Γ 0 = [0, l 0 ] × {0} for several values of . The AT 1 model is used, assuming plane stress conditions, and the mesh size h is adjusted so that /h = 5, keeping the "effective" numerical toughness G eff := G c 1 + h 4cw fixed (see [START_REF] Bourdin | The variational approach to fracture[END_REF]). The Poisson ratio is ν = 0.3, the Young's modulus is E = 1, the fracture toughness is G c = 1.5, and the loading rate V = 4. As expected, after a transition stage, the crack length depends linearly on the loading parameter with slope 3.99, 4.00 and 4.01 for =0.1, 0.05 and 0.025 respectively. The elastic energy release rate G, computed using the G θ method [START_REF] Destuynder | Sur une interprétation mathématique de l'intégrale de Rice en théorie de la rupture fragile[END_REF][START_REF] Sicsic | From gradient damage laws to Griffith's theory of crack propagation[END_REF][START_REF] Li | Gradient damage modeling of brittle fracture in an explicit dynamics context[END_REF] is very close to G eff . 
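For completeness, the translating boundary datum ū_I used in the surfing problem can be evaluated from the classical first-order (Williams) expansion of the mode-I displacement field. The sketch below is only indicative: the normalization of K_I, chosen here so that the imposed far field corresponds to G = G_c, and the function names are assumptions made for illustration, not necessarily those used to produce Figure 2.1.

import numpy as np

def mode_I_displacement(x, y, KI, E, nu, plane_strain=False):
    # Leading-order mode-I displacement field for a crack along the negative
    # x-axis with its tip at the origin (first term of the Williams expansion).
    mu = E / (2.0 * (1.0 + nu))                                  # shear modulus
    kappa = 3.0 - 4.0 * nu if plane_strain else (3.0 - nu) / (1.0 + nu)
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    c = KI / (2.0 * mu) * np.sqrt(r / (2.0 * np.pi))
    return (c * np.cos(theta / 2.0) * (kappa - np.cos(theta)),   # u_x
            c * np.sin(theta / 2.0) * (kappa - np.cos(theta)))   # u_y

def surfing_bc(x, y, t, V=4.0, Gc=1.5, E=1.0, nu=0.3, plane_strain=False):
    # u(x, y, t) = u_I(x - V t, y), with K_I = sqrt(Gc E') as an illustrative choice.
    Ep = E / (1.0 - nu**2) if plane_strain else E
    return mode_I_displacement(x - V * t, y, np.sqrt(Gc * Ep), E, nu, plane_strain)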
Even though Γ-convergence only mandates that the elastic energy release rate in the regularized energy converges to that of Griffith as → 0, we observe that as long as is "compatible" with the discretization size and domain geometry, its influence on crack propagation is insignificant. Similar observations were reported in [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF][START_REF] Zhang | Numerical evaluation of the phasefield model for brittle fracture with emphasis on the length scale[END_REF][START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF]. Figure 2.1(right) repeats the same experiment for a curve propagating along a circular path. Here, the boundary displacement is given by Muskhelishvili's exact solution for a crack propagating in mode-I along a circular path [START_REF] Muskhelishvili | Some Basic Problems of the Mathematical Theory of Elasticity: Fundamental Equations, Plane Theory of Elasticity, Torsion, and Bending (translated from Russian)[END_REF]. The Young's modulus, fracture toughness, and loading rate are set to 1. Again, we see that even for a fixed regularization length, the crack obeys Griffith's criterion. Chapter 2. Crack nucleation in variational phase-field models of brittle fracture When crack nucleation is involved, the picture is considerably different. Consider a one-dimensional domain of length L, fixed at one end and submitted to an applied displacement ū = e L at the other end. For the lack of an elastic singularity, LEFM is incapable of predicting crack nucleation here, and predicts a structure capable of supporting arbitrarily large loads without failing. A quick calculation shows that the global minimizer of (2.1) corresponds to an uncracked elastic solution if e < e c := 2Gc EL , while at e = e c , a single crack nucleates at an arbitrary location (see [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF]). The failure stress is σ c = 2G c E/L, which is consistent with the scaling law σ c = O 1/ √ L mentioned in the introduction. The uncracked configuration is always a stable local minimizer of (2.1), so that if local minimization of (2.1) is considered, nucleation never takes place. Just as before, one can argue that for the lack of a critical stress, an evolution governed by the generalized Griffith energy (2.1) does not properly account for nucleation and scaling laws. When performing global minimization of (2.2) using the backtracking algorithm of [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF] for instance, a single crack nucleates at an -dependent load. As predicted by the Γ-convergence of E to E, the critical stress at nucleation converges to 2G c E/L as → 0. Local minimization of (2.2) using the alternate minimizations algorithm of [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF], or presumably any gradient-based monotonically decreasing scheme, leads to the nucleation of a single crack at a critical load e c , associated with a critical stress σ c = O G c E/ , as described in [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF] for example. 
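The "quick calculation" invoked above for the uniaxial bar amounts to comparing, per unit cross-sectional area, the energies of the two candidate global minimizers of (2.1): the uncracked elastic state, with energy (1/2) E e^2 L and no surface energy, and the broken state, with zero elastic energy and surface energy G_c. The broken state becomes energetically favorable as soon as (1/2) E e^2 L ≥ G_c, that is for e ≥ e_c = sqrt(2 G_c / (E L)), with corresponding failure stress σ_c = E e_c = sqrt(2 G_c E / L).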
In the limit of vanishing , local and global minimization of (2.2) inherit therefore the weaknesses of Griffith-like theories when dealing with scaling properties and crack nucleation. Variational phase-field models as gradient damage models More recent works have sought to leverage the link between σ c and . Ambrosio-Tortorelli functionals are then seen as the free energy of a gradient damage model [START_REF] Frémond | Damage, gradient of damage and principle of virtual power[END_REF][START_REF] Lorentz | Analysis of non-local models through energetic formulations[END_REF][START_REF] Benallal | Bifurcation and stability issues in gradient theories with softening[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF] where α plays the role of a scalar damage field. In [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], a thorough investigation of a one-dimensional tension problem led to interpreting as a material's internal or characteristic length linked to a material's tensile strength. An overview of this latter approach, which is the one adopted in the rest of this work, is given below. In all that follows, we focus on a time-discrete evolution but refer the reader to [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF] for a time-continuous formulation which can be justified within the framework of generalized standard materials [START_REF] Halphen | Sur les matériaux standard généralisés[END_REF] and rate-independent processes [START_REF] Mielke | Evolution of rate-independent systems[END_REF]. At any time step i > 1, the sets of admissible displacement and damage fields C i and D i , equipped with their natural H 1 norm, are C i = u ∈ H 1 (Ω) : u = ūi on ∂ D Ω , D i = β ∈ H 1 (Ω) : α i-1 (x) ≤ β(x) ≤ 1, ∀x ∈ Ω , where the constraint α i-1 (x) ≤ β(x) ≤ 1 in the definition of D i mandates that the damage be an increasing function of time, accounting for the irreversible nature of the 2.1. Variational phase-field models damage process. The damage and displacement fields (u i , α i ) are then local minimizers of the energy E , i.e. there exists h i > 0 such that ∀(v, β) ∈ C i × D i such that (v, β) -(u i , α i ) ≤ h i , E (u i , α i ) ≤ E (v, β), (2.3) where • denotes the natural H 1 norm of C i × D i . We briefly summarize the solution of the uniaxial tension of a homogeneous bar [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], referring the reader to the recent review [START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF] for further details: As one increases the applied strain, the damage field remains 0 and the stress field constant until it reaches the elastic limit σ e = G c E c w w (0) 2s (0) . (2.4) where E is the Young modulus of the undamaged material, and s(α) = 1/a(α). 
If the applied displacement is increased further, the damage field increases but remains spatially constant. Stress hardening is observed until peak stress σ c , followed by stress softening. A stability analysis shows that for long enough domains (i.e. when L ), the homogeneous solution is never stable in the stress softening phase, and that a snapback to a fully localized solution such that max x∈(0,L) α(x) = 1 is observed. The profile of the localized solution and the width D of the localization can be derived explicitly from the functions a and w. With the choice of normalization of (2.2), the surface energy associated to the fully localized solution is exactly G c and its elastic energy is 0, so that the overall response of the bar is that of a brittle material with toughness G c and strength σ c . Knowing the material's toughness G c and the Young's modulus E, one can then adjust in such a way that the peak stress σ c matches the nominal material's strength. Let us denote by ch = G c E σ 2 c = K 2 Ic σ 2 c (2.5) the classical material's characteristic length (see [START_REF] Rice | The mechanics of earthquake rupture[END_REF][START_REF] Falk | A critical evaluation of cohesive zone models of dynamic fracture[END_REF], for instance), where E = E in three dimensions and in plane stress, or E = E/(1ν 2 ) in plane strain, and K Ic = √ G c E is the mode-I critical stress intensity factor. The identification above gives 1 := 3 8 ch ; 2 := 27 256 ch , (2.6) for the AT 1 and AT 2 models, respectively. Table 2.1 summarizes the specific properties of the AT 1 and AT 2 models. The AT 1 model has some key conceptual and practical advantages over the AT 2 model used in previous works, which were leveraged in [START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF] for instance: It has a non-zero elastic limit, preventing diffuse damage at small loading. The length localization band D is finite so that equivalence with Griffith energy is obtained even for a finite value of , and not only in the limit of → 0, as predicted by Γ-convergence [START_REF] Sicsic | From gradient damage laws to Griffith's theory of crack propagation[END_REF]. By remaining quadratic in the α and u variables, its numerical implementation using alternate minimizations originally introduced in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] is very efficient. Chapter 2. Crack nucleation in variational phase-field models of brittle fracture Model w(α) a(α) c w σ e σ c D ch AT 1 α (1 -α) 2 2 3 3GcE 8 3GcE 8 4 8 3 AT 2 α 2 (1 -α) 2 1 2 0 3 16 3GcE ∞ 256 27 Table 2.1: Properties of the gradient damage models considered in this work: the elastic limit σ e , the material strength σ c , the width of the damage band D, and the conventional material length ch defined in (2.5). We use the classical convention E = E in three dimension and in plane stress, and E = E/(1 -ν 2 ) in plane strain. In all the numerical simulations presented below, the energy (2.2) is discretized using linear Lagrange finite elements, and minimization performed by alternating minimization with respect to u and α. 
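Before turning to the solver details, note that the calibration (2.5)-(2.6) and the strength formulas of Table 2.1 are simple enough to script. The short sketch below merely restates them (plane stress unless stated otherwise, so that E' = E); the material values in the round-trip check are purely illustrative and are not taken from the experiments discussed later.

import numpy as np

def internal_length(E, nu, Gc, sigma_c, model="AT1", plane_strain=False):
    # Regularization length matching the tensile strength, eqs. (2.5)-(2.6).
    Ep = E / (1.0 - nu**2) if plane_strain else E          # E' convention of Table 2.1
    l_ch = Gc * Ep / sigma_c**2                            # characteristic length (2.5)
    return {"AT1": 3.0 / 8.0, "AT2": 27.0 / 256.0}[model] * l_ch

def peak_stress(E, nu, Gc, ell, model="AT1", plane_strain=False):
    # Strength sigma_c of the homogeneous one-dimensional response (Table 2.1).
    Ep = E / (1.0 - nu**2) if plane_strain else E
    if model == "AT1":
        return np.sqrt(3.0 * Gc * Ep / (8.0 * ell))
    return 3.0 / 16.0 * np.sqrt(3.0 * Gc * Ep / ell)       # AT2

# Round-trip check with illustrative, PMMA-like values (MPa, N/mm):
E, nu, Gc, sigma_c = 3.0e3, 0.36, 0.4, 70.0
for model in ("AT1", "AT2"):
    ell = internal_length(E, nu, Gc, sigma_c, model)
    assert abs(peak_stress(E, nu, Gc, ell, model) - sigma_c) < 1e-9 * sigma_c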
Minimization with respect to u is a simple linear problem solved using preconditioned gradient conjugated while constrained minimization with respect to α is reformulated as a variational inequality and implemented using the variational inequality solvers provided by PETSc [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF]. All computations were performed using the open source implementations mef901 and gradient-damage2 . Effect of stress concentrations The discussion above suggests that variational phase-field models, as presented in Section 2.1.2, can account for strength and toughness criteria simultaneously, on an idealized geometry. We propose to investigate this claim further by focusing on more general geometries, a V-shaped notch to illustrate nucleation near stress singularities and a Ushaped notch for stress concentrations. There is a wealth of experimental literature on crack initiation in such geometries using three-point bending (TPB), four-point bending (FPB), single or double edge notch tension (SENT and DENT) allowing us to provide qualitative validation and verification simulations of the critical load at nucleation. Initiation near a weak stress singularity: the V-notch Consider a V-shaped notch in a linear elastic isotropic homogeneous material. Let (r, θ) be the polar coordinate system emanating from the notch tip with θ = 0 corresponding to the notch symmetry axis, shown on Figure 2.2(left). Assuming that the notch lips Γ + ∪ Γ -are stress-free, the mode-I component of the singular part of the stress field in 2.2. Effect of stress concentrations plane strain is given in [START_REF] Leguillon | Computation of Singular Solutions in Elliptic Problems and Elasticity[END_REF]: σ θθ = kr λ-1 F (θ), σ rr = kr λ-1 F (θ) + (λ + 1)F (θ) λ(λ + 1) , σ rθ = -kr λ-1 F (θ) (λ + 1) , (2.7) where F (θ) = (2π) λ-1 cos((1 + λ)θ) -f (λ, ω) cos((1 -λ)θ) 1 -f (λ, ω) , (2.8) and f (λ, ω) = (1 + λ) sin((1 + λ)(π -ω)) (1 -λ) sin((1 -λ)(π -ω)) , (2.9) and the exponent of the singularity λ ∈ [1/2, 1], see (2.11) Note that this definition differs from the one often encountered in the literature by a factor (2π) λ-1 , so that when ω = 0 (i.e. when the notch degenerates into a crack), k corresponds to the mode-I stress intensity factor whereas when ω = π/2, k is the tangential stress, and that the physical dimension of [k] ≡ N/m -λ -1 is not a constant but depends on the singularity power λ. If ω < π/2 (i.e. ω > π/2), the stress field is singular at the notch tip so that a nucleation criterion based on maximum pointwise stress will predict crack nucleation for any arbitrary small loading. Yet, as long as ω > 0 (ω < π), the exponent of the singularity is sub-critical in the sense of Griffith, so that LEFM forbids crack nucleation, regardless of the magnitude of the loading. ūr = r λ E (1 -ν 2 )F (θ) + (λ + 1)[1 -νλ -ν 2 (λ + 1)]F (θ) λ 2 (λ + 1) ūθ = r λ E (1 -ν 2 )F (θ) + [2(1 + ν)λ 2 + (λ + 1)(1 -νλ -ν 2 (λ + 1)]F (θ) λ 2 (1 -λ 2 ) . (2.12) In the mode-I Pac-Man test, we apply a boundary displacement on the outer edge of the domain ∂ D Ω of the form tū on both components of u, t being a monotonically increasing loading parameter. We performed series of numerical simulations varying the notch angle ω and regularization parameter for the AT 1 and AT 2 models. 
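Two small post-processing utilities are useful in what follows: the singularity exponent λ(ω), computed here under the assumption that the characteristic equation of the traction-free V-notch is the classical Williams one, sin(2λ(π − ω)) + λ sin(2(π − ω)) = 0, and the recovery of the generalized stress intensity factor by radial averaging along the symmetry axis (cf. Figure 2.4 below). This is an illustrative sketch; the function names do not belong to any existing code.

import numpy as np
from scipy.optimize import brentq

def notch_exponent(omega):
    # Smallest mode-I exponent lambda in [1/2, 1] for a V-notch of half-opening
    # omega (radians): lambda = 1/2 for a crack, 1 for a flat edge.
    if omega <= 0.0:
        return 0.5
    if omega >= np.pi / 2.0:
        return 1.0
    a = np.pi - omega
    g = lambda lam: np.sin(2.0 * lam * a) + lam * np.sin(2.0 * a)
    return brentq(g, 0.5, 1.0)

def generalized_sif(r, sigma_tt, lam, ell):
    # Generalized stress intensity factor in the convention (2.11): average of
    # sigma_tt(r, 0) / (2 pi r)^(lambda - 1) along theta = 0, excluding r <= 2 ell.
    r, sigma_tt = np.asarray(r, float), np.asarray(sigma_tt, float)
    keep = r > 2.0 * ell
    return np.mean(sigma_tt[keep] / (2.0 * np.pi * r[keep]) ** (lam - 1.0))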
Up to a rescaling and without loss of generality, it is always possible to assume that E = 1 and G c = 1. The Poisson ratio was set to ν = 0.3. We either prescribed the value of the damage field on Γ + ∪ Γ -to 1 (we refer this to as "damaged notch conditions") or let it free ("undamaged notch conditions"). The mesh size was kept at a fixed ratio of the internal length h = /5. For "small" enough loadings, we observe an elastic or nearly elastic phase during which the damage field remains 0 or near 0 away from an area of radius o( ) near the notch tip. Then, for some loading t = t c , we observed the initiation of a "large" add-crack associated with a sudden jump of the elastic and surface energy. Figure 2.3 shows a typical mesh, the damage field immediately before and after nucleation of a macroscopic crack and the energetic signature of the nucleation event. Figure 2.4 shows that up to the critical loading, the generalized stress intensity factor can be accurately recovered by averaging σ θθ (r, 0)/(2π r) λ-1 along the symmetry axis of the domain, provided that the region r ≤ 2 be excluded. Figure 2.5(left) shows the influence of the internal length on the critical generalized stress intensity factor for a sharp notch (ω = 0.18°) for the AT 1 and AT 2 models, using damaged and undamaged notch boundary conditions on the damage field. In this case, with the normalization (2.11), the generalized stress intensity factor coincides with the standard mode-I stress intensity factor K Ic . As suggested by the surfing experiment in t := k AT c K Ic = √ G c E . As reported previously in [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF] for instance, undamaged notch conditions lead to overestimating the critical load. We speculate that this is because with undamaged notch condition, the energy barrier associated with bifurcation from an undamaged (or partially damaged) state to a fully localized state needs to be overcome. As expected, this energy barrier is larger for the AT 1 model than for the AT 2 model for which large damaged areas ahead of the notch tip are observed. For flat notches (2ω = 179.64°) as shown in Figure 2.5(right), the generalized stress intensity factor k takes the dimension of a stress, and crack nucleation is observed when k c reaches the -dependent value σ c given in Table 2.1, i.e. when σ θθ | θ=0 = σ c , as in the uniaxial tension problem. In this case the type of damage boundary condition on the notch seems to have little influence. For intermediate values of ω, we observe in Figure 2.6 that the critical generalized stress intensity factor varies smoothly and monotonically between its extreme values and remains very close to K Ic for opening angles as high as 30°, which justifies the common numerical practice of replacing initial cracks with slightly open sharp notches and damaged notch boundary conditions. See Table 2.3 for numerical data. k c /(K Ic ) ef f ω = 0.18 • AT 1 -U AT 1 -D AT 2 -U AT 2 -D 10 -1 10 0 ch 1 2 4 6 8 k c 1 -0.5 ω = 89, 82 • AT 1 -U AT 1 -D AT 2 -U AT 2 -D Validation For intermediate values 0 < 2ω < π, we focus on validation against experiments from the literature based on measurements of the generalized stress intensity factor at a V-shaped notch. 
Data from single edge notch tension (SENT) test of soft annealed tool steel, (AISI O1 at -50 • C) [START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF], four point bending (FPB) experiments of Divinycell® H80, H100, H130, and H200 PVC foams) [START_REF] Grenestedt | On cracks emanating from wedges in expanded PVC foam[END_REF], and double edge notch tension (DENT) experiments of poly methyl methacrylate (PMMA) and Duraluminium [START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF], were compiled in [START_REF] Gómez | A fracture criterion for sharp V-notched samples[END_REF]. We performed a series of numerical simulations of Pac-Man tests using the material properties reported in [START_REF] Gómez | A fracture criterion for sharp V-notched samples[END_REF] and listed in Table 2.2. In all cases, the internal length was computed using (2.6). Plexiglass DENT (Seweryn) numerical simulations with experimental values reported in the literature for V-notch with varying aperture. The definition (2.11) for k is used. For the AT 1 model, we observe a good agreement for the entire range of notch openings, as long as damaged notch conditions are used for small notch angles and undamaged notch conditions for large notch angles. For the AT 2 model, the same is true, but the agreement is not as good for large notch angles, due to the presence of large areas of distributed damage prior to crack nucleation. Effect of stress concentrations Material E ν K Ic σ c source [MPa] [MPa √ m] [MPa] Al 2 O 3 - k c [MPa.m 1-λ ] Steel SENT (Strandberg) AT 1 -U AT 1 -D AT 2 -U AT 2 -D 0 AT 1 -U AT 1 -D AT 2 -U AT 2 -D The numerical values of the critical generalized stress intensity factors for the AT 1 models and the experiments from the literature are included in Tables 2.4, 2.5, 2.6, and 2.7 using the convention of (2.11) for k. As suggested by Figure 2.5 and reported in the literature see [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF], nucleation is best captured if damaged notch boundary conditions are used for sharp notches and undamaged notch conditions for flat ones. These examples strongly suggest that variational phase-field models of fracture are capable of predicting mode-I nucleation in stress and toughness dominated situations, as seen above, but also in the intermediate cases. Conceptually, toughness and strength (or equivalently internal length) could be measured by matching generalized stress intensity factors in experiments and simulations. When doing so, however, extreme care has to be exerted in order to ensure that the structural geometry has no impact on the measured generalized stress. Similar experiments were performed in [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF][START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] for three and four point bending experiments on PMMA and Aluminum oxide-Zirconia ceramics samples. While the authors kept the notch angle fixed, they performed three and four point bending experiments or varied the relative depth of the notch as a fraction of the sample height (see Figure 2.9). 
Figure 2.9: Schematic of the geometry and loading in the four point bending experiments of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] (left) and three point bending experiments of [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF] (right). The geometry of the three point bending experiment of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] is identical to that of their four point bending, up to the location of the loading devices. Figure 2.10 compares numerical values of the generalized stress intensity factor using the AT 1 model with experimental measurements, and the actual numerical values are included in Table 2.8 and 2.9. For the Aluminum oxide-Zirconia ceramic, we observe that the absolute error between measurement and numerical prediction is typically well within the standard deviation of the experimental measurement. As expected, damaged notch boundary conditions lead Chapter 2. Crack nucleation in variational phase-field models of brittle fracture to better approximation of k c for small angles, and undamaged notches are better for larger values of ω. k c [MPa.m 1-λ ] Al 2 O 3 -7%ZrO 2 FPB (Yoshibash) Al 2 O 3 -7%ZrO 2 TPB (Yoshibash) AT 1 -U AT 1 -D 20 For the three point bending experiments in PMMA of [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF] later reported in [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF], the experimental results suggest that the relative depth a/h of the notch has a significant impact on k c . We therefore performed full-domain numerical simulation using the geometry and loading from the literature, and compared the critical force upon which a crack nucleates in experiments and simulations. All computations were performed using the AT 1 model in plane strain with undamaged notch boundary conditions. Figure 2.11 compares the experimental and simulated value of the critical load at failure, listed in Table 2.10 and 2.11. These simulations show that a robust quantitative prediction of the failure load in geometries involving a broad range of stress singularity power can be achieved numerically with the AT 1 model, provided that the internal length be computed using (2.6), which involves only material properties. In other words, our approach is capable of predicting crack nucleation near a weak stress singularity using only elastic properties, fracture toughness G c , the tensile strength σ c , and the local energy minimization principle (2.3). In light of Figure 2.11, we suggest that both toughness and tensile strength (or equivalently toughness and internal length) can be measured by matching full domain or Pac-Man computations and experiments involving weak elastic singularity of various power (TPB, FPB, SENT, DENT with varying notch depth or angle) instead of measuring σ c directly. We expect that this approach will be much less sensitive to imperfections than the direct measurement of tensile strength, which is virtually impossible. Furthermore, since our criterion is not based on crack tip asymptotics, using full domain computations do not require that the experiments be specially designed to isolate the notch tip singularity from structural scale deformations. 
PMMA TPB (Dunn) Al 2 O 3 -7%ZrO 2 FPB (Yoshibash) Al 2 O 3 -7%ZrO 2 TPB (Yoshibash) AT 1 -U FPB AT 1 -U TPB a h = .1 PMMA TPB (Dunn) a h = .2 PMMA TPB (Dunn) a h = .3 PMMA TPB (Dunn) a h = .4 AT 1 -U a h = .1 AT 1 -U a h = .2 AT 1 -U a h = .3 AT 1 -U a h = .4 Figure Initiation near a stress concentration: the U-notch Crack nucleation in a U-shaped notch is another classical problem that has attracted a wealth of experimental and theoretical work. Consider a U-shaped notch of width ρ and length a ρ subject to a mode-I local loading (see Figure 2.12 for a description of notch geometry in the context of a double edge notch tension sample). Assuming "smooth" loadings and applied boundary displacements, elliptic regularity mandates that the stress field be non-singular near the notch tip, provided that ρ > 0. Within the realm of Griffith fracture, this of course makes crack nucleation impossible. As it is the case for the Vnotch, introducing a nucleation principle based on a critical stress is also not satisfying as it will lead to a nucleation load going to 0 as ρ → 0, instead of converging to that of an infinitely thin crack given by Griffith's criterion. There is a significant body of literature on "notch mechanics", seeking to address this problem introducing stress based criteria, generalized stress intensity factors, or intrinsic material length and cohesive zones. A survey of such models, compared with experiments on a wide range of brittle materials is given [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. In what follows, we study crack nucleation near stress concentrations in the AT 1 and AT 2 models and compare with the experiments gathered in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. The core of their analysis consist in defining a generalized stress intensity factor K U = K t σ ∞ c πρ 4 , (2.13) where K t , the notch stress concentration factor, is a parameter depending on the local (a and ρ), as well as global sample geometry and loading. Through a dimensional analysis, they studied the dependence of the critical generalized stress intensity factor at the onset Chapter 2. Crack nucleation in variational phase-field models of brittle fracture 25, and 0.5 for which the value K t , computed in [START_REF] Lazzarin | A generalized stress intensity factor to be applied to rounded v-shaped notches[END_REF] is respectively 5.33, 7.26, and 11.12. In each case, we leveraged the symmetries of the problem by performing computations with the AT 1 and AT 2 models on a quarter of the domain for a number of values of the internal length corresponding to ρ/ ch between 0.05 and 20. In all cases, undamaged notch boundary conditions were used. In Figure 2.13, we overlay the outcome of our simulations over the experimental results gathered in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. As for the V-notch, we observe that the AT 2 model performs poorly for weak stress concentrations (large values of ρ/ ch ), as the lack of an elastic phase leads to the creation of large partially damaged areas. For sharp notches (ρ 0), our simulations concur with the experiments in predicting crack nucleation when K U = K Ic . As seen earlier, the AT 1 slightly overestimates the critical load in this regime when undamaged notch boundary conditions are used. 
In light of Figure 2.13, we claim that numerical simulations based on the variational phase-field model AT 1 provides a simple way to predict crack nucleation that does not require the computation of a notch stress concentration factors K t or the introduction of an ad-hoc criterion. Size effects in variational phase-field models Variational phase-field models are characterized by the intrinsic length , or ch . In this section, we show that this length-scale introduces physically pertinent scale effects, corroborating its interpretation as a material length. To this end, we study the nucleation of a crack in the uniaxial traction of a plate (-W, W ) × (-L, L) with a centered elliptical hole with semi-axes a and ρa (0 ≤ ρ ≤ 1) along the x-and y-axes respectively, see Figure 2.14. In Section 2.3.1, we study the effect of the size and shape of the cavity, assumed to be small with respect to the dimension of the plate (a W, L). In Section 2.3.2, we investigate material and structural size effects for a plate of finite width in the limit case of a perfect crack (ρ = 0). For a small hole (a W, L), up to a change of scale, the problem can be fully characterized by two dimensionless parameters: a/ , and ρ. For a linear elastic and isotropic material occupying an infinite domain, a close form expression of the stress field as a function of the hole size and aspect ratio is given in [START_REF] Inglis | Stresses in plates due to the presence of cracks and sharp corners[END_REF]. The stress is maximum at the points A = (a, 0) and A = (-a, 0), where the radial stress is zero and the hoop stress is given by: σ max = t 1 + 2 ρ , (2.14) t denoting the applied tensile stress along the upper and lower edges of the domain, i.e. the applied macroscopic stress at infinity. We denote by ū the corresponding displacement field for t = 1, which is given in [START_REF] Gao | A general solution of an infinite elastic plate with an elliptic hole under biaxial loading[END_REF]. As for the case of a perfect bar, (2.14) exposes a fundamental issue: if ρ > 0, the stress remains finite, so that Griffith-based theories will only predict crack nucleation if ρ = 0. In that case the limit load given by the Griffith's criterion for crack nucleation is t = σ G := G c E aπ . (2.15) However, as ρ → 0, the stress becomes singular so that the critical tensile stress σ c is exceeded for an infinitesimally small macroscopic stress t. Following the findings of the previous sections, we focus our attention on the AT 1 model only, and present numerical simulations assuming a Poisson ratio ν = 0.3 and plane-stress conditions. We perform our simulations in domain of finite size, here a disk of radius R centered around the defect. Along the outer perimeter of the domain, we apply a boundary displacement u = tū, where ū is as in [START_REF] Inglis | Stresses in plates due to the presence of cracks and sharp corners[END_REF], and we use the macroscopic stress t a loading parameter. Assuming a symmetric solution, we perform our computations on a quarter domain. For the circular case ρ = 1, we use a reference mesh size h = min /10, where min is the smallest value of the internal length of the set of simulations. For ρ < 1, we selectively refine the element size near the expected nucleation site (see Figure 2.14-right). In order to minimize the effect of the finite size of the domain, we set R = 100a. We performed numerical simulations varying the aspect ratio a/ from 0.1 to 50 and the ellipticity ρ from 0.1 to 1.0. 
In each case, we started from an undamaged state an monotonically increased the loading. In all numerical simulations, we observe two critical loading t e and t c , the elastic limit and structural strength, respectively. For 0 ≤ t < t e the solution is purely elastic, i.e. the damage field α remains identically 0 (see Figure 2.15left). For t e ≤ t < t c , partial distributed damage is observed. The damage field takes its maximum value α max < 1 near point A (see Figure 2.15-center). At t = t c , a fully developed crack nucleates, then propagates for t > t c (see Figure 2.15-right). As for the Pac-Man problem, we identify the crack nucleation with a jump in surface energy, and focus on loading at the onset of damage. From the one-dimensional problem of Section 2.1.2 and [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], we expect damage nucleation to take place when the maximum stress σ max reaches the nominal material strength σ c = 3G c E /8 (see Table 2.1), i.e. for a critical load t e = ρ 2 + ρ σ c = ρ 2 + ρ 3G c E 8 . (2.16) Figure 2.16-left confirms this expectation by comparing the ratio t e /σ c to its expected value ρ/(2 + ρ) for ρ ranging from 0.1 to 1. Figure 2.16-right highlights the absence of size effect on the damage nucleation load, by comparing t e /σ c for multiple values of a/ while keeping ρ fixed at 0.1 and 1. Figure 2.17 focuses on the crack nucleation load t c , showing its dependence on the defect shape (left) and size (right). Figure 2.17-right shows the case of circular hole (ρ = 1) and an elongated ellipse, which can be identified to a crack (ρ = 0.1). It clearly highlights a scale effect including three regimes: i. For "small" holes (a ), crack nucleation takes place when t = σ c , as in the uniaxial traction of a perfect bar without the hole: the hole has virtually no effect on crack nucleation. In this regime the strength of a structure is completely determined by that of the constitutive material. Defects of this size do not reduce the structural strength and can be ignored at the macroscopic level. ii. Holes with length of the order of the internal length (a = O( )), have a strong impact on the structural strength. In this regime the structural strength can be approximated by log(t c /σ c ) = D log(a/ ) + c, (2.17) where D is an dimensionless coefficient depending on the defect shape. For a circular hole ρ = 1, we have D ≈ -1/3. iii. When a , the structural failure is completely determined by the stress distribution surrounding the defect. We observe that for weak stress singularities (ρ ≡ 1), nucleation takes place when the maximum stress reaches the elastic limit σ e , whereas the behavior as ρ ≡ 0 is consistent with Griffith criterion, i.e. the nucleation load scales as 1/ √ a. Figure 2.17-right shows that the shape of the cavity has a significant influence on the critical load only in the latter regime, a . Indeed, for a/ of the order of the unity or smaller, the critical loads t c for circular and highly elongated cavities are almost indistinguishable. This small sensitivity of the critical load on the shape is the result of the stress-smoothing effect of the damage field, which is characterized by a cut-off length of the order of . 
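The estimates discussed above can be summarized in a few lines. The interpolation in circular_hole_strength is only a crude restatement of the three regimes of Figure 2.17-right, with the shape-dependent exponent D ≈ -1/3 for ρ = 1; it is meant as an illustration of the reported trends, not as a formula extracted from the simulations.

import numpy as np

def damage_onset_load(sigma_c, rho):
    # Load t_e at which sigma_max = t (1 + 2/rho) first reaches sigma_c, eq. (2.16).
    return rho / (2.0 + rho) * sigma_c

def griffith_load(Gc, E, nu, a, plane_strain=False):
    # Nucleation load of a perfect crack of half-length a in an infinite plate, eq. (2.15).
    Ep = E / (1.0 - nu**2) if plane_strain else E
    return np.sqrt(Gc * Ep / (np.pi * a))

def circular_hole_strength(sigma_c, ell, a, D=-1.0 / 3.0):
    # Crude interpolation of the three regimes for a circular hole (rho = 1):
    # t_c ~ sigma_c for a << ell, power law (2.17) in between, and the stress
    # criterion plateau t_e = sigma_c / 3 for a >> ell.
    a = np.asarray(a, float)
    return np.maximum(damage_onset_load(sigma_c, 1.0),
                      sigma_c * np.minimum(1.0, (a / ell) ** D))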
Figure 2.17-left shows the critical stress t c at nucleation when varying the aspect ratio ρ for a/ = 48, for which σ G /σ c = 2/15. As expected, the critical stress varies smoothly from the value σ G (2.15) predicted by the Griffith theory for a highly elongated cavity identified to a perfect crack, to t e (2.16) for circular cracks, where the crack nucleates as soon as the maximum stress σ max attains the elastic limit. This series of experiments is consistent with the results of Section 2.2.2 showing that variational phase-field models are capable of simultaneously accounting for critical elastic energy release rate and critical stress. Furthermore, they illustrate how the internal length can be linked to critical defect size as the nucleation load for a vanishing defect of size less than approaches that of a flawless structure. Competition between material and structural size effects We can finally conclude the study of size effects in variational phase-field models by focusing on the competition between material and structural size effects. For that matter, we study the limit case ρ = 0 of a perfect crack of finite length 2a in a plate of finite width 2W (see Figure 2.18-left). Under the hypotheses of LEFM, the critical load upon which the crack propagates is σ G (a/ ch , a/W ) = G c E cos( aπ 2W ) aπ = σ c 1 π ch a cos aπ 2W , (2.18) which reduces to (2.15) for large plate (W/a → ∞). As before, we note that σ G /σ c → ∞ as a/ ch → 0, so that for any given load, the material's tensile strength is exceeded for short enough initial crack. We performed series of numerical simulations using the AT 1 model on a quarter of the domain with W = 1, L = 4, ν = 0.3, = W/25, h = /20, and the initial crack's halflength a ranging from from 0.025 to 12.5 (i.e. 0.001W to 0.5W ). The pre-existing crack was modeled as a geometric feature and undamaged crack lip boundary conditions were prescribed. The loading was applied by imposing a uniform normal stress of amplitude t to its upper and lower edge. theories linking size-effect on the strength of the material [START_REF] Bažant | Scaling of Structural Strength[END_REF]. When a , i.e. when the defect is large compared to the material's length, crack initiation is governed by Griffith's criterion (2.18). As noted earlier, the choice of undamaged notch boundary conditions on the damage fields leads to slightly overestimating the nucleation load. Our numerical simulations reproduce the structural size effect predicted by LEFM when the crack length is comparable to the plate width W . When a , we observe that the macroscopic structural strength is very close to the material's tensile strength. Again, below the material's internal length, defects have virtually no impact on the structural response. LEFM and Griffith-based models cannot account for this material size-effect. These effects are introduced in variational phase-field model by the additional material parameter . In the intermediate regime a = O( ), we observe a smooth transition between strength and toughness criteria, where the tensile strength is never exceeded. When a , our numerical simulations are consistent with predictions from Linear Elastic Fracture Mechanics shown as a dashed line in Figure 2.18, whereas when a , the structural effect of the small crack disappear, and nucleation takes place at or near the material's tensile strength, i.e. t c /σ c 1. 
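For reference, the LEFM prediction (2.18), shown as the dashed line in Figure 2.18, can be evaluated as follows; the min with σ_c in the closing comment merely restates the two asymptotic regimes identified above.

import numpy as np

def griffith_load_finite_plate(Gc, E, nu, a, W, plane_strain=False):
    # Critical LEFM load for a centre crack of half-length a in a plate of
    # half-width W, eq. (2.18); reduces to eq. (2.15) as a/W -> 0.
    Ep = E / (1.0 - nu**2) if plane_strain else E
    return np.sqrt(Gc * Ep * np.cos(np.pi * a / (2.0 * W)) / (np.pi * a))

# The simulations of Figure 2.18 are bracketed by
#   t_c ~ min(sigma_c, griffith_load_finite_plate(Gc, E, nu, a, W)),
# with a smooth transition when a is of the order of the internal length.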
Conclusion

In contrast with most of the literature on phase-field models of fracture, which focuses on validation and verification in the context of the propagation of "macroscopic" cracks [START_REF] Mesgarnejad | Validation simulations for the variational approach to fracture[END_REF][START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF], we have studied crack nucleation and initiation in multiple geometries. We confirmed observations reported elsewhere in the literature that, although the two are mathematically equivalent in the limit ℓ → 0, damaged notch boundary conditions lead to more accurate computations near strong stress singularities, whereas away from singularities, undamaged notch boundary conditions are to be used. Our numerical simulations also highlight the superiority of phase-field models such as AT 1 , which exhibit an elastic phase in the one-dimensional tension problem, over those that do not (such as AT 2 ) when nucleation away from a strong singularity is involved. Our numerical simulations suggest that it is not possible to accurately account for crack nucleation near "weak" singularities using the AT 2 model. We infer that a strictly positive elastic limit σ e is a required feature of a phase-field model that properly accounts for crack nucleation. We have shown that, as suggested by the one-dimensional tension problem, the regularization parameter ℓ must be understood (up to a model-dependent multiplicative constant) as the material's characteristic or internal length ℓ_ch = G_c E'/σ_c^2, and linked to the material strength σ c . With this adjustment, we show that variational phase-field models are capable of quantitative prediction of crack nucleation in a wide range of geometries, including three- and four-point bending with various types of notches and single and double edge notch tests, and for a range of brittle materials, including steel and Duraluminium at low temperatures, PVC foams, PMMA, and several ceramics. We recognize that measuring a material's tensile strength is difficult and sensitive to the presence of defects, so that formulas (2.6) may not be a practical way of computing a material's internal length. Instead, we propose to perform a series of experiments, such as three-point bending with varying notch depth, radius or angle: we have demonstrated in Figure 2.11 that, with a properly adjusted internal length, variational phase-field models are capable of predicting the nucleation load for any notch depth or aperture. Furthermore, since variational phase-field models do not rely on any crack-tip asymptotics, this identification can be made even in situations where generalized stress or notch intensity factors are not known or are affected by the sample's structural geometry. We have also shown that variational phase-field models properly account for size effects that cannot be recovered from Griffith-based theories. By introducing the material's internal length, they can account for the vanishing effect of small defects on the structural response of a material, or reconcile the existence of a critical material strength with the existence of stress singularities. Most importantly, they do not require introducing ad-hoc criteria based on local geometry and loading. On the contrary, we see that in most situations, criteria derived from the asymptotic analysis of a micro-geometry can be recovered a posteriori. Furthermore, variational phase-field models are capable of quantitative prediction of the crack path after nucleation.
Again, they do so without resorting to additional ad-hoc criteria, relying only on a general energy minimization principle. In short, we have demonstrated that variational phase-field models address some of the most vexing issues associated with brittle fracture: scale effects, nucleation, existence of a critical stress, and path prediction. Of course, there are still remaining issues that need to be addressed. Whereas the models are derived from irreversibility, stability and energy balance, our numerical simulations do not enforce energy balance, as indicated by a drop of the total energy upon crack nucleation in the absence of strong singularities. Note that to this day, devising an evolution principle combining the strength of (2.3) while ensuring energy balance is still an open problem.

Appendix B: Tables of experimental and numerical data for V-notch experiments [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF], using three-point bending experiments on a PMMA sample compared to full-domain numerical simulations with the AT 1 model and undamaged notch boundary conditions. The value a/h refers to the ratio of notch depth to sample thickness. See Figure 2.9 for geometry and loading. The tabulated columns are the notch opening angle ω, the singularity exponent λ, and the critical generalized stress intensity factors k c obtained with the AT 1 and AT 2 models under undamaged (U) and damaged (D) notch boundary conditions.

Chapter 3
A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures

Hydraulic fracturing is a process used to initiate and extend fractures by injecting fluid into the subsurface. Mathematical modeling of hydraulic fracturing requires the coupled solution of models for fluid flow and reservoir-fracture deformation. The governing equations for these processes are fairly well understood and include, for example, the Reynolds equation, the cubic law, the diffusivity equation and Darcy's law for fluid flow modeling, the linear poro-elasticity equations for reservoir-fracture deformation, and Griffith's criterion for fracture propagation. Because fracture propagation is a moving boundary problem, the numerical and computational challenges of solving these governing equations on the fracture domain limit the ability to comprehensively model hydraulic fracturing. These challenges include, but are not limited to, finding efficient ways of representing the fracture and reservoir domains numerically in the same computational framework while still ensuring hydraulic and mechanical coupling between both subdomains. To address these issues, several authors have assumed a known propagation path limited to a coordinate direction of the computational grid [START_REF] Carrier | Numerical modeling of hydraulic fracture problem in permeable medium using cohesive zone model[END_REF][START_REF] Boone | A numerical procedure for simulation of hydraulically driven fracture propagation in poroelastic media[END_REF], while others have simply treated fractures as external boundaries of the reservoir computational domain [START_REF] Ji | A novel hydraulic fracturing model fully coupled with geomechanics and reservoir simulation[END_REF][START_REF] Dean | Hydraulic-fracture predictions with a fully coupled geomechanical reservoir simulator[END_REF].
Special interface elements called zero-thickness elements have also been used to handle fluid flow in fractures embedded in continuum media [START_REF] Carrier | Numerical modeling of hydraulic fracture problem in permeable medium using cohesive zone model[END_REF][START_REF] Segura | On zero-thickness interface elements for diffusion problems[END_REF][START_REF] Segura | Coupled hm analysis using zero-thickness interface elements with double nodes. part I: Theoretical model[END_REF][START_REF] Segura | Coupled hm analysis using zero-thickness interface elements with double nodes part II: Verification and application[END_REF][START_REF] Boone | A numerical procedure for simulation of hydraulically driven fracture propagation in poroelastic media[END_REF][START_REF] Lobão | Modelling of hydrofracture flow in porous media[END_REF]. Despite the simplicity of these approaches and contrary to field evidence of complex fracture geometry and propagation paths, they have limited ability to reproduce realistic fracture behaviors. Where attempts have been made to represent fractures and reservoir in the same computational domain, for instance using the extended finite element method (XFEM) [START_REF] Mohammadnejad | An extended finite element method for hydraulic fracture propagation in deformable porous media with the cohesive crack model[END_REF][START_REF] Dahi | Analysis of hydraulic fracture propagation in fractured reservoirs: an improved model for the interaction between induced and natural fractures[END_REF] and the generalized finite element method (GFEM) [START_REF] Gupta | Simulation of non-planar three-dimensional hydraulic fracture propagation[END_REF], the computational cost is high and the numerics cumbersome, characterized by continuous remeshing to provide grids 3.1. A phase fields model for hydraulic fracturing that explicitly match the evolving fracture surface. Some of these challenges can be overcome using a phase field representation for fractures as evident in the work of [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] and [START_REF] Bourdin | A variational approach to the modeling and numerical simulation of hydraulic fracturing under in-situ stresses[END_REF]. This chapter extends the works of [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] by applying the variational phase field model to a network of fractures. The hydraulic fracture model is developed by incorporating fracturing fluid pressure in Francfort and Marigo's variational approach to fracture [START_REF] Bourdin | The Variational Approach to Fracture[END_REF]. Specifically, the fracture model recast Griffith's propagation criteria into a total energy minimization problem, where the global energy is the sum of the elastic and fracture surface energies, the fracturing fluid pressure force and the work done by in-situ stresses. We assume quasi static fracture propagation and in this setting, the fractured state of the reservoir is the solution of a series of minimizations of this total energy with respect to any kinematically admissible crack sets and displacement field. Numerical implementation of the model is based on a phase field representation of the fracture and subsequent regularization of the total energy function. The phase field technique avoids the need for explicit knowledge of fracture location, it permits the use of a single computational domain for fracture and reservoir representation. 
The strength of this method is to provide a unified setting for handling path determination, nucleation and growth of arbitrary number of stable cracks in any dimensions based on the energy minimization principle. This work focuses on the fracture propagation stability through various examples such as, a pressurized single fracture stimulated by a controlled injected volume in a large domain, a network of multiple parallel fractures and a pressure driven laboratory experiment to measure rocks toughness. The Chapter is organized as follows: Section 3.1 is devoted to recall phase field models for hydraulic fracturing in the toughness dominated regime with no fluid loss to the impermeable elastic reservoir [START_REF] Detournay | The near tip region of a fluid driven fracture propagating in a permeable elastic solid[END_REF]. Then, our numerical implementation scheme and algorithm for volume driven hydraulic fracturing simulations is exposed in section 3.1.3. Tough the toughness dominated regime may not cover the whole spectrum of fracture propagation but provides an appropriate framework for verifications since it does not require the solution of a flow model. Therein, section 3.2 is concerned with comparisons between our numerical results and the closed form solutions provided by Sneddon [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF][START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] for the fluid pressure, fracture length/radius and fracture volume in a single crack case. Section 3.3 focuses on the propagation of infinite pressurized parallel fractures and it is compared with the derived solution. Section 3.4 is devoted to study the pre-fracture stability in the burst experiment at a controlled pressure. This test proposed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] is designed to measure the fracture toughness of the rock and replicates situations encountered downhole with a borehole and bi-wing fracture. A phase fields model for hydraulic fracturing A variational model of fracture in a poroelastic medium Consider a reservoir consisting of a perfectly brittle isotropic homogeneous linear poroelastic material with A the Hooke's law tensor and G c the critical energy release rate Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures occupying a domain Ω ⊂ R n , n = 2 or 3 in its reference configuration. The domain is partially cut by a sufficiently regular crack set Γ ⊂ Ω with Γ ∩ ∂Ω = ∅. A uniform pressure denoted by p applies on both faces of the fracture lips i.e. Γ = Γ + ∪ Γ -and pore pressure denoted by p p applies in the porous material which follows the Biot poroelastic coefficient λ. The sound region Ω \ Γ is subject to a time independent boundary displacement ū(t) = 0 on the Dirichlet part of its boundary ∂ D Ω and time stress dependent g(t) = σ • ν on the remainder ∂ N Ω = ∂Ω \ ∂ D Ω, where ν denotes the appropriate normal vector. For the sake of simplicity body forces are neglected such that at the equilibrium, the stress satisfies, div σ = 0 where the Cauchy stress tensor follows Biot's theory [START_REF] Biot | General theory of three-dimensional consolidation[END_REF], i.e. σ = σλp p I, σ being the effective stress tensor. 
The infinitesimal total deformation e(u) is the symmetrical part of the spatial gradient of the displacement field u, e(u) = ∇u + ∇ T u 2 . The stress-strain relation is σ = Ae(u), so that, σ = A e(u) - λ 3κ p p I , where 3κ is the material's bulk modulus. Those equations can be rewritten in a variational form, by multiplying the equilibrium by the virtual displacement v ∈ H 1 0 (Ω \ Γ; R n ) and using Green's formula over Ω \ Γ. After calculation, we get that, Ω\Γ σ : e(v) dx - ∂ N Ω g(t) • v dH n-1 - Γ p v • ν dH n-1 = 0 (3.1) where H n-1 denotes the n -1-dimensional Hausdorff measure, i.e. its aggregate length in 2 dimensions and surface area in 3 dimensions. Finally, we remark that the above equation (3.1) can be seen as the Euler-Lagrange equation for the minimization of the elastic energy, E(u, Γ) = Ω\Γ 1 2 A e(u) - λ 3κ p p I : e(u) - λ 3κ p p I dx - ∂ N Ω g(t) • u dH n-1 - Γ p u • ν dH n-1 (3.2) amongst all displacement fields u ∈ H 1 (Ω \ Γ; R n ) such that u = 0 on ∂ D Ω. A phase fields model for hydraulic fracturing Remark 2 Of course, fluid equilibrium mandates continuity of pressure so that p p = p along Γ. Our choice to introduce two pressure fields is motivated by our focus on lowpermeability reservoirs. In this situation, assuming very small leak-off, it is reasonable to assume that for short injection time, the pore pressure is "almost" constant away from the crack, hence that p = p p . We follow the formalism of [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] and propose a time-discrete variational model of crack propagation. To any crack set Γ ⊂ Ω and any kinematically admissible displacement field u, we associate the fracture energy, E(u, Γ) = Ω\Γ 1 2 A e(u) - λ 3κ p p I : e(u) - λ 3κ p p I dx - ∂ N Ω g(t) • u dH n-1 - Γ p u • ν dH n-1 + G c H n-1 (Γ) (3.3) Considering then a time interval [0, T ] and a discrete set of time steps 0 = t 0 < t 1 < • • • < t N = T , and denoting by p i , p p i and g i , the crack pressure, pore pressure and external stress at time t i (i > 0), we postulate that the displacement and crack set (u i , Γ i ) are minimizers of E amongst all kinematically admissible displacement fields u and all crack sets Γ satisfying a growth condition Γ j ⊂ Γ for all j < i, with Γ 0 possibly representing pre-existing cracks. It is worth emphasizing that in this model, no assumptions are made on the crack geometry Γ i . As in Francfort and Marigo's pioneering work [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], minimization of the total fracture energy is all that is needed to fully identify the crack geometry (path) and topology (nucleation, merging, branching). Variational phase-field approximation Several techniques have been proposed for the numerical implementation of the fracture energy E, the main difficulty being to handle discontinuous displacements along unknown surfaces. In recent years, variational phase-field models, originally devised in [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF], and extended to brittle fracture [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] have become very popular. 
We follow this approach by introducing a regularization length , an auxiliary field α with values in [0, 1] representing the unknown crack surface, and the regularized energy. E (u, α) = Ω 1 2 A (1 -α)e(u) - λ 3κ p p I : (1 -α)e(u) - λ 3κ p p I dx - ∂ N Ω g(t) • u dH n-1 + Ω pu • ∇α dx + 3G c 8 Ω α + |∇α| 2 dx (3.4) where α = 0 is the undamaged state material and α = 1 refers to the broken part. One can recognize the AT 1 model introduced in the Chapter 1 which differs from one used in [START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF]. Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures At each time step, the constrained minimization of the fracture energy E is then replaced with that of E , with respect to all (u i , α i ) such that u i is kinematically admissible and 0 ≤ α i-1 ≤ α i ≤ 1. The Γ-convergence of (3.4) to (3.3), which constitutes the main justification of variational phase-field models is a straightforward extension of [START_REF] Chambolle | An approximation result for special functions with bounded variations[END_REF][START_REF] Chambolle | Addendum to "An Approximation Result for Special Functions with Bounded Deformation[END_REF], or [START_REF] Iurlano | A density result for gsbd and its application to the approximation of brittle fracture energies[END_REF]. It is quite technical and not quoted here. The form of the regularization of the surface energy in (3.4) is slightly different from the one originally proposed in [START_REF] Bourdin | A variational approach to the modeling and numerical simulation of hydraulic fracturing under in-situ stresses[END_REF][START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] but this choice is motivated by the work of [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. In the context of poro-elasticity, the regularization of the elastic energy of the form of, Ω 1 2 A (1 -α)e(u) - λ 3κ p p I : (1 -α)e(u) - λ 3κ p p I dx is different from that of [START_REF] Mikelic | A quasistatic phase field approach to fluid filled fractures[END_REF] and follow-up work, or [START_REF] Miehe | Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture[END_REF][START_REF] Wilson | A phase-field model for fracture in piezoelectric ceramics[END_REF] which use a regularization of the form Ω 1 2 (1 -α) 2 A e(u) - λ 3κ p p I : e(u) - λ 3κ p p I dx. This choice is consistent with the point of view that damage takes place at the sub-pore scale, so that the damage variable α should impact the Cauchy stress and not the effective poro-elastic stress. Note that as → 0, both expressions will satisfy Γ-convergence to E. A fundamental requirement of hydraulic fracturing modeling is volume conservation, that is the sum of the fracture volume and fluid lost to the surrounding reservoir must equal the amount of fluid injected denoted V . In the K-regime, the injected fluid is inviscid and no mass is transported since the reservoir is impermeable. 
Of course, reservoir impermeability means no fluid loss from fracture to reservoir and this lack of hydraulic communication means that the reservoir pressure p p and fracture fluid pressure p are two distinct and discontinuous quantities. Furthermore, the zero viscosity of the injected fluid is incompatible with any fluid flow model, leaving global volume balance as the requirement for computing the unknown fracturing fluid pressure p. In the sequel we set aside the reservoir pressure p p and consider this as a hydrostatic stress offset in the domain, which can be recast by applying a constant pressure on the entire boundary of the domain. Numerical implementation The numerical implementation of the variational phase-field model is well established. In the numerical simulations presented below, we discretized the regularized fracture energy using linear or bilinear finite elements. We follow the classical alternate minimizations approach of [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] and adapt to volume-driven fractures where main steps are: i. For a given (α, p) the minimization of E with respect to u is an elastic problem with the prescribed boundary condition. To solve this, we employed preconditioned conjugate gradient methods solvers. 3.2. Numerical verification case of a pressurized single fracture in a two and three dimensions ii. The minimization of E with respect to α for fixed (u, p) and subject to irreversibility (α ≥ α i-1 ) is solved using variational inequality solvers provided by PETCs [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF]. iii. For a fixed (u, α), the total volume of fluid can be computed, such that, V = - Ω u • ∇α dx. The idea is to rescale the fluid pressure using the secant method (a root-finding algorithm) based on a recurrence relation. A possible algorithm to solve volume-driven hydraulic fracturing is to use nested loops. The inner loop solves the elastic problem i. and rescale the pressure iii. until the error between the target and the computed volume is below a fixed tolerance. The outer loop is composed of ii. and the previous procedure and the exit is triggered once the damage has converged. This leads to the following Algorithm 2 where δ V and δ α are fixed tolerances. Remark that the inner loop solves a linear problem, hence, finding the pressure p associated to the target volume V should converge in strictly less than four iterations. All computations were performed using the open source mef90 1 . In-situ stresses play a huge role in hydraulic fracture propagation and the ability to incorporate them in a numerical model is an important requirement for robust hydraulic fracturing modeling. Our numerical model easily accounts for these compressive stresses on boundaries of the reservoir. However in-situ stresses simulated cannot exceeded the maximum admissible stress of the material given by σ c = 3EG c /8 . We run a series of two-and three-dimensions computations to verify our numerical model and investigate stability of fractures. Numerical verification case of a pressurized single fracture in a two and three dimensions Using the Algorithm 2 a pressurized line and penny shape fractures have been respectively simulated in two-and three-dimensions, and their results compared with the closed form solutions. Both problems have a symmetric axis, i.e. 
its aggregate a reflexion axis in 2d and a rotation in 3d, leading to a invariant geometry drawn on Figure 3.1. Also, all geometric and material parameters are identically set up for both problems and summarized in the Table 3.1. The closed form solutions provided by Sneddon in [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF][START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] are recalled in the Appendix C and assume an infinite domain with vanishing stress and displacement at the boundary. To satisfy those boundary conditions we performed simulations on a huge domain clamped at the boundary, where the reservoir size is 100 times larger than the pre-fracture length as reported in the Table 3.1. To moderate the number of elements in the domain, a casing (W, H) with a constant refined mesh size of resolution h is encapsulated around the fracture. Outside the casing a coarsen mesh is spread out see Figure 3.1. Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures Algorithm 2 Volume driven hydraulic fracturing algorithm at the step i 1: Let j = 0 and α 0 := α i-1 2: repeat 3: Set, p k-1 i = p k i and V k-1 i = V k i 4: p k+1 i := p k i -V k i (p k i -p k-1 i )/(V k i -V k-1 i ) 6: Compute the equilibrium, u k+1 := argmin u∈C i E (u, α j ) 7: Compute volume of fractures, V k+1 i := - Ω u k+1 • ∇α j dx 8: k := k + 1 9: until V k i -V i L ∞ ≤ δ V 10: Compute the damage, α j+1 := argmin α∈D i α≥α i-1 E (u j+1 , α) 11: j := j + 1 12: until α j -α j-1 L ∞ ≤ δ α 13: Set, u i := u j and α i := α j refined mesh, size h coarsen mesh symmetry axis 3.1: Parameters used for the simulation of a single fracture in two and three dimensions. A loading cycle is preformed by pressurizing the fracture until propagation, then, pumping all the fluid out of the crack. The pre-fracture of length l 0 is measured by a isovalues contour plot for α = .8 before refilling the fracture of fluid again. The reason of this is we do not have an optimal damage profile at the fracture tips, leading to underestimate the critical pressure p c . Similar issues have been observed during the nucleation process in [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF] where G c is overshoot due to the solution stability. Snap-shots of the damage before and after the loading cycle in the Figure 3.3 illustrate differences between damage profiles at the crack tips. Since the critical crack pressure is a decreasing function with respect to the crack length, the maximum value is obtained at the loading point when the crack initiates (for the pre-fracture). One can see on the Figure 3.4 that the penny shape fracture growth is not necessary symmetrical with respect to the geometry but remains a disk shape which is consistent with the invariant closed form solution. We know from prior work see [START_REF] Bourdin | The variational approach to fracture[END_REF] that the "effective" numerical toughness is quantified by (G c ) eff = G c (1 + 3h/(8 ) ) in two dimensions. However, for the penny shape crack (G c ) eff = G c (1 + 3h/(8 ) + 2h/l ), where 2h is the thickness of the crack and l the radius. The additional term of 2h/l comes from the lateral surface contribution which becomes negligible for thin fractures. 
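For readability, the nested structure of Algorithm 2 can be transcribed schematically as follows. The callables solve_elasticity, solve_damage and crack_volume stand for the solvers described above (elastic equilibrium at fixed damage and pressure, constrained damage minimization, and V = -∫_Ω u · ∇α dx); they are placeholders, not actual mef90 or PETSc routines, the fields are assumed to be plain arrays, and the secant update is written here on the volume residual.

def volume_driven_step(V_target, alpha_prev, p0, p1,
                       solve_elasticity, solve_damage, crack_volume,
                       tol_V=1e-6, tol_alpha=1e-4):
    # Schematic transcription of Algorithm 2 for one loading step i.
    # solve_elasticity(alpha, p): elastic equilibrium (step 6)
    # crack_volume(u, alpha):     V = -integral of u . grad(alpha) over Omega (step 7)
    # solve_damage(u, alpha_prev): damage update with irreversibility alpha >= alpha_prev (step 10)
    alpha = alpha_prev
    while True:
        # inner loop: secant iterations on the pressure until the computed crack
        # volume matches the injected volume; the problem is linear in p at fixed
        # alpha, so convergence is fast
        p_old, p = p0, p1
        V_old = crack_volume(solve_elasticity(alpha, p_old), alpha)
        while True:
            u = solve_elasticity(alpha, p)
            V = crack_volume(u, alpha)
            if abs(V - V_target) <= tol_V:
                break
            p, p_old, V_old = p - (V - V_target) * (p - p_old) / (V - V_old), p, V
        # outer loop: damage update, exit once the damage field has converged
        alpha_new = solve_damage(u, alpha_prev)
        if abs(alpha_new - alpha).max() <= tol_alpha:
            return u, alpha_new, p
        alpha = alpha_new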
The fluid pressure p and the fracture length l closed form solution with respect to the total injected volume of fluid V is provided by [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] and is recalled in the Appendix C. Figure 3.2 shows a perfect match between the numerical results and the closed solution for the line fracture and penny shape crack. In both cases as long as the V ≤ V c the crack does not grow, and since V > V c the pressure drop as p ∼ V -1/3 (line fracture) and p ∼ V -1/5 (penny shape crack). Notice that the pressure decreases when the crack grows, therein a pressure driven crack is necessary unstable, indeed there is no admissible pressure over the maximum value p c . Remark 3 The Griffith regime requires σ c = 3E G c /(8 ) ≥ πE G c /(4l) = p c in two dimensions, leading to l ≥ 2π /3. Therefore, the pre-fracture must be longer than twice the material internal length to avoid any size effects phenomena as reported in Chapter 2. Those simulations show that the variational phase field model to hydraulic fracturing recovers Griffith's initiation and propagation for a single pressurized crack. Even if this can be seen as a toy example because the fracture propagation is rectilinear, without any changes on the implementation multi-fracking can be simulated as illustrated in the Figure 3.5. Fracture paths are obtained by total energy minimization and satisfies Griffith's propagation criterion. Multi fractures in two dimensions Multi fractures in two dimensions One of the most important features of our phase field hydraulic fracturing model is its ability to handle multiple fractures without additional computational or modeling effort than is required for simulating single fracture. This capability is highlighted in the following study of the stimulation of a network of parallel fractures. All cracks are subject to the same pressure and we control the total amount of fluid injected into cracks, i.e. fluid can migrate from a crack to another via a wellbore. The case where all fractures of a parallel network propagate (multi fracking scenario) is often postulated. However, considering the variational structure of Griffith leads to a different conclusion. For the sake of simplicity consider only two parallel fractures. A virtual extension of one of the cracks (variational argument) induces a drop of pressure in both fractures. Consequently the shorter fracture is sub-critical and remains unchanged since the pressure p < p c . Moreover the longer fracture requires less pressure to propagate than the shorter because the critical pressure decreases with the crack length. Finally the longer crack continues to propagate. This non restrictive situation can be extended to multiple fractures (parallel and the same size). In the sequel, we propose to revisit the hypothesis of multi-fracking by performing numerical simulations using the Algorithm 2. Consider a network of infinite parallel cracks with the same pressure p where their individual length is l and the spacing between cracks is δ drawn in the Figure 3.6 (left). At the initial state all pre-cracks have the same length denoted l 0 and no in-situ stresses is applied on the reservoir domain. Multi-fracking closed form solution This network of parallel cracks is a duplication of an invariant geometry, precisely a strip domain Ω = (-∞, +∞) × [-δ, δ] cut in the middle by a fracture Γ = [-l, l] × {0}. 
An asymptotic solution of this cell domain problem is provided by Sneddon in [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] V (ρ) = 8pδ 2 E π ρ 2 f (ρ), (3.5) where the density of fractures ρ = lπ/(2δ) and f (ρ) = 1 -ρ 2 /2 + ρ 4 /3 + o(ρ 6 ). The Taylor series of f (ρ) in 0 provided by Sneddon differs from one given in the reference [START_REF] Murakami | Handbook of stress intensity factors[END_REF] where f (ρ) = 1ρ 2 /2 + 3ρ 4 /8 + o(ρ 6 ). The latter is exactly the first three terms of the expansion of f (ρ) = 1 1 + ρ 2 . (3.6) The critical pressure satisfying Griffith propagation for this network of fractures problem is p(ρ) = E G c δ(ρ 2 f (ρ)) (3.7) Of course the closed form expression consider that all cracks grow by symmetry. It is convenient for numerical reason to consider an half domain and an half fracture (a crack lip) of the reference geometry such that we have (Ω 1 , Γ 1 ) and by symmetry expansion (Ω 2 , Γ 2 ), (Ω 4 , Γ 4 ),..,(Ω 2n , Γ 2n ) illustrated in the Figure 3.6 (right). Numerical simulation of multi-fracking by computation of unit cells construction The idea is to reproduce numerically multi-fracking scenario, thus simulation is performed on stripes of length 2L with pre-fractures of length 2l 0 such that, geometries considered are: Ω 2n = [-L, L] × [0, (2n -2)δ] Γ 0, 2n = [-l 0 , l 0 ] × n k=1 {2(k -1)δ} (3.8) for n ≥ 1, n being the number of crack lips. Naturally a crack is composed of two lips. The prescribed boundary displacement on the top-bottom extremities is u y (0) = u y (2(n -1)δ) = 0, and on the left-right is u(±L) = 0. All numerical parameters used are set up in the Table 3.2. h L δ l 0 E ν G c 0.005 10 1 0.115 1 0 1 3h Table 3.2: Parameters used in the numerical simulation for infinite cracks Using the same technique of loading cycle as in section 3.2 and after pumping enough fluid into the system of cracks we observed in all simulations performed that only one fracture grows, precisely the one at the boundary as illustrated in the Figure 3.7. By Multi fractures in two dimensions using reflexion symmetry we have a periodicity of large fractures of 1/n. We notice that simulations performed never stimulate middle fracture. Indeed, by doing so after reflexions this will lead to an higher periodicity cases. The pseudo-color blue is for undamaged material and turns white when (α ≤ .01) (visibility reason). The full colors correspond to numerical simulations cells domain see table 3.2, and opacity color refers to the rebuild solution using symmetries. In all simulations only one crack propagates in the domain. Using the multiplicity pictures from left to right we obtain a fracture propagation periodicity denoted period. of 6/6, 3/6, 1.5/6 and 1/6. To compare the total injected fluid V between simulations, we introduce the fluid volume density i.e the fluid volume for a unit geometry cell given by 2V /n. The evolution of normalized pressure, volume of fluid per cell and length are plotted in Figure 3.8 and show that the multi-fracking situation (one periodic) match perfectly with the close form solution provided by the equations (3.7),(3.5) and (3.6). Also, one can see that Sneddon approximation is not accurate for dense fractures. We can observe from simulations in Figure 3.8 that a lower periodicity (1/n) of growing cracks implies a reduction in pressure evolution. Also notice that the rate of pressure drop increases when the number of long cracks decrease, so that rapid pressure drop may indicate a poor stimulation. 
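A short numerical illustration of eqs. (3.5)-(3.7) with the closed form f(ρ) = 1/sqrt(1 + ρ²) is given below (normalized units; this sketch is not part of the simulation code). It evaluates the critical pressure of the array under the assumption that all cracks grow, and compares it to the single-crack value.

import numpy as np

def f(rho):
    # closed form of eq. (3.6): f(rho) = 1 / sqrt(1 + rho^2)
    return 1.0 / np.sqrt(1.0 + rho ** 2)

def d_rho2_f(rho):
    # d/d(rho) [ rho^2 / sqrt(1 + rho^2) ] = rho (rho^2 + 2) / (1 + rho^2)^(3/2)
    return rho * (rho ** 2 + 2.0) / (1.0 + rho ** 2) ** 1.5

def critical_pressure(l, delta, Gc, Ep):
    # eq. (3.7), assuming all parallel cracks of the array propagate together
    rho = l * np.pi / (2.0 * delta)          # fracture density
    return np.sqrt(Ep * Gc / (delta * d_rho2_f(rho)))

def single_crack_pressure(l, Gc, Ep):
    # critical pressure of an isolated line crack, sqrt(E' Gc / (l pi))
    return np.sqrt(Ep * Gc / (l * np.pi))

# ratio p(network) / p(single crack); it tends to 1 for widely spaced cracks
for l in (0.5, 2.0, 6.36):
    delta = 10.0
    r = critical_pressure(l, delta, 1.0, 1.0) / single_crack_pressure(l, 1.0, 1.0)
    print("l = %5.2f  delta = %4.1f  ratio = %.3f" % (l, delta, r))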
Also this loss of multi fracking stimulation decreases the fracture surface are for resource recovery. All cracks propagating simultaneously case is not stable in the sense that there exits a lower energy state with fewer growing crack. However as we will be discussed in the section 3.3.3 multi fracking may work for low fracture density since their interactions are negligible. Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures Multi-fracking for dense fractures In the following we investigate critical pressure with respect to the density of fracture for different periodicity. Colored plot are numerical results for different domain sizes Ω 1 , Ω 2 , Ω 4 and Ω 6 . The solid black line is the closed form solution and gray the approximated solution given by Sneddon [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF]. Let us focus on fractures propagation when their interactions become stronger i.e. higher fracture density ρ = lπ/(2δ). We start by normalizing the pressure relation for multi-fracking equation (3.7) with p = E G c /(lπ) which is a single fracture problem studied in section 3.2. r p (ρ) = 2ρ (ρ 2 f (ρ)) = 2(ρ 2 +1) 3/2 ρ 2 +2 . (3.9) Remark that r p (0) = 1 means that critical pressure for largely spaced fractures are identical to a line fracture in a infinite domain problem, thus cracks behave without interacting each other. We run a set of numerical simulations using the same set of parameters than previously recalled in the Table 3.2 except that δ varies. For a high fractures density we 3.4. Fracture stability in the burst experiment with a confining pressure discovered another loss of symmetry shown on Figure 3.9 such that the fracture grows only in one direction. Figure 3.9: Domains in the deformed configuration for respectively Ω 2 and Ω 4 with 2δ = .5. The pseudo-color blue is for undamaged material and turns white when (α ≤ .01) (visibility reason). The full colors correspond to numerical simulation domain and opacity color refers to the rebuild solution using symmetries. In all simulations only one crack tip propagates in one direction in the simulated domain. We report pressures obtained numerically depending on the fracture density in the Figure 3.10 and comparison with (3.9). One can see that the closed form solution is in good agreement with numerical simulation for the periodicity one and also lower periodicity obtained by doing ρ ← ρ/n in the equation (3.9). We see that for low fractures density the critical pressure is equal to a line fracture in a infinite domain. For higher fractures density, interactions become stronger and propagating all fractures require a high pressure compare to grow only one of them. As an example, a network of pre-fractures of length l = 6.36m and spaced δ = 10m thus ρ = 1, in this situation the required pressure is equal to r(1)K Ic / √ lπ with r(1) = 1.4 to propagate all cracks together compare to r(1) = 1 for only one single fracture. Naturally the system bifurcate to less fractures propagation leading to a drop of the fluid pressure. Fracture stability in the burst experiment with a confining pressure This section focuses on the stability of fractures propagation in the burst experiment. This laboratory experiment was conceived to measure the resistance to fracturing K Ic (also called the fracture toughness) of rock under confining pressure which is a critical parameter to match the breakdown pressure in mini-frac simulation. 
The idea is to provide a value of K Ic for hydraulic fracturing simulations in the K-regime [START_REF] Detournay | Propagation regimes of fluid-driven fractures in impermeable rocks[END_REF]. However, past experimental studies suggest that the fracture toughness of rock is dependent on the confining pressure under which the rock is imposed. Various methodologies exist for the Black dash line is r p (ρ/n) with 1/n the periodicity. Colored line is numerical results for respectively a periodicity 6/6, 3/6 and 1.5/6. measurement of K Ic under confining pressure and results differ in each study. The most accepted methodology in petroleum industry is the so called burst experiment, which was proposed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF], as the experimental geometry replicates a situation encountered downhole with a borehole and bi-wing fracture. Under linear elastic fracture mechanics, stable and unstable crack growth regime have been calculated depending on the confining pressure and geometry. During unstable crack propagation the phase-field models for hydraulic fracturing do not bring information. Instead we perform a Stress Intensity Factor (SIF) analysis along the fracture path to determine propagation stability regimes, herein this section is different from the phase-field sprite of the dissertation. However at the end we will verify the ability of the phase-field model to capture fracture stability transition from stable to unstable. The burst experiment The effect of confining pressure on the fracture toughness was first studied by Schmidt and Huddle [START_REF] Schmidt | Effect of Confining Pressure on Fracture Toughness of Indiana Limestone[END_REF] on Indiana limestone using single-edge-notch samples in a pressure vessel. In their experiments, increase in the fracture toughness up to four fold have been reported. Other investigations to quantify the confining pressure dependency were performed on the three point bending [START_REF] Müller | Brittle crack growth in rocks[END_REF][START_REF] Vásárhelyi | Influence of pressure on the crack propagation under mode i loading in anisotropic gneiss[END_REF], modified ring test [START_REF] Thiercelin | Fracture Toughness and Hydraulic Fracturing[END_REF], chevron notched Brazillian disk [START_REF] Roegiers | Rock fracture tests in simulated downhole conditions[END_REF], cylinder with a partially penetrating borehole [START_REF] Holder | Measurements of effective fracture toughness values for hydraulic fracture: Dependence on pressure and fluid rheology[END_REF][START_REF] Sitharam Thallak | The pressure dependence of apparent hydrofracture toughness[END_REF], and thick wall cylinder 3.4. Fracture stability in the burst experiment with a confining pressure with notches [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF][START_REF] Chen | Laboratory measurement and interpretation of the fracture toughness of formation rocks at great depth[END_REF] and without notches [START_REF] Stoeckhert | Mode i fracture toughness of rock under confining pressure[END_REF]. Published results on Indiana limestone are shown in Figure 3.11 and the data suggest the fracture toughness dependency on the confining pressure with a linear relationship. 
Provided increasing reports on confining pressure dependent fracture toughness, theoretical works to describe the mechanisms focus mainly on process zones ahead of the fracture as a culprit of the "apparent" fracture toughness including Dugdale type process zone [START_REF] Zhao | Determination of in situ fracture toughness[END_REF][START_REF] Sato | Cohesive crack analysis of toughness increase due to confining pressure[END_REF], Barenblatt cohesive zone model [START_REF] Allan M Rubin | Tensile fracture of rock at high confining pressure: implications for dike propagation[END_REF], and Dugdale-Barenblatt tension softening model [START_REF] Hashida | Numerical simulation with experimental verification of the fracture behavior in granite under confining pressures based on the tension-softening model[END_REF][START_REF] Fialko | Numerical simulation of high-pressure rock tensile fracture experiments: Evidence of an increase in fracture energy with pressure?[END_REF]. The burst experiment developed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] is one of the most important methods to determine the critical stress intensity factor of rocks subject to confining pressure in the petroleum industry as the geometry closely represents actual downhole conditions of hydraulic fracturing stimulation (Figure 3.12). A hydraulic internal pressure is applied on a jacketed borehole of the thick-walled cylinder with pre-cut notches. Also, a confining pressure is applied on the outer cylinder. The inner and the outer pressures increase keeping a constant ratio of the outer to the inner pressure until the complete failure of the sample occurs and the inner and outer pressures will equilibrate to the ambient pressure abruptly. This test has great advantages in sample preparation, no fluid leak off to the rock, and easeness of measurement with various confining pressures. In this section, we firstly revisit the derivation of the stress intensity factor and analyze stabilities of fracture growth from actual burst experiment results. Subsequent analytical results indicate that fracture growth is not necessarily unstable and can have a stable phase in our experiments. In fact, stable fracture propagation has been observed also in past studies with PMMA samples [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF] and sandstone and shale rocks without confining pressure [START_REF] Zhixi | Determination of rock fracture toughness and its relationship with acoustic velocity[END_REF]. Evaluation and computation of the stress intensity factor for the burst experiment Under Griffith's theory and for a given geometry (a, b, L) see Figure 3.12, the fracture stability is governed by, K I (P i , L, b, a, P o ) ≤ K Ic where K Ic is a material property named the critical fracture toughness. The stress intensity factor (SIF) denoted K I is such that, K I < 0 when crack lips interpenetrate and K I ≥ 0 otherwise. Let us define dimensionless parameters as, w = b a , l = L b -a , r = P o P i (3.10) Hence, the dimensionless crack stability becomes K * I (1, l, w, r) ≤ K Ic (P i √ aπ) (3.11) where K * I (1, l, w, r) = K I (1, l, w, r)/ √ aπ. Necessarily, the inner pressure must be positive P i > 0 to propagate the crack. 
For a given thick wall ratio w and pressure confinement r, we are able to evaluate the fracture toughness of the material by computing K * I if the experiment provides a value of the inner pressure P i and the crack length L at the time when the fracture propagates. The difficulty is to measure the fracture length in-situ during the experiment whose technique is yet to be established. However the burst experiment should be designed for unstable crack propagation. The idea is to maintain the crack opening by keeping the tensile load at the crack tips all along the path, so that the sample bursts (unstable crack propagation) after initiation. Therefore the fracture toughness is computed for the pre-notch length and the critical pressure measured. 3.4. Fracture stability in the burst experiment with a confining pressure Let us study the evolution of K * I (1, l, w, r) with the crack length l for the parameter analysis (w, r) to capture stability crack propagation regimes. Using Linear Elastic Fracture Mechanics (LEFM) the burst problem denoted (B) is decomposed into the following elementary problems: a situation where pressure is applied only on the inner cylinder called the jacketed problem (J) and a problem with only a confining pressure applied on the outer cylinder problem named (C). This decomposition is illustrated in Figure 3.13. Therefore, the SIF for (B) can then be superposed as K B * I (1, l, w, r) = K J * I (1, l, w) -rK C * I (1, l, w) (3.12) where K C * I (1, l, w) is positive for negative applied external pressure P o . In Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] the burst problem is decomposed following the Figure 3.14 such that, the decomposition is approximated by the jacketed problem (J) and the unjacketed problem (U) in which the fluid pressurized all internal sides. We get the following SIF, K B * I (1, l, w, r) ≈ K J * I (1, l, w) -rK U * I (1, l, w) (3.13) where K U * I (1, l, w) ≥ 0 for a positive P o applied in the interior of the geometry. Note that in our decomposition, no pore pressure (P p ) is considered in the sample, i.e. a drain evacuates the embedded pressure in the rock. L 2a 2b L L 2a 2b L L 2a 2b L L 2a 2b L = + L 2a 2b L + = = L 2a 2b L L 2a 2b L L 2a 2b L - (B) (J) (C) (U) (B) (J) (B) (M) + = = L 2a 2b L L 2a 2b L L 2a 2b L - (B) (J) (C) (U) (B) (J) (B) (M) Normalized stress intensity factor for the jacketed and unjacketed problems have been derived in Clifton [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF]. The Figure 3.15 shows a good agreement between our results (computational SIF based on the G θ methods) and one provided by Clifton [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF]. The G θ technique [START_REF] Destuynder | Sur une interprétation mathématique de l'intégrale de Rice en théorie de la rupture fragile[END_REF][START_REF] Suo | On the application of g (θ) method and its comparison with de lorenzi's approach[END_REF] is an estimation of the second derivatives of the potential energy with respect to the crack length, i.e. 
to make a virtual perturbation of the domain (vector θ) in the crack propagation direction. Then, the SIF is calculated using Irwin formula K I = EG/(1ν 2 ) based on the computed G. Influence of the confinement and wall thickness ratio on stability of the initial crack Based on the above result we compare K C * I with K U * I (Figures 3.13 and 3.14), and we found out their relative error is less than 15% for l ∈ [.2, .8] and w ∈ {3, 7, 10}. So, in a first approximation both problems are similar. For the burst experiment, the fracture propagation occurs when (3.11) becomes an equality, thus we have P i = K Ic /(K B * I √ aπ). A decreasing K B * I induces a growing P i , a contrario a growing K B * I implies to decrease the inner pressure which contradicts the burst experiment set up (monotonic increasing pressure). Consequently the fracture growth is unstable (brutal) for a growing K B * I , and vice versa. In the Figure 3.16 we show different evolutions of the stress intensity factor with the crack length for various wall thickness ratio and confinement. We observe that when the confining pressure r increases fractures propagation are contained and the same effect is noticed for larger thick wall ratio w. Depending where the pre-fracture tip is located we can draw different fracture regime summarized in three possible evolutions (see Figure 3.17 (a) For this evolution K B * I is strictly increasing thus for any pre-fracture length l 0 the sample will burst. The idea is the fracture initiates once the pressure is critical, then propagates along the sample until the failure. A sudden drop of the pressure is measured signature of the initiation pressure. By recording this pressure P i the fracture toughness K Ic is calculated using equation (3.11). (b) By making a pre-fracture l 0 ≥ l SU , this leads to the same conclusion than (a). However for l U S ≤ l 0 ≤ l SU the fracture propagation is stable. To get an estimation of the fracture toughness, we need to track the fracture and to measure its length otherwise is vain. A risky calculation is to assume the fracture initiation length be at the inflection point l SU before the burst. Reasons are the critical point can be a plateau shape leading to imprecise measure of l SU , secondly, since the rock is not a perfect brittle materials the l SU can be slightly different. (c) For Griffith and any cohesive models which assume compressive forces in front of the notch tips, the fracture propagation is not possible. Of course others initiation criterion are possible as critical stress as an example. Application to sandstone experiments A commercial rock mechanics laboratory provided fracture toughness results for different pressure ratios on sandstones and the geometries summarized in the Table 3.3. As their end-caps and hardware are built for 0.25' center hole diameter with 2.5" diameter sample, w values are restricted to 9. Considering no pore pressure and applying stricto sensu the following equation . l U S is a critical point from unstable to stable crack propagation, vice versa for l SU . The fracture does not propagates at stop point denoted l ST by taking l equals to the dimensionless pre-notch length l 0 and the critical pressure recorded P i = P ic , we obtain that the fracture toughness K Ic is influenced by the confining pressure r as reported in the last column of the Table 3.3. 
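In practice, the toughness evaluation of eqs. (3.10)-(3.12) amounts to the few lines sketched below. The normalized stress intensity factors K_J* and K_C* are not reproduced here; they are assumed to be supplied, for instance interpolated from Clifton's tables or from G_θ computations, and the function names are placeholders.

import numpy as np

def KB_star(l, w, r, KJ_star, KC_star):
    # superposition of eq. (3.12): K_B* = K_J*(1, l, w) - r * K_C*(1, l, w)
    return KJ_star(l, w) - r * KC_star(l, w)

def toughness_from_burst(P_ic, a, b, L0, P_o, KJ_star, KC_star):
    # dimensionless geometry and confinement of eq. (3.10)
    w = b / a
    l0 = L0 / (b - a)
    r = P_o / P_ic
    # propagation condition (3.11) evaluated at the recorded critical pressure
    return P_ic * np.sqrt(a * np.pi) * KB_star(l0, w, r, KJ_star, KC_star)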
However, the evolutions of K B * I with respect to l in the Figure 3.18 (right) shows that all confining experiments (Id 1-5) have a compressive area in front of the fracture tips. Moreover pre-fractures are located in the stable propagation regime, in fine the sample cannot break according to Griffith's theory. P i √ aπK B * I (1, l, w, r) = K Ic , Chapter 3. Sample ID 2a [in] w The wall thickness cylinder w and the confining pressure ratio r play a fundamental role in the crack stability regime, to obtain a brutal fracture propagation after initiation smaller (w, r) is required. A possible choice is to take w = 3 for r = {1/8, 1/6} as shown in Figure 3. [START_REF] Bažant | Scaling of Structural Strength[END_REF]. P ic [Psi] r l 0 K Ic [Psi √ in] Id 0 0. A stable-unstable regime is observed for (r = 1/6, w = 5). We performed a numerical simulation with the phase-field model to hydraulic fracturing to verify the ability of the simulation to capture the bifurcation point. For that we fix K Ic = 1, the geometric parameters (a = 1, b = 5, l 0 = .15, r = 1/5) and the internal length = 0.01. Then, by pressuring the sample (driven-pressure) damage grows until the critical point. After this 3.4. Fracture stability in the burst experiment with a confining pressure loading, the damage jumps to the external boundary and break the sample. The normalized SIF is computed using the K Ic /(P i √ aπ) for different fracture length and reported in the Figure 3.19 Remark 4 Stability analysis can be also done by volume-driven injection into the inner cylinder using phase-field models. This provides stable fracture propagation, and normalized stress intensity factor can be rebuild using simulations outputs. Conclusion Through this chapter we have shown that the phase-field models for hydraulic fracturing is a good candidate to simulate fractures propagation in the toughness dominated regime. The verification is done for a single-fracture and multi-fracking propagation scenario. Simulations show that the multi-fractures propagation is the worst case energetically speaking contrary to the growth of a single fracture in the network which is the best total energy minimizer. Moreover the bifurcation to a loss of symmetries (e.g. single fracture tip propagation) is intensified by the density of fractures in the network. The pressure-driven burst experiment focuses on fracture stability. The confining pressure and the thickness of the sample might contain fractures growth. By carefully selecting those two parameters (confinement pressure ratio and the geometry) the experiment can be designed to calculate the fracture toughness for rocks. In short those examples illustrate the potential of the variational phase-field models for hydraulic fracturing associated with the minimization principle to account for stable volume-driven fractures. The loss of symmetry in the multi-fracking scenario is a relevant example to illustrate the concept of variational argument. Same results is confirmed by coupling this model with fluid flow as detailed in Chukwudozie [START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF]. Substituting (3.19) into (3.14), the fluid pressure is obtained. p = 3 2 G 2 c E π V (3.20) Similarly, the fracture length during propagation is obtained by substituting (3.16) into (3.14). 
l = 3 E V 2 4π G c (3.21) Penny-Shaped Fracture (3d domain): For a penny-shaped fracture in a 3d domain, the fracture volume is V = 16pl 3 3E (3.22) where l denotes the radius, while the critical fluid pressure [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] is p c = πG c E 4l 0 (3.23) For an initial fracture radius l 0 , the critical volume is, V c = 64πl 5 0 G c 9E (3.24) If one follows a procedure similar to that for the line fracture, we will obtain the following relationships for the evolution of the fluid pressure and fracture radius p c = 5 π 3 G 3 c E 2 12V l = 5 9E V 2 64πG c (3.25) Chapter 4. Variational models of perfect plasticity Definition 7 (Generalized standard plasticity models) i. A choice of independent states variables which includes one or multiple internal variables. ii. Define a convex set where thermodynamical forces lie in. Concerning i. we choose the plastic strain tensor (symmetric) p and the infinitesimal total deformation denoted e(u). The total strain is the symmetrical part of the spatial gradient of the displacement u, i.e. e(u) = ∇u + ∇ T u 2 . The kinematic admissibility is the sum of the plastic and elastic strains denoted ε, given by, e(u) = ε + p. For ii. consider a free energy density ψ a differentiable convex state function which depends on internal variables. Naturally, thermodynamical forces are defined from the free energy by σ = ∂ψ ∂e (e, p), τ = - ∂ψ ∂p (e, p). (4.1) Commonly the free energy takes the form of ψ(e, p) = 1 2 A(e(u)p) : (e(u)p), where A is the Hooke's law tensor. It follows that, σ = τ = A(e(u)p). However for clarity we continue to use τ instead. Internal variables (e, p) and their duals (σ, τ ) are second order symmetric tensors and become n × n symmetric matrices denoted M n s after a choice of an orthonormal basis and space dimension of the domain (Ω ⊂ R n ). To complete the second statement ii., let K be a non empty closed convex subset of M n s where τ lies in. This subset is called elastic domain for τ . Assume that K is fixed and time independent, such that its boundary is the convex yield surface f Y : M n s → R, defined by, τ ∈ K = {τ * ∈ M n s : f Y (τ * ) ≤ 0} (4.2) Precisely, for any τ that lies in the interior of K denoted int(K) the yield surface is strictly negative. Otherwise, τ belongs to the boundary noted ∂K and the yield function vanishes: f Y (τ ) < 0, τ ∈ int(K) f Y (τ ) = 0, τ ∈ ∂K . (4.3) Let us apply the normality rule on it to get the plastic evolution law. In the case where ∂K is differentiable the plastic flow rule is defined as, 4.1. Ingredients for generalized standard plasticity models ṗ = η ∂f Y ∂τ (τ ), with η = 0 if f Y (τ ) < 0 ≥ 0 if f Y (τ ) = 0 (4.4) where η is the Lagrange multiplier. Sometimes the convex K has corners and the outer normal cannot be defined (f Y is not differentiable), thus, the normality rule is written using Hill's principle, also known as maximum dissipation power principle, i.e., τ ∈ K, (τ -τ * ) : ṗ ≥ 0, ∀τ * ∈ K. (4.5) This is equivalent to say that ṗ lies in the outer normal cone of K in τ , ṗ ∈ N K (τ ) := { ṗ : (τ * -τ ) ≤ 0 ∀τ * ∈ K}. (4.6) However we prefer to introduce the indicator function of τ ∈ K, and write equivalently the normality rule as, ṗ lies in the subdifferential set of the indicator function. For that, the indicator function is, I K (τ ) = 0 if τ ∈ K +∞ if τ / ∈ K (4.7) and is convex by construction. 
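As a concrete illustration of (4.2)-(4.4), consider the classical von Mises (J2) model, a standard example not specific to this chapter. Take K = {τ ∈ M n s : |dev τ| ≤ sqrt(2/3) σ_Y}, where dev τ = τ - (tr τ / n) I and σ_Y is the uniaxial yield stress, so that f_Y(τ) = |dev τ| - sqrt(2/3) σ_Y. Away from dev τ = 0 the yield surface is differentiable, and the normality rule (4.4) gives ṗ = η dev τ / |dev τ|: the plastic strain rate is aligned with the stress deviator and trace free (tr ṗ = 0), which is the classical plastic incompressibility of metals. For this elastic domain, the support function introduced below to express the dissipation is H(q) = sqrt(2/3) σ_Y |q| when tr q = 0 and +∞ otherwise, since the spherical part of τ is unconstrained in K.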
The normality rule is recovered by applying the definition of subgradient, such that, ṗ is a subgradient of I K at a point τ ∈ K for any τ * ∈ K, given by, τ ∈ K, I K (τ * ) ≥ I K (τ ) + ṗ : (τ * -τ ), ∀τ * ∈ K ⇔ ṗ ∈ ∂I K (τ ), τ ∈ K (4.8) where the set of all sub-gradients at τ is the sub-differential of I K at τ and is denoted by ∂I K (τ ). At this stage of the analysis, Hill's principle is equivalent to convex properties of the elastic domain K and the normality plastic strain flow rule. For τ ∈ K, Hill ⇔ ṗ ∈ N K (τ ) ⇔ ṗ ∈ ∂I K (τ ) (4.9) Dissipation of energy during plastic deformations All ingredients are settled, such as, we have the variable set (u, p) and their duals (σ, τ ) which lie in the convex set K. Also, the plastic evolution law is given by ṗ ∈ ∂I K (τ ). It is convenient to compute the plastic dissipated energy during a plastic deformation process. For that, the dissipated plastic power density can be constructed from the Clausius-Duhem inequality. To construct such dissipation energy let us define first the support function H(q), q ∈ M 3 s → H(q) := sup τ ∈K {τ • q} ∈ (-∞, +∞] (4.10) The support function is convex, 1-homogeneous, Chapter 4. Variational models of perfect plasticity H(λq) = λH(q), ∀λ > 0, ∀q ∈ M n s (4.11) and it follows the triangle inequality, i.e., H(q 1 + q 2 ) ≤ H(q 1 ) + H(q 2 ), for every q 1 , q 2 ∈ M n s . (4.12) The support function of the plastic strain rate H( ṗ) is null if the plastic flow is zero, non negative when 0 ∈ K, and takes the value +∞ when K is not bounded. Using Clausius-Duhem inequality for an isotherm transformation, the dissipation power is defined by D = σ : ė -ψ, (4.13) and the second law of thermodynamics enforce the dissipation to be positive or null, D = τ : ṗ ≥ 0. (4.14) Using Hill's principle, the definition of the support function and some convex analysis, one can show that the plastic dissipation is equal to the support function of the plastic flow. D = H( ṗ) (4.15) The starting point to prove (4.15) is the Hill's principle or equivalently the plastic strain flow rule. For τ ∈ K, τ : ṗ ≥ τ * : ṗ, ∀τ * ∈ K. By passing the right term to the left and taking the supremum over all ṗ ∈ M n s , we get, sup ṗ∈M n s {τ : ṗ -H( ṗ)} ≥ 0. (4.18) Since K is a non empty close convex set, H( ṗ) is convex and lower semi continuous, we have built the convex conjugate function of H( q) in the sense of Legendre-Fenchel. Moreover, one observes that the conjugate of the support function is the indicator function, given by, I K (τ ) := sup ṗ∈M n s {τ : ṗ -H( ṗ)} = 0 if τ ∈ K + ∞ if τ / ∈ K (4.19) Hence, the following equality holds for τ ∈ K, D = τ : ṗ = H( ṗ). (4.20) Variational formulation of perfect plasticity models Remark 5 The conjugate subgradient theorem says that, for τ ∈ K a non empty closed convex set, ṗ ∈ ∂I K (τ ) ⇔ D = τ : ṗ = H( ṗ) + I K (τ ) ⇔ τ ∈ ∂H( ṗ) Finally, once the plastic dissipation power defined, by integrating over time [t a , t b ] for smooth evolution of p, the plastic dissipated energy is, D(p; [t a , t b ]) = t b ta H( ṗ(s)) ds (4.21) This problem is rate independent because the dissipation does not depend on the strain rate , i.e. D( ė, ṗ) = D( ṗ) and is 1-homogeneous. Variational formulation of perfect plasticity models Consider a perfect elasto-plastic material with a free energy ψ(e, p) occupying a smooth region Ω ⊂ R n , subject to time dependent boundary displacement ū(t) on a Dirichlet part ∂ D Ω of its boundary. 
For the sake of simplicity the domain is free of stress and no body force applies on it, such that, σ • ν = 0 on the complementary portion ∂ N Ω = ∂Ω \ ∂ D Ω, where ν denotes the appropriate normal vector. Assume the initial state of the material being (e 0 , p 0 ) = (0, 0) at t = 0. Internal variables e(u) and p are supposed to be continuous-time solution of the quasi-static evolution problem. At each time the body is in elastic equilibrium with the prescribed loads at that time, such as it satisfies the following equations,                  σ = ∂ψ ∂e (e, p) in Ω τ = - ∂ψ ∂p (e, p) ∈ ∂H( ṗ) in Ω div(σ) = 0 in Ω u = ū(t) on ∂ D Ω σ • ν = 0 on ∂ N Ω We set aside problems where plasticity strain may develop at the interface ∂ D Ω. The problem can be equivalently written in a variational formulation, which is based on two principles, i. Energy balance ii. Stability condition Let the total energy density be defined as the sum of the elastic energy and the dissipated plastic energy, E t (e(u), p) = ψ(e(u), p)ψ(e 0 , p 0 ) + D(p; [0, t]) Energy balance The concept of energy balance is related to the evolution of state variables in a material point, and enforce the total energy rate be equal to the mechanical power energy at each time, i.e. Ėt = σ t : ėt . ( The total energy rate is, Ėt = ∂ψ ∂e (e t , p t ) : ėt + ∂ψ ∂p (e t , p t ) : ṗt + H( ṗt ), (4.23) and using the definition of τ = -∂ψ/∂e and σ = ∂ψ/∂e, we obtain, τ t • ṗt = sup τ ∈K {τ : ṗt } (4.24) Stability condition for the plastic strain The stability condition for p is finding stable p t ∈ M n s for a given loading deformation e t . We propose to approximate the continuous time evolution by a time discretization, such that, 0 = t 0 < • • • < t i < • • • < t N = t b and at the limit max i |t it i-1 | → 0. At the current time t i = t, let the material be at the state e t i = e and p t i = p and the previous state (e t i-1 , p t i-1 ). The discretized plastic strain rate is ṗt (p-p t i-1 )/(t-t i-1 ). During the laps time from t i-1 to t the increment of plastic energy dissipated is t t i-1 H( ṗt )ds H(pp t i-1 ). Hence taking into account all small previous plastic dissipated energy events, the total dissipation is approximated by, D(p) := H(p -p t i-1 ) + D(p t i-1 ) (4.25) At the current time, a plastic strain perturbation is performed for a fixed total strain changing the system from (e, p) to (e, q). The definition of the stability condition adopted here is written as a variation of the total energy between this two states, p stable, e given ⇔ ψ(e, q) + H(qp t i-1 ) ≥ H(pp t i-1 ) + ψ(e, p), ∀q ∈ M 3 s (4.26) We wish to highlight the stability definition adopted, which is for infinitesimal transformations the flow rule. H(qp t i-1 ) ≥ H(pp t i-1 ) -ψ(e, q)ψ(e, p) qp : (qp), ∀q ∈ M n s , q = p (4.27) Consider small variations of the plastic strain p in the direction p for a growing total energy, such that for some h > 0 small enough and p + hp ∈ M n s we have, Using the Legendre transform, we get, τ ∈ ∂H(p -p t i-1 ) ⇔ (p -p t i-1 ) ∈ ∂I K (τ ). (4.29) To recover the continuous-time evolution stability for p, divide by δt = tt i-1 and pass δt to the limit. We recover the flow rule ṗ ∈ ∂I K (τ ), or equivalently in the conjugate space τ ∈ ∂H( ṗ). Let us justify the definition adopted of the stability by showing that there is no lowest energy that can be found for a given e t . 
Without loss of any generality assume a continuous straight smooth path p(t) starting at p(0) = p and finishing at p(1) = q, such as, (4.31) t ∈ [0, 1] → p(t) = (1 -t)p + tq, ∀q ∈ M n s (4. The right hand side is path independent, by taking the infimum over all plastic strain paths, we get, inf t →p(t) p(0)=p, p(1)=q 1 0 H( ṗ(s))ds ≥ τ * : (q -p) (4.32) The left hand side does not depends on τ * , taking the supremum for all τ * ∈ K, and applying the triangle inequality for any p t i-1 , one obtains, inf t →p(t) p(0)=p, p(1)=q 1 0 H( ṗ(s))ds ≥ H(q -p) ≥ H(q -p t i-1 ) -H(p -p t i-1 ). (4.33) which justifies the a posteriori adopted definition of the stability. The stability condition for the displacement is performed on the first chapter and we simply recover the equilibrium constitutive equations for the elastic problem with the prescribed boundary conditions. Numerical implementation and verification of perfect elasto-plasticity models - ∂ N Ω g(t) • u dH n-1 , where H n-1 denotes the Hausdorff n -1-dimensional measure of the boundary. Typical plastic yields criterion used for metal are Von Mises or Tresca, which are well known to have only a bounded deviatoric part of the stress, thus they are insensitive to any stress hydrostatic contributions. Consequently, the plastic strain rate is also deviatoric ṗ ∈ dev(M n s ) and it is not restrictive to assume that p ∈ dev(M n s ). For being more precise but without going into details, existence and uniqueness is given for solving the problem in the stress field, σ ∈ L 2 (Ω; M n s ) (or e(u) ∈ L 2 (Ω; M n s ) ) with a yield surface constraint σ ∈ L ∞ (Ω; dev(M n s )). Experimentally it is observed that plastic strain deformations concentrate into shear bands, as a macroscopic point of view this localization creates sharp surface discontinuities of the displacement field. In general the displacement field cannot be solved in the Sobolev space, but find a natural representation in a bounded deformation space u ∈ BD(Ω) when the plastic strain becomes a Radon measure p ∈ M(Ω ∪ ∂ D Ω; dev(M 3 s )). The problem of finding (u, p) minimizing the total energy and satisfying the boundary conditions is solved by finding stable states variables trajectory i.e. stationary points. This quasi-static evolution problem, is numerically approximated by solving the incremental time problem, i.e. for a given time interval [0, T ] subdivided into (N + 1) steps we have, 0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T . The discrete problem converges to the continuous time evolution provided max i (t it i-1 ) → 0, and the total energy at the time t i in the discrete setting is, E t i (u i , p i ) = Ω 1 2 A(e(u i ) -p i ) : (e(u i ) -p i ) + D i (p i ) dx - ∂ N Ω g(t i ) • u i dH n-1 where, 4.3. Numerical implementation and verification of perfect elasto-plasticity models D i (p i ) = H(p i -p i-1 ) + D i-1 (4.34) for a prescribed u i = ūi on ∂ D Ω. Let i be the the current time step, the problem is finding (u i , p i ) that minimizes the discrete total energy, i.e (u i , p i ) := argmin u∈C i p∈M(Ω∪∂ D Ω;dev(M 3 s )) E t i (u, p) (4.35) where p = (ū iu) • ν on ∂ D Ω and C i is the set of admissible displacement, C i = {u ∈ H 1 (Ω) : u = ūi on ∂ D Ω}. The total energy E(u, p) is quadratic and strictly convex in u and p separately. For a fixed u or p, the minimizer of E(•, p) or E(u, •) exists, is unique and can easily be computed. Thus, a natural algorithm technique employed is the alternate minimization detailed in Algorithm 3, where δ p is a fixed tolerance. 
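As an illustration of this alternate strategy, the following one-dimensional Python sketch reproduces the structure of Algorithm 3 on a two-element bar: for fixed plastic strains the elastic problem is solved exactly, and for a fixed displacement the plastic strains are updated by the one-dimensional return map relative to the previous loading step; the loop stops once p no longer changes. It is a toy version only; the material values are arbitrary, and the computations of this chapter rely on finite elements and iterative solvers.

    import numpy as np

    E, h = 1.0, 0.5                      # Young's modulus and element length (arbitrary)
    sigma_p = np.array([1.0, 0.8])       # per-element yield stresses, element 2 weaker
    tol = 1e-10

    def load_step(U, p_prev):
        # alternate minimization at one loading step, starting from the previous state
        p = p_prev.copy()
        while True:
            # minimize E(., p): equilibrium of the internal node u1 (u0 = 0, u2 = U)
            u1 = 0.5 * (U + h * (p[0] - p[1]))
            e = np.array([u1 / h, (U - u1) / h])
            # minimize E(u, .): projection of p onto the yield constraint (return map)
            sig_trial = E * (e - p_prev)
            excess = np.maximum(np.abs(sig_trial) - sigma_p, 0.0)
            p_new = p_prev + np.sign(sig_trial) * excess / E
            if np.max(np.abs(p_new - p)) <= tol:
                return e, p_new
            p = p_new

    p = np.zeros(2)
    for U in np.linspace(0.0, 1.2, 7):   # monotonically increasing end displacement
        e, p = load_step(U, p)
        print(f"U = {U:.2f}, stress = {E * (e - p)}, p = {p}")

At the end of the loading all the plastic flow has accumulated in the weaker element while the stress remains uniform and bounded by its yield stress.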
More precisely, at the loading time t i , for a given p j i , let find u j i that minimizes E(u, p j i ), notice that the plastic dissipation energy does not depend on the strain e(u), thus, u j i := argmin u∈C i Ω 1 2 A(e(u)p j i ) : (e(u)p j i ) dx - ∂ N Ω g(t) • u dH n-1 (4.36) This is a linear elastic problem. Then, for a given u j i let find p on each element cell, such as it minimizes E(u j i , p). This problem is not easy to solve in the primal formulation, p j i := argmin p∈M(Ω∪∂ D Ω;dev(M n s )) 1 2 A(e(u j i )p) : (e(u j i )p) + H(pp i-1 ) but from the previous analysis, the stability condition of this problem is A(e(u j i )p) ∂H(pp i-1 ). Using the Legendre-transform, the stability of the conjugate problem is given by (p -p i-1 ) ∈ ∂I K (A(e(u j i ) -p)). One can recognize the flow rule in the discretized time. This is the stability condition of the problem, p j i := argmin p∈M(Ω∪∂ D Ω;dev(M n s )) A(e(u j i )-p)∈K 1 2 A(p -p i-1 ) : (p -p i-1 ). The minimization with respect to u is a simple linear problem solved using preconditioned conjugated gradient while minimization with respect to p can be reformulated Solve the equilibrium, u j+1 := argmin u∈C i E i (u, p j ) 4: Solve the plastic strain projection on each cell, p j+1 := argmin j := j + 1 6: until p jp j-1 L ∞ ≤ δ p 7: Set, u i := u j and p i := p j Numerical verifications A way to do a numerical verification is to recover the closed form solution of a bi-axial test in 3D provided in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF]. In the fixed orthonormal basis (e 1 , e 2 , e 3 ), consider a domain Ω = (-d/2, d/2) × (-l/2, l/2) × (0, l), (d < l), with the boundary conditions:    σ 11 = 0 on x 1 = ±d/2 σ 22 = g 2 on x 1 = ±l/2 σ 13 = σ 23 = 0 on x 3 = 0, l and, add u 3 = 0 on x 3 = 0 u 3 = tl on x 3 = l. Considering the classical problem to solve,    div(σ) = 0 in Ω σ = Ae(u) in Ω e(u) = (∇u + ∇ T u)/2 in Ω constrained by a Von Mises plasticity yield criterion, 3 2 dev(σ) : dev(σ) ≤ σ p It is shown in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF] that the domain remains elastic until the plasticity is triggered at a critical loading time t c as long as 0 ≤ g 2 ≤ σ p / √ 1 -ν + ν 2 , t c = 1 2E (1 -2ν)g 2 + 4σ 2 p -3g 2 2 where (E, ν) denote respectively the Young's modulus and the Poisson ratio. For 0 ≤ t ≤ t c the elastic solution stands for        σ(t) = g 2 e 2 ⊗                              σ(t) =g 2 e 2 ⊗ e 2 + σ3 e 3 ⊗ e 3 , σ3 = 1 2 g 2 + 4σ 2 p -3g 2 2 e(t) = -ν(1 + ν) g 2 E e 1 ⊗ e 1 + (1 -ν 2 ) g 2 E e 2 ⊗ e 2 + t(-νe 1 ⊗ e 1 -νe 2 ⊗ e 2 + e 3 ⊗ e 3 ) p(t) =(t -t c ) - g 2 + σ3 2 σ3 -g 2 e 1 ⊗ e 1 + 2g 2 -σ3 2 σ3 -g 2 e 2 ⊗ e 2 + e 3 ⊗ e 3 u(t) = -ν(1 + ν) g 2 E -νt c - g 2 + σ3 2 σ3 -g 2 (t -t c ) x 1 e 1 + (1 -ν 2 ) g 2 E -νt c + 2g 2 -σ3 2 σ3 -g 2 (t -t c ) x 2 e 2 + tx 3 e 3 (4.38) A numerical simulation has been performed on a domain parametrized by l = .5 and d = .2, pre-stressed on opposite faces by g 2 = .5 with the material parameters E = 1,σ p = 1 and a Poisson ratio set to ν = .3. For those parameters, numerical results and exact solution have been plotted see Figure 4.1, and matches perfectly. One difficulty is to get closed form for different geometry and plasticity criterion. 
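The scalar quantities entering this verification follow directly from the closed-form expressions above; as a quick numerical check, independent of the finite-element computation itself, one may evaluate them with the parameters used in the simulation:

    import numpy as np

    E, nu, sigma_p, g2 = 1.0, 0.3, 1.0, 0.5     # parameters of the simulation above

    # admissible range of the lateral pre-stress
    g2_max = sigma_p / np.sqrt(1.0 - nu + nu**2)

    # critical loading time at which plasticity is triggered
    t_c = ((1.0 - 2.0 * nu) * g2 + np.sqrt(4.0 * sigma_p**2 - 3.0 * g2**2)) / (2.0 * E)

    # constant axial stress during the plastic stage
    sigma_3 = 0.5 * (g2 + np.sqrt(4.0 * sigma_p**2 - 3.0 * g2**2))

    print(f"g2 <= g2_max: {g2 <= g2_max}, t_c = {t_c:.4f}, sigma_3 = {sigma_3:.4f}")

For these values t_c ≈ 1.00 and σ̄_3 ≈ 1.15.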
Alternate minimization technique converge to the exact solution on this example for Von Mises in 3D Conclusion The adopted strategy to model a perfect elasto-plastic material is to prescribe the elastic stress domain set (closed convex) with plastic yields functions without dealing with corners and approximate the continuous evolution problem by discretized time steps. The 99 Chapter 4. Variational models of perfect plasticity implemented algorithm solves alternately the elastic problem and the plastic projection onto the yield surface. Hence, there is no difficulty to implement other perfect plastic yield criteria. A verification is performed on the biaxial test for Von Mises plastic yield criteria. Chapter 5 Variational phase-field models of ductile fracture by coupling plasticity with damage Phase-field models referred to as gradient damage models of brittle fracture are very efficient to predict cracks initiation and propagation in brittle and quasi-brittle materials [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF][START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF]. They were originally conceived as an approximation of Francfort Marigo's variational formulation [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF] which is based on Griffith's idea of competition between elastic and fracture energy. Their model inherits a fundamental limitation of Griffith's theory which is a discontinuity of the displacement belongs to the damage localization strip, and this is not observed during fractures nucleation in ductile materials. Moreover, they cannot be used to predict cohesive-ductile fractures since no permanent deformations are accounted for. Plasticity models [START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF][START_REF] Salençon | Elasto-plasticité[END_REF][START_REF] Halphen | Sur les matériaux standard généralisés[END_REF][START_REF] Maso | Quasistatic crack growth in elasto-plastic materials: The two-dimensional case[END_REF][START_REF] Babadjian | Quasi-static evolution in nonassociative plasticity: the cap model[END_REF] are widely used to handle with the aforementioned effects by the introduction of the plastic strain variable. To capture ductile fracture patterns the idea is to couple the plastic strain coming from plasticity models with the damage in the phase-field approaches to fracture. The goal of this chapter is to extend the Alessi-Marigo-Vidoli work [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] by considering any associated perfect plasticity and to provide a general algorithm to solve the problem for any dimensions. 
We provide a qualitative comparison of crack nucleation in various specimen with published experimental results on metals material. We show capabilities of the model to recover cracks patterns characteristics of brittle and ductile fractures. After the set of parameters being adjusted to recover ductile fracture we focus solely on such regime to study cracks nucleation and propagation phenomenology in mild notched specimens. The chapter is organized as follow: Section 5.1.1 starts by aggregating some experiments illustrating mechanisms of ductile fracture which will constitute basis of numerical comparisons provided in the last part of this chapter. Section 5.1.2 is devoted to the introduction of variational phase-field models coupled with perfect plasticity and to recall some of their properties. Section 5.1.3 focuses on one dimension bar in traction to provide the cohesive response of the material and draw some fundamental properties similarly 5.1. Phase-field models to fractures from brittle to ductile to [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF]. A numerical implementation technique to solve such coupled models is provided in section 5.2. For the remainder we investigate ductile fracture phenomenology by performing simulations on various geometries such as, rectangular specimen, a mild notch 2d plane strain and 3d round bar respectively exposed in sections 5. Numerous experimental evidences show a common phenomenology of fracture nucleation in a ductile materials. To illustrate this, we have selected relevant experiments showing fracture nucleation and propagation in a plate and in a round bar. For instance in [START_REF] Spencer | The influence of iron content on the plane strain fracture behaviour of aa 5754 al-mg sheet alloys[END_REF] the role of ductility with the influence of the iron content in the formation of shear band have been investigated. Experiments on Aluminum alloy AA 5754 Al -Mg show fractures nucleation and evolution in the thickness direction of the plate specimen illustrated in Figure 5.2. The tensile round bar is another widely used test to investigate ductile fractures. However, tracking fractures nucleation inside the material is a challenging task and requires special equipment like tomography imaging to probe. Nevertheless Benzerga [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF][START_REF] Amine Benzerga | Synergistic effects of plastic anisotropy and void coalescence on fracture mode in plane strain[END_REF] and Luu [START_REF] Luu | Déchirure ductile des aciers à haute résistance pour gazoducs (X100)[END_REF] results show pictures of cracks nucleation and propagation inside those types of samples see Figure 5.19. A simpler method is the fractography which consists in studding fracture surfaces of materials after failure of the samples. Typical ductile fractures 5.1. Phase-field models to fractures from brittle to ductile and powerful approach to study theoretically and solve numerically those problems. The coupling between both models is done at the proposed total energy level. 
We start by recalling some important properties of variational phase-field models, interpreted as gradient-damage models, and of variational perfect plasticity. Consider an elasto-plastic-damageable material with Hooke's law tensor A occupying a region Ω ⊂ R^n in the reference configuration. The region Ω is subject to a time-dependent boundary displacement ū(t) on a Dirichlet part ∂_D Ω of its boundary and to a time-dependent stress g(t) = σ·ν on the remainder ∂_N Ω = ∂Ω \ ∂_D Ω, where ν denotes the appropriate normal vector. A safe-load condition is required on g(t) to set aside well-known issues in plasticity theory. For the sake of simplicity body forces are neglected, so that at equilibrium the stress satisfies

div(σ) = 0 in Ω.

The infinitesimal total deformation e(u) is the symmetric part of the spatial gradient of the displacement field u, i.e.

e(u) = (∇u + ∇^T u)/2.

Since the material undergoes permanent deformations, it is usual in small-deformation plasticity to introduce the (symmetric) plastic strain tensor p, such that the kinematic admissibility is the additive decomposition

e(u) = ε + p,

where ε is the elastic strain tensor. The material also depends on the damage variable, denoted α, which is bounded between two extreme states: α = 0 is the undamaged material and α = 1 refers to the broken one. Let the damage deteriorate the material properties through an isotropic modulation of the Hooke's law tensor, a(α)A, where the stiffness function a(α) is continuous and decreasing with a(0) = 1, a(1) = 0. In linearized elasticity the recoverable energy density of the material reads

ψ(e(u), α, p) := ½ a(α) A(e(u) − p) : (e(u) − p).

Consequently, the relation between the stress tensor σ and the strain is

σ = ∂ψ/∂e = a(α) A(e(u) − p).

The coupling of plasticity with damage is obtained by letting the elastic stress domain shrink with the damage: admissible stresses lie in b(α)K, where K is the initial closed convex elastic domain and b(α) is a continuous decreasing coupling function, so that the plastic flow rule reads ṗ ∈ ∂I_{b(α)K}(σ). One can recognize Hill's principle by applying the definition of the subdifferential and of the indicator function. Since b(α)K is a non-empty closed convex set, using Legendre-Fenchel duality the conjugate form of the plastic flow rule is σ ∈ b(α)∂H(ṗ), where the plastic dissipation potential H(q) = sup_{τ∈K} {τ : q} is convex, subadditive and positively 1-homogeneous for all q ∈ M^{n×n}_s. The dissipated plastic energy is obtained by integrating the plastic dissipation power over time, such that

φ_p := ∫₀ᵗ b(α) H(ṗ(s)) ds.   (5.1)

This dissipation is not the only one, and we have to take into account the surface energy produced by the fracture. Inspired by phase-field models of brittle fracture [Ambrosio, On the approximation of free discontinuity problems; Bourdin, The variational approach to fracture; Pham, Gradient damage models and their use to approximate brittle fracture; Pham, Stability of homogeneous states with gradient damage models], we define the surface dissipation term as

φ_d := ∫₀ᵗ [ σ_c²/(2Ek) ( w′(α)α̇ + 2ℓ²∇α·∇α̇ ) + b′(α)α̇ ∫₀ˢ H(ṗ(τ)) dτ ] ds,   (5.2)

where the first term is the classical approximated surface energy of brittle fracture and the last term is artificially introduced so as to be combined with φ_p. Precisely, after summation of the free energy ψ(e(u), α, p), the work of the external forces, the dissipated plastic energy φ_p and the dissipated damage energy φ_d, the total energy takes the form

E_t(u, α, p, p̄) = ∫_Ω ½ a(α)A(e(u) − p) : (e(u) − p) dx − ∫_{∂_N Ω} g(t)·u dH^{n−1} + ∫_Ω b(α) ∫₀ᵗ H(ṗ(s)) ds dx + σ_c²/(2Ek) ∫_Ω ( w(α) + ℓ²|∇α|² ) dx,   (5.3)

where p̄ = ∫₀ᵗ ṗ(s) ds is the cumulated plastic strain, which is embedded in the cumulated plastic dissipation energy ∫₀ᵗ H(ṗ(s)) ds.
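To fix ideas on how the three contributions of (5.3) combine at a material point, the following one-dimensional Python sketch evaluates the corresponding energy density for candidate constitutive functions (those of the AT_1 model discussed below); the constants η_b and k and all numerical values are purely illustrative and are not taken from any computation of this chapter.

    import numpy as np

    # candidate constitutive functions (AT_1-type, see Table 5.1); eta_b is a small residual
    def a(alpha):             return (1.0 - alpha) ** 2
    def w(alpha):             return alpha
    def b(alpha, eta_b=1e-3): return a(alpha) + eta_b

    def energy_density(e, p, pbar, alpha, dalpha, E, sigma_p, sigma_c, k, ell):
        # one-dimensional counterpart of the integrand of (5.3):
        # in 1d, H(q) = sigma_p * |q|, so the cumulated plastic dissipation is sigma_p * pbar
        elastic = 0.5 * a(alpha) * E * (e - p) ** 2
        plastic = b(alpha) * sigma_p * pbar
        damage  = sigma_c ** 2 / (2.0 * E * k) * (w(alpha) + ell ** 2 * dalpha ** 2)
        return elastic + plastic + damage

    # a partially damaged point carrying some accumulated plastic strain (illustrative values)
    print(energy_density(e=0.05, p=0.02, pbar=0.02, alpha=0.3, dalpha=1.0,
                         E=1.0, sigma_p=1.0, sigma_c=3.0, k=0.5, ell=0.1))

The full model integrates this density over Ω together with the boundary work term of (5.3).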
The surface dissipation potential w(α) is a continuous increasing function such that w(0) = 0 and up to a rescaling, w(1) = 1. Since the damage is a dimensionless variable, the introduction of ∇α enforce to have > 0 a regularized parameter which has a dimension of the length. Note that the total energy (5.3) is composed of two dissipations potentials ϕ p and ϕ d coupled where, ϕ p = Ω b(α) t 0 H( ṗ(s)) ds dx, ϕ d = σ 2 c 2Ek Ω w(α) + 2 |∇α| 2 dx. (5.4) 5.1. Phase-field models to fractures from brittle to ductile Taking p = 0 in (5.3), the admissible stress space is bounded by, A -1 σ : σ ≤ σ 2 c Ek max α w (α) c (α) where E is the Young's modulus, the compliance function is c(α) = 1/a(α) and let k = max α w (α) c (α) . Therefore, without plasticity in one dimensional setting an upper bound of the stress is σ c . A first conclusion is the total energy (5.3) is composed of two coupled dissipation potentials associated with two yields surfaces and their evolutions will be discussed later. In the context of smooth triplet state variable ζ = (u, α, p) and since the above total energy (5.3) must be finite, we have α ∈ H 1 (Ω) and e(u), p belong to L 2 (Ω). However, experimentally it is observed that plastic strain concentrates into shear bands. In our model since ṗ ∈ b(α)K, the plastic strain concentration is driven by the damage localization and both variables intensifies on the same confined region denoted J(ζ), where J is a set of "singular part" which a priori depends on all internal variables. Also, the damage is continuous across the normal surfaces of J(ζ) but not the gradient damage term which may jump. Accordingly, the displacement field cannot be solved in the Sobolev space, but find a natural representation in special bounded deformation space SBD if the Cantor part of e(u) vanishes, so that the strain measure can be written as, e(u) = e(u) + u ν H n-1 on J(ζ(x)) where e(u) is the Lebesgue continuous part and denotes the symmetrized tensor product. For the sake of simplicity, consider the jumps set of the displacement being a smooth enough surface, i.e the normal ν is well defined, and there is no intersections with boundaries such that J(ζ)∩∂Ω = ∅. The plastic strain turns into a Dirac measure on the surface J(ζ). Without going into details, the plastic strain lies in a non-conventional topological space for measures called Radon space denoted M. Until now, the damage evolution have not been set up and the plastic flow rule is hidden in the total energy adopted. Let us highlight this by considering the total energy be governed by three principles; damage irreversibility, the stability of E t (u, α, p) with respect to all admissible variables (u, α, p) and the energy balance. We focus on the time-discrete evolution, by considering a time interval [0, T ] subdivided into (N + 1) steps such that, 0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T . The following discrete problem converges to the continuous time evolution provided max(t it i-1 ) → 0. At any time t i , the sets of admissible displacement, damage and plastic strain fields respectively denoted C i , D i and Q i are: 107 Chapter 5. 
Variational phase-field models of ductile fracture by coupling plasticity with damage C i = u ∈ SBD(Ω) : u = ū(t i ) on ∂ D Ω , D i = α ∈ H 1 (Ω) : α i-1 ≤ α < 1 in Ω , Q i = p ∈ M( Ω; M n×n s ) such that, p = u ν on J ζ(x) (5.5) and because plastic strains may develop at the boundary, we know from prior works on plasticity [START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF] that we cannot expect the boundary condition to be satisfied, thus we will have to set up p = (ū(t i )u) ν on ∂ D Ω. It is convenient to introduce Ω ⊃ Ω a larger computational domain which includes the jump set and ∂ D Ω, this will become clearer. Note that the damage irreversibility is in the damage set D i . The total energy of the time-discrete problem is composed of (5.3) on the regular part and b(α)D ( u ν, [0, t i ]) on the singular part, such that, E t i (u, α, p) = Ω\J(ζ) 1 2 a(α)A(e(u) -p) : (e(u) -p) dx - ∂ N Ω g(t i ) • u dH n-1 + Ω b(α)D i (p) dx + σ 2 c 2Ek Ω\J(ζ) w(α) + 2 |∇α| 2 dx (5.6) where D i (p) = H(p -p i-1 ) + D i-1 (5.7) the total energy is defined over the regular and singular part of the domain, and the evolution is governed by, Definition 8 (Time discrete coupled plasticity-damage evolution by local minimization) At every time t i find stable variables trajectory (u i , α i , p i ) ∈ C i × D i × Q i that satisfies the variational evolution: i. Initial conditions: u 0 = 0, α 0 = 0 and p 0 = 0 ii. Find the triplet ζ i = (u i , α i , p i ) which minimizes the total energy, E t i (u, α, p) iii. Energy balance, E t i (u i , α i , p i ) =E t 0 (u 0 , α 0 , p 0 ) + i k=1 ∂ D Ω (σ k ν) • (ū k -ūk-1 ) dH n-1 - ∂ N Ω (g(t k ) -g(t k-1 )) • u k dH n-1 (5.8) The damage and plasticity criterion are obtained by writing the necessary first order optimality condition of the minimizing problem E t i (u, α, p). Explicitly, there exists h > 0 small enough, such that for (u i + hv, α i + hβ, p i + hq) ∈ C i × D i × Q i , E t i (u i + hv, α i + hβ, p i + hq) ≥ E t i (u i , α i , p i ) (5.9) Consider that the displacement at u i in the direction v might extend the jump set of J(v). The variation of the total energy E t i (u i + hv, α i + hβ, p i + hq) is equal to, Ω\(J(ζ i )∪J(v)) 1 2 a(α i + hβ)A e(u i + hv) -(p i + hq) : e(u i + hv) -(p i + hq) dx - ∂ N Ω g(t i ) • (u i + hv) dH n-1 + Ω\(J(ζ i )∪J(v)) b(α i + hβ)D i (p i + hq) dx + J(ζ i )∪J(v) b(α i + hβ)D i (( u i + hv) ν) dH n-1 + σ 2 c 2Ek Ω\(J(ζ i )∪J(v)) w(α i + hβ) + 2 |∇(α i + hβ)| 2 dx (5.10) Note that the plastic dissipation term is split over the regular part and the singular part and for simplicity we set aside the plastic strain localization on the Dirichlet boundary. Equilibrium and kinematic admissibility: Take β = 0 and q = 0 in (5.9) and (5.10) such that E t i (u i +hv, α i , p i ) ≥ E t i (u i , α i , p i ). Using (5.7) we just have to deal with the current plastic potential H which is subadditive and 1-homogeneous. Hence, the fourth term in (5.10) becomes, 5.1. Phase-field models to fractures from brittle to ductile 2. The damage yield criteria in the bulk, f D (σ t , α t (x), pt (x)) := - 1 2 c (α t (x)) E σ 2 t + σ 2 c 2kE w (α t (x)) -2 2 α t (x) + b (α t (x))σ p pt (x) ≥ 0 (5.31) 3. The damage yield criteria on x 0 , b (α t (x 0 )) u(x 0 ) σ p - 2 σ 2 c kE α t (x 0 ) ≥ 0 (5.32) 4. The damage yield criteria on ±L, α t (-L) ≥ 0, α t (L) ≤ 0 (5.33) 5. 
The plastic yield criteria in the bulk and on the jump, We restrict our study to r Y = σ c /σ p > 1, meaning that the plastic yield surface is below the damage one. Consequently after the elastic stage, the bar will behave plastically. During the plastic stage, the cumulation of plastic strain decreases f D until the damage yield criteria is reached. On the third stage both damage and plasticity evolves simultaneously such that f D = 0 and f Y = 0 on the jumps x 0 . Of course there is no displacement jump on the bar before the third stage. Let expose the solution (u, α, p) Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage for the elastic, plastic and plastic damage stages. f Y (σ t , α t (x)) := |σ t | -b(α t (x))σ p ≤ 0 ( 5 The elastic response of the bar ends once the tension reached u t = σ p /E. During this regime the damage and plastic strain remain equal to zero. After this loading point, the plasticity stage begins and we have a uniform p = p = u tσ p /E and α = 0 in Ω. Since b (α) < 0 and p increases during the plastic stage, the damage yield criteria f D decreases until the inequality (5.31) becomes an equality. At this loading time both criterion are satisfied, such that, f Y = 0 and f D = 0. Hence, plugging the equation (5.34) into (5.31), we get, -b (α t (x))p t (x) = σ p E - 1 2 c (α t (x))b 2 (α t (x)) + r 2 Y 2k w (α t (x)) -2 2 α t (x) (5.39) By taking α t (x) = 0 in the above equation, we get the condition when the plastic stage ends, for a uniform plastic strain, p = u t - σ p E = σ p (-b (0))E r 2 Y 2k w (0) - 1 2 c (0)b 2 (0) (5.40) The last stage is characterized by the evolution of the damage. For a given x 0 take L long enough to avoid any damage perturbation at the boundary such that, the damage remains equal to zero at the extremities of the bar α(±L) = 0 and assume being maximum at x 0 , α(x 0 ) = β. Let α ≥ 0 over [-L, x 0 ) with α (-L) = 0, multiplying the equation (5.31) by 2α and integrate over [-L, x 0 ), we get, - 2E σ p x 0 -L b (α t (x))α t (x)p t (x) dx = c(β) -c(0) σ 2 t σ 2 p + r 2 Y k w(β) -2 β 2 (5.41) A priori, the cumulated plastic strain evolves along the part of the bar [-L, x 0 ), but since the maximum damage value β is reached on x 0 and the stress is uniform in the bar we have σ t (x) ≤ b(β)σ p . In other words the plasticity does not evolve anymore in the bar except on x 0 , and p is equal to (5.40). We obtain a first integral of the form of, 2 β 2 = k r 2 Y c(β) -c(0) b 2 (β) + w(β) + 2 b(β) -b(0) p Ek σ p r 2 Y (5.42) We know that on the jump set, we have, b (β) u(x 0 ) σ p - 2 σ 2 c kE β = 0 (5.43) Since β is known, the stress on the bar and the displacement jump on x 0 can be computed. We define the energy release rate as the energy dissipated by the damage process, 5.2. Numerical implementation of the gradient damage models coupled with perfect plasticity G t := Ω\{x 0 } σ c 2kE w(α t (x)) + 2 α 2 t (x) + b(α t (x))σ p p dx + b(α t (x 0 ))σ p u(x 0 ) and the critical value is given for complete damage localization once σ = 0. Let us recall some fundamental properties for a(α), b(α) and w(α) to satisfy. Naturally the stiffness function must satisfy a (α) < 0, a(0) = 1 and a(1) = 0, and the damage potential function w (α) > 0, w(0) = 0 and up to a rescaling w(1) = 1. The required elastic phase is obtained for α → -a 2 (α)w (α)/a (α) is strictly increasing. The coupling function b (α) < 0 ensure that the damage yield surface decreases with the cumulated plastic strain and b(0) = 1. 
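Before turning to the choice of (a, b, w), note that the first two stages of the homogeneous bar response described above can be reproduced directly from the expressions given. In the following Python sketch the threshold (5.40) marking the end of the plastic stage is simply passed as an input value, chosen arbitrarily here, since it depends on the particular functions a, b, w and on k.

    import numpy as np

    E, sigma_p = 1.0, 1.0      # illustrative material parameters
    pbar_star = 0.5            # end of the plastic stage, to be computed from (5.40)

    def homogeneous_response(u_t):
        # stress and uniform plastic strain of the bar before damage evolves
        if u_t <= sigma_p / E:                   # elastic stage
            return E * u_t, 0.0
        p = u_t - sigma_p / E                    # plastic stage at constant stress sigma_p
        if p <= pbar_star:
            return sigma_p, p
        raise ValueError("damage stage: alpha > 0, governed by (5.41)-(5.43)")

    for u_t in [0.5, 1.0, 1.25, 1.5]:
        print(u_t, homogeneous_response(u_t))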
For numerical reasons (a, b, w) must be convex with respect to α, which is not the case for the closed-form solution provided in [Alessi, Gradient damage models coupled with plasticity and nucleation of cohesive cracks] for AT_k, see Table 5.1. Consequently, we prefer the model named AT_1, for which a computed 1d solution example (dark lines) is compared with the numerical simulation (colored lines) in Figure 5.3. The numerical implementation is detailed in the following section 5.2. For this 1d example, we see the three stages described above in the stress-displacement plot; precisely, the stress softening leads to a localization of the damage, in which a cohesive response is obtained at the center.

Table 5.1:
Name   a(α)                                   w(α)             b(α)
AT_1   (1 − α)²                               α                a(α) + η_b
AT_k   (1 − w(α)) / (1 + (c_1 − 1) w(α))      1 − (1 − α)²     (1 − w(α))^{c_2}

5.2 Numerical implementation of the gradient damage models coupled with perfect plasticity

In order to numerically implement the gradient damage model coupled with perfect plasticity, it is common to discretize in time and space. For the time-discrete evolution we refer to Definition 8. However, in the numerical implementation we do not enforce the energy balance condition, following the spirit of [Bourdin, The variational formulation of brittle fracture: numerical implementation and extensions; Bourdin, The Variational Approach to Fracture]. Function spaces are discretized through standard finite element methods over the domain. Both damage and displacement fields are projected onto linear Lagrange elements, whereas the plastic strain tensor is approximated by piecewise constant elements. By doing so we use what is probably the simplest finite element able to approximate the evolution problem. Conversely, the chosen finite element space cannot describe the jump set of u and the localization of p; it might however be possible to account for such effects by using discontinuous Galerkin methods instead. Nevertheless, as will be seen in the numerical simulations performed, the plasticity concentrates in a strip of a few elements once the damage localizes. Numerically we are not restricted to the Von Mises plastic criterion only, but can handle any associated plasticity. Since a(α), b(α) and w(α) are convex, the total energy is separately convex with respect to each of the variables (u, α, p), but it is not convex. A proposed algorithm to solve the evolution is alternate minimization, which guarantees a decrease of the energy along the iterations, although the solution might not be unique. At each time step t_i, the minimizations with respect to each variable are performed as follows: i. For a given (α, p), the minimization of E with respect to u is an elastic problem with the prescribed boundary conditions. To solve it we employ preconditioned conjugate gradient solvers. ii. The minimization of E with respect to α for fixed (u, p), subject to the irreversibility constraint (α ≥ α_{i−1}), is solved using the variational inequality solvers provided by PETSc [Balay, Efficient management of parallelism in object oriented numerical software libraries; Balay, PETSc users manual; Balay, PETSc Web page]. iii.
For a fixed (u, α) the minimization of E with respect to p is not straight forward the raw formulation however reformulated as a constraint optimization problem turns being a plastic strain projection onto a convex set which is solved using SNLP solvers provided by the open source snlp 1 . Boundaries of the stress elastic domain is constrained by a series of yields functions describing the convex set without dealing with none differentiability issues typically corners. The retained strategy to solve the evolution problem is to use nested loops. The inner loop solves the elasto-plastic problem by alternate i. and iii. until convergence. Then, the outer loop is composed of the previous procedure and ii., the exit is triggered once the damage has converged. This leads to the following Algorithm 4, where δ α and δ p are fixed tolerances. Argument in favor of this strategy is the elasto-plastic is a fast minimization problem, whereas compute ii. is slow, but changing loops orders haven't be tested. All computations were performed using the open source mef90 2 . Verifications of the numerical implementation have been performed on the elastodamage problem and elasto-plasticity problem separately considering three and two dimensions cases. The plasticity is verified with the existence and uniqueness of the bi axial test for elasto-plasticity in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF]. The implementation of the damage have been checked with propagation of fracture in Griffith regime, the optimal damage profile in 2d and many years of development by Bourdin. The verification of the coupling is done by comparison with the one dimensional setting solution in section 5.1.3. Solve the equilibrium, u k+1 := argmin u∈C i E t i (u, α j , p k ) 6: Solve the plastic strain projection on each cells, p k+1 := argmin p∈M n s a(α j )A(e(u k+1 )-p)∈b(α j )K 1 2 A(p -p i-1 ) : (p -p i-1 ) 7: k := k + 1 8: until p k -p k-1 L ∞ ≤ δ p 9: Set, u j+1 := u k and p j+1 := p k 10: Compute the damage, α j+1 := argmin α∈D i α≥α i-1 E t i (u j+1 , α, p j+1 ) 11: j := j + 1 12: until α jα j-1 L ∞ ≤ δ α 13: Set, u i := u j , α i := α j and p i := p j 5.3 Numerical simulations of ductile fractures Plane-strain ductility effects on fracture path in rectangular specimens The model offer a large variety of possible behaviors depending on the choice of functions a(α), b(α), w(α) and the plastic yield function f Y (τ ) considered. From now, the presentation is limited to AT 1 in Table 5.1 and Von Mises plasticity such that, f Y (σ) = ||σ|| eq -σ p where ||σ|| eq = n n-1 dev(σ) : dev(σ) and dev(σ) denotes the deviatoric stresses. Considering an isotropic material, the set of parameters to calibrate is (E, ν, σ p , σ c , ) where the Young's modulus E, the Poisson ratio ν and the plastic yield stress σ p can be easily characterized by experiments. However, σ c and are still not clear but in brittle fracture nucleation they are estimated by performing experiments on notched specimen 5.3. Numerical simulations of ductile fractures see [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF]. Hence, a parameter analysis for our model is to study influences of the ratio r Y = σ c /σ p and independently. Consider a rectangular specimen of length (L = 2) and width (H = 1) in plane strain setting, made of a sound material with the set up E = 1, ν = . 
Let first performed numerical simulations by varying the stress ratio of initial yields surfaces r Y ∈ [. [START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] with an internal length equal to = .02 smaller than the geometric parameters (L, H) and let others parameter unchanged. The damage fields obtained after failure of samples are summarized on the Figure 5.5. A transition from a straight to a slant fracture for an increasing r Y is observed similarly to the Ti glass alloy in the Figure 5.1. A higher initial yields stress ratio induces a larger plastic strain accumulation leading to a thicker damage localization strip. The measure of the fracture angle reported in Figure 5.5 does not take into account the turning crack path profile around free surfaces caused by the damage condition ∇α • ν = 0. Clearly, for the case σ c < σ p the fracture is straight and there is mostly no accumulation of plastic strain. However due to plasticity, damage is triggered along one of shears bands, resulting of a slant fracture observation in both directions but never two at the same time. Now, let us pick up one of this stress ratio r Y = 5 for instance and vary the internal length ∈ [0.02, 0.2]. The stress vs. displacement is plotted in Figure 5.6 and shows various stress jumps amplitude during the damage localization due to the snap-back intensity. This effect is well known in phase-field models to brittle fracture and pointed out by [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF]. A consequence of this brutal damage localization is a sudden drop of the stress, when this happens the energy balance is not satisfied. Continuous and discontinuous energies evolution is observed for respectively = 0.2 and = 0.02 plotted on Figure 5.7. The attentive reader may notice that the plastic energy decreases during the damage localization which contradicts the irreversibility hypothesis of the accumulation of dissipated plastic energy. Actually the plotted curve is not accurately representative of the dissipated plasticity energy but it is a combination of damage and plasticity such that a part of this energy is transformed into a surface energy contribution. Hence, those dis- Snap-shots of damage, accumulated plastic strain and damage in a deformed configuration fields are illustrated in Figure 5.9 for different loading time (a, b, c, d) shown in the Figure 5.6. The cumulated plastic strain is concentrated in few mesh elements across the surface of discontinuity (fracture center). Because damage and plasticity evolve together along this strip it is not possible to dissociate mechanism coming from pure plasticity or damage independently. It can be interpreted as a mixture of permanent deformation and voids growing with mutual cause and effects relationship. 
Plane-strain simulations on two-dimensional mild notched specimens In the sequel we restrict our scope to study fractures nucleation and propagation in ductile regime (r Y = σ c /σ p large enough) for a mild notched specimen. Experimentally this design shape samples favor fractures around the smallest cross section size. Necking is a well known instability phenomena during large deformations of a ductile material. A consequence of the necking on a specimen is a cross sectional reductions which implies a curved profile to the deformed sample. Since we are in small deformations setting, necking cannot be recovered, thus we artificially pre-notch the geometry (sketched in Figure 5.10 with the associated Table 5.2) to recover a plastic strain concentrations. For more realistic numerical simulations and comparisons with pictures of the experiments on Aluminum alloy AA 5754 Al -Mg in Figure 5.2, we set material properties (see Table 5.3) such that the internal length is in the range of grain size, σ c is chosen to recover 7% elongation and (E, ν, σ p ) are given. We assume that the material follows Von Mises perfect plasticity criteria and the elastic stress domain shrinks from σ p to the lower limit of 15% of σ p . The experiments are built such that displacements are controlled at the extremities of the plate and observations are in the sheet thickness direction. Hence, the 2d plane strain theory is adopted for numerical simulations. Also we have studied two types of boundaries conditions, clamped and rollers boundary condition respectively named set-up A and set-up B. .10: Specimen geometry with nominal dimensions, typical mesh are ten times smaller that the one illustrated above. Note that meshes in the center area of the geometry are refined with a constant size h. Also a linear growing characteristic mesh size is employed from the refined area to a coarsen mesh at the boundary. considered, such that, the set-up B provides a slant fracture shear dominating with nucleation at the center and propagation along one of the shear band, and for the set-up A, the fracture nucleates at the center, propagates along the specimen section and bifurcate following shear bands. Final crack patterns are pure shear configuration and a slant-flatslant path. Again some snap shots of damage, cumulated plastic strain and damage in deformed configuration are presented in Figure 5.12 and Figure 5.13 for respectively the set-up A and B. Time loadings highlighted by letter are reported in the stress vs. strain plot in Figure 5.11. Main phenomenon are: (a) during the pure plastic phase there is no damage and the cumulated plastic strain is the sum of two large shear bands where the maximum value is located at the center, (b) the damage is triggered on the middle and develops following shear bands as a "X" shape, (c) a macro fracture nucleates at the center but stiffness remained and the material is not broken, (d) failure of the specimen with the final crack pattern. Close similarities between pictures of ductile fracture nucleations from simulations and experimental observations can be drawn. However, we were not able to capture cup-cones fractures. To recover the desired effect we introduced a perturbation in the geometry such that the parabola shape notch is no more symmetric along the shortest cross section axis, i.e. an eccentricity is introduced by taking ρ < 1 see the Figure 5.10. In a sense there is no reason that necking induces a perfectly symmetric mild notch specimen. 
Leaving all parameters unchanged and taking ρ = .9 we observed two cracks patterns: a shear dominating and cup-cones for respectively set-up B and set-up A illustrated in Figure 5.14. This type of non-symmetric profile with respect to the shortest cross section axis implies a different stress concentration between the right and the left side of the sample which consequently leads to unbalance the plastic strain concentrations intensity on both parts. Since damage is guided by the dissipated plastic energy we have recovered this cup cones fracture with again a macro fracture has nucleated at the center. Also the set-up B with ρ = .9 is not significantly perturbed to get a new crack path but still in the shear dominating mode. Ductile fracture in a round notched bar A strength of the variational approach is that it will require no modification to perform numerical simulations in three dimensions. Also this part is devoted to recover common observations made on ductile fracture in a round notched bar such as cup-cones and shear dominating fractures shapes. The ductile fracture phenomenology for low triaxility (defined as the ratio of the hydrostatic over deviatoric stresses) have been investigated by Benzerga [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF], relevant pictures of cracks nucleation and propagation into a round bar Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage with none destructive techniques is summarized in the Figure 5.19 . Since we focus on the fracture phenomenology we do not attribute physical values to material parameter but give attentions to the yield stress ratio r Y and the internal length . The internal length governs the thickness of the localization which has to be small enough compared to the specimen radius to observe a distinct fracture. In the other sides, drives the characteristics mesh size, typically ∼ 3h which constraint the numerical cost. For clarity the cumulated plastic strain will not be shown anymore since it does not provide further information on the fracture path than the damage. Based on the above results, boundary conditions play a fundamental role in our simulations so we will consider two cases: an eccentric mild notched shape (ρ = .7) specimens in the set-up A and B respectively associated to clamped and rollers boundary conditions. Both geometries are solids of revolution (tensile axis revolution) based on the sketch Figure 5.10 and Table 5 Those simulations were performed with 48 cpus during 48 hours on a 370 000 mesh nodes for 100 time steps with the provided resources of high performance computing of Louisiana State University3 . Results of numerical simulations are shown on the Figures 5.17 The ductile fracture phenomenology is presented by Benzerga-Leblond [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF], and shows the voids growing and coalescence during the early stage of stress softening, then a macro fracture nucleates at the center end propagates following shear lips formations. Numerical simulations at the loading time (a) for the set-up A and B show a diffuse damage in the middle of the specimen which is exactly a loss of stiffness in the material. This can be interpreted as an homogenization of voids density. A sudden macro crack appears around the loading time (b) which corresponds to the observation made. 
From (b) to (c) the crack follows shear lips formation in a shear dominating or cup-cones crack patterns depending on the prescribed boundary conditions clamped (set-up A) or rollered (set-up B). These numerical examples suggest that variational phase-field models of ductile fracture are capable of predicting crack nucleation and propagation in low triaxiality specimen for the 2d plane strain specimen and round bar for a simple model considered. Conclusion In contrast with most of literature on ductile fracture we proposed a variational model by coupling gradient damage models and perfect plasticity following seminal papers of [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF]. In this chapter, we have investigated crack nucleation and propagation in multiple geometries in simple case of functions under Von Mises perfect plasticity. We confirmed observations reported elsewhere in the literature that fracture nucleates at the center of the specimen and propagates following shear bands before reaching free surfaces for low triaxiality configuration in ductile materials. Our numerical simulations also highlight that crack patterns observed is strongly dependent of the prescribed boundary conditions and geometry which leads to a plastic dissipated energy concentrations path. The strength of the proposed phase-field model is the ability to handle with both ductile and brittle fractures which mostly have been separated like oil and water. The key parameter to capture this transition is the ratio of initial yields surfaces of damage over plastic one. We show that variational phase-field models are capable of qualitative predictions of crack nucleation and propagation in a mild notch range of geometries including two and three dimensions, hence, this model is a good candidate to address the aforementioned issues. Also, the energy balance is preserved since the fracture evolution is smooth driven by and internal length. Of course, there are still many investigations to performed before claiming the superiority of the model such that, fracture nucleation at a notch of a specimen (high triaxiality) which due to the unbounded hydrostatics pressure for the plasticity criteria (Von Mises for instance) leads to hit the damage yield surface first, consequently a brittle response is attended. To get a cohesive response a possible choice of plastic yield surface is to consider a cap model closing the hydrostatic pressure in the stress space domain. Chapter 6 Concluding, remarks and recommended future work In this dissertation, we studied the phenomena of fracture in various structures using phase-field models. The phase-field models have been derived from Francfort Marigo's variational models to fracture which have been conceived as an approximation of Griffith's theory. In Chapter 1 we exposed a complete overview and main properties of the model. In Chapter 2, we applied the phase-field models to study fracture nucleation in a V-and U-notches geometries. 
Supported by numerous validations, we have demonstrated the ability of the model to make quantitative predictions of crack nucleation in mode I. The model is based on a general energy minimization principle and does not require any ad-hoc criterion; only the internal length needs to be adjusted. Moreover, the model properly accounts for size effects that cannot be recovered from Griffith-based theories. In Chapter 3 we have shown that the model extended to hydraulic fracturing satisfies Griffith's propagation criterion and handles multi-fracking scenarios without difficulty. The fracture path is dictated by the minimization principle of the total energy. A loss of crack symmetry is observed in the case of a pressurized network of parallel fractures. In Chapter 4, we focused solely on perfect elasto-plasticity models and started from the classical approach to their variational formulation. A verification of the alternate minimization technique was presented. The last chapter was devoted to combining the models exposed in the first and the fourth chapters to compute cohesive and ductile fractures. Our numerical simulations have shown the capability of the model to retrieve the main features of ductile fracture in mild-notch specimens, namely the nucleation and propagation phenomenology. We have also observed that crack paths are sensitive to the geometry and to the boundary conditions applied to it. In short, we have demonstrated that variational phase-field models address some of the vexing issues associated with brittle fracture: scale effects, nucleation, existence of a critical stress and path prediction. By a simple coupling with the well-known perfect plasticity theory, we recovered the phenomenology of ductile fracture patterns. Of course, there are still remaining issues that need to be addressed. Our numerical simulations do not enforce energy balance, as indicated by a drop of the total energy upon crack nucleation without strong singularities, illustrated in Chapter 2. Perhaps extensions to phase-field models of dynamic fracture will address this issue. Fracture in compression also remains an issue in variational phase-field models: it is not clear whether either of these models is capable of simultaneously accounting for nucleation under compression and self-contact. A recommended future work is to study ductile fracture following the spirit of Chapter 2. The idea is, by varying the yield stress ratio, to first recover the brittle initiation criterion and then study ductile fracture for different notch angles.
Combining these two statements, we deduce that there exist a_ε, b_ε, c_ε in I such that a_ε ≤ b_ε ≤ c_ε, lim_{ε→0} α_ε(a_ε) = lim_{ε→0} α_ε(c_ε) = 0 and lim_{ε→0} α_ε(b_ε) = 1; thus
∫_I [w(α_ε) + (α_ε')²] dx = ∫_{a_ε}^{b_ε} [w(α_ε) + (α_ε')²] dx + ∫_{b_ε}^{c_ε} [w(α_ε) + (α_ε')²] dx. (1.78)
Again using the identity a² + b² ≥ 2|ab|, and setting Φ(x) := ∫_0^x √(w(s)) ds, the substitution rule gives
∫_{a_ε}^{b_ε} [w(α_ε) + (α_ε')²] dx ≥ 2 |Φ(α_ε(b_ε)) − Φ(α_ε(a_ε))|, (1.80)
and since Φ(0) = 0 and Φ(1) = c_w, we obtain
∫_I [w(α_ε) + (α_ε')²] dx ≥ 2 c_w. (1.82)
Since α_ε = 1 for |x| ≤ b_ε, and d denoting the distance to J(u),
lim_{y→0} H^{n−1}({x; d(x) = y}) = 2 H^{n−1}(J(u)), (1.100)
and for the second term, using the coarea formula, |∇d| = 1 and the change of variable y = d(x),
∫_{b_ε ≤ d(x) ≤ δ} [w(α_ε(d(x))) + |∇α_ε(d(x))|²] dx = ∫_{b_ε}^{δ} [w(α_ε(y)) + α_ε'(y)²] H^{n−1}({x; d(x) = y}) dy = ∫_{b_ε}^{δ} [w(α_ε(y)) + α_ε'(y)²] s'(y) dy. (1.101)
1.4 Numerical implementation
The discrete time evolution problem is given by Definition 5 (Damage discrete evolution by local minimizers).
Figure 2.1 (left) shows the outcome of a surfing experiment on a rectangular domain Ω = [0, 5] × [−1/2, 1/2].
Figure 2.1: Mode-I "surfing" experiment along straight (left) and circular (right) paths. Dependence of the crack length and elastic energy release rate on the loading parameter for multiple values of ℓ.
Figure 2.2: Pac-Man geometry for the study of crack nucleation at a notch. Left: sketch of the domain and notation. Right: relation between the exponent of the singularity λ and the notch opening angle ω determined by the solution of equation (2.10). For any opening angle ω we apply on ∂_D Ω the displacement boundary condition obtained by evaluating on ∂_D Ω the asymptotic displacement (2.12) with λ = λ(ω).
The mode-I Pac-Man test. Consider a Pac-Man-shaped domain with radius L and notch angle ω as in Figure 2.2 (left). In linear elasticity, a displacement field associated with the stress field (2.7) is
Figure 2.3: Pac-Man test with the AT 1 model, L = 1, ℓ = 0.015, ω = 0.7π, and ν = 0.3. From left to right: typical mesh (with element size ten times larger than that in a typical simulation, for illustration purposes), damage field immediately before and after the nucleation of a crack, and plot of the energies versus the loading parameter t. Note the small damaged zone ahead of the notch tip before crack nucleation, and the energetic signature of a nucleation event.
Figure 2.4: Identification of the generalized stress intensity factor: σ_θθ(r, 0) (2πr)^{λ−1} along the domain symmetry axis for the AT 1 (left) and AT 2 (right) models with undamaged notch conditions, and sub-critical loadings. The notch aperture is ω = π/10.
Figure 2.5: Critical generalized stress intensity factor at crack nucleation as a function of the internal length ℓ for ω ≃ 0 (left) and ω ≃ π/2 (right). AT 1 -U, AT 1 -D, AT 2 -U, and AT 2 -D refer respectively to computations using the AT 1 model with damaged notch and undamaged notch boundary conditions, and the AT 2 model with damaged notch and undamaged notch boundary conditions. (K_Ic)_eff := √(G_eff E/(1 − ν²)) denotes the critical mode-I stress intensity factor modified to account for the effective toughness G_eff.
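The constant c_w = ∫_0^1 √(w(s)) ds appearing in the bound above has simple closed-form values for the two dissipation functions used in the manuscript; the short LaTeX computation below assumes the usual identifications w(α) = α for AT 1 and w(α) = α² for AT 2 (standard in the phase-field literature, and assumed here rather than quoted from the fragment).
% c_w for the two standard dissipation functions (assumed: AT1 w(a)=a, AT2 w(a)=a^2)
c_w^{\mathrm{AT}_1} = \int_0^1 \sqrt{s}\,\mathrm{d}s = \tfrac{2}{3},
\qquad
c_w^{\mathrm{AT}_2} = \int_0^1 s\,\mathrm{d}s = \tfrac{1}{2}.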
Figure 2.6: Critical generalized stress intensity factor k for crack nucleation at a notch as a function of the notch opening angle ω. Results for the AT 1 and AT 2 models with damaged (-D) and undamaged (-U) notch lips conditions. The results are obtained with numerical simulations on the Pac-Man geometry with (K_Ic)_eff = 1 and ℓ = 0.01 so that σ_c = 10, under plane-strain conditions with a unit Young's modulus and a Poisson ratio ν = 0.3.
Figure 2.7: Critical generalized stress intensity factor k_c vs. notch angle. Comparison between numerical simulations with the AT 1 and AT 2 models, with damaged and undamaged boundary conditions on the notch edges, and experiments in steel from [START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF] (top-left), and Duraluminium (top-right) and PMMA (bottom) from [START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF].
Figure 2.8: Critical generalized stress intensity factor k_c vs. notch angle and depth in PVC foam samples from [94]. Numerical simulations with the AT 1 model with damaged and undamaged notch conditions (left), and the AT 2 model with damaged and undamaged notch conditions (right).
Figure 2.10: Critical generalized stress intensity factor k_c vs. notch angle for Al_2O_3-7%ZrO_2 (left) and PMMA (right). The black markers represent all experimental results. The numerical results are obtained through the Pac-Man test using the AT 1 model. See Tables 2.8-2.9 in Appendix B for the raw data.
Figure 2.12: DENT geometry.
2.3 Size effects in variational phase-field models
Figure 2.13: Crack nucleation at U-notches. Comparison between experimental data of [92] and numerical simulations using the AT 1 (top) and AT 2 (bottom) models.
Figure 2.15: Damage field at the boundary of the hole in the elastic phase 0 < t < t_e (left), the phase with partial damage t_e < t < t_c (center), and after the nucleation of a crack t > t_c (right). Blue: α = 0, red: α = 1. The simulation is for ρ = 1.0 and a/ℓ = 5.
Figure 2.16: Normalized applied macroscopic stress t_e/σ_c at damage initiation as a function of the aspect ratio ρ for a/ℓ = 1 (left) and of the relative defect size a/ℓ for ρ = 1 and ρ = 0.1 (right).
Figure 2.17: Normalized applied macroscopic stress t_c/σ_e at crack nucleation for an elliptic cavity in an infinite plate. Left: shape effect for cavities of size much larger than the internal length (a/ℓ = 48); the solid line is the macroscopic stress at damage initiation t_e (see also Figure 2.16) and dots are the numerical results for the AT 1 model. Right: size effect for circular (ρ = 1.0) and highly elongated (ρ = 0.1) cavities.
Figure 2.18: Initiation of a crack of length 2a in a plate of finite width 2W. The numerical results (dots) are obtained with the AT 1 model for ℓ = W/25. The strength criterion and Griffith's criterion (2.18).
Figure 3.1: Sketch of the geometry (invariant). The symmetry axis is a reflection axis in 2d and a revolution axis in 3d.
3.2.2 Numerical verification of a pressurized single fracture in two and three dimensions
Figure 3.2: Evolutions of normalized p, V and l for the line fracture (left column) and the penny-shaped crack (right column). Colored dots refer to numerical results and solid black lines to the closed-form solution given in Appendix C. For the line fracture, V_c = √(4π l_0³ (G_c)_eff / E') and p_c = √(E' (G_c)_eff / (π l_0)), where E' = E/(1 − ν²) in plane strain theory and E' = E in plane stress. For the penny-shaped crack, V_c = √(8/3 π l_0⁵ (G_c)_eff / E) and p_c = √(π E (G_c)_eff / (4 l_0)).
Figure 3.3: Snapshots of damage for the line fracture example at different loadings: before the loading cycle (top), before refilling the fracture (middle) and during the propagation (bottom). The red color is fully damaged material and blue undamaged. We see the casing mesh which encapsulates the fracture.
Figure 3.4: Snapshots (view from above) of fracture damage (α ≥ .99) for the penny-shaped crack example at different loadings, that is before refilling the fracture (left) and during the propagation (right). The solid black lines are the limit of the casing.
Figure 3.6: Infinite network of parallel cracks domain (left). Domain duplications form the smallest invariant domain (right).
Figure 3.7: Domains in the deformed configuration for Ω_1, Ω_2, Ω_4 and Ω_6 respectively. The pseudo-color blue is for undamaged material and turns white when α ≤ .01 (for visibility). The full colors correspond to the numerically simulated cell domains (see Table 3.2), and the transparent colors refer to the solution rebuilt using symmetries. In all simulations only one crack propagates in the domain. Using the multiplicity, the pictures from left to right give a fracture propagation periodicity (denoted period.) of 6/6, 3/6, 1.5/6 and 1/6.
Figure 3.8: Plots of normalized variables: crack pressure, average fracture length and energy density (per Ω) vs. fluid volume density (per Ω), respectively on the top-left, top-right and bottom-right. The aperture of the longest crack for 2V/(nV_c) = 13. Colored plots are numerical results for the different domain sizes Ω_1, Ω_2, Ω_4 and Ω_6. The solid black line is the closed-form solution and the gray one the approximate solution given by Sneddon [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF].
Figure 3.10: Ratio of critical pressures (multi-fracking over single fracture) vs. the inverse of the fracture density (high density on the left of the x-axis and low density on the right). The black dashed line is r_p(ρ/n) with 1/n the periodicity. Colored lines are numerical results for a periodicity of 6/6, 3/6 and 1.5/6 respectively.
Figure 3.11: Fracture toughness vs. confining pressure for the Indiana limestone.
Figure 3.12: Schematic of the burst experiment for a jacketed bore (left). Pre- (middle) and post- (right) burst experiment photos.
Figure 3.13: Rigorous superposition of the burst problem.
Figure 3.14: Superposition of the burst problem applied in Abou-Sayed (1978).
Figure 3.15: Comparison of the normalized stress intensity factor for the jacketed and unjacketed problems, respectively denoted K_I^{J*} and K_I^{U*}, vs. the normalized crack length l. Numerically computed SIF based on the G_θ method (colored lines) overlay the plots provided by Clifton in [53].
Figure 3.16: Computed normalized SIF vs. normalized crack length l for two confining pressure ratios r = 1/8 (dashed lines) and r = 1/6 (solid lines) and various w = {3, 4, 7, 10} (colored lines).
Figure 3.17: Three possible regimes for K_I^{B*}, denoted (a), (b) and (c). l_US is a critical point from unstable to stable crack propagation, and vice versa for l_SU. The fracture does not propagate at the stop point denoted l_ST.
Figure 3.18: Computed normalized SIF vs. normalized crack length for the unconfined (left) and confined (right) burst experiments according to Table 3.3.
Figure 3.19: Colored lines are the computed normalized SIF vs. normalized crack length for unstable propagation (l_0 ≥ .5). Red markers are time-step results obtained using the phase-field model.
(4.16) By applying the supremum over all τ* ∈ K, it follows that, for τ ∈ K, τ : ṗ ≥ sup_{τ*∈K} {τ* : ṗ}. (4.17)
4.2.2 Variational formulation of perfect plasticity models
Choose q = p + h p̃ for any p̃ ∈ M^n_s. Plugging this into (4.27) and sending h → 0, then using the definition of the Gateaux derivative and of the subgradient, the stability definition leads to τ = −∂ψ/∂p(e, p) ∈ ∂H(p − p_{t_{i−1}}). (4.28)
(4.30) For any given τ*, a fixed element of K, ∫ τ* : ṗ(s) ds = τ* : (q − p).
The variational models of perfect plasticity are recast as a constrained optimization problem implemented using the SNLP solvers provided by the open source snlp. All computations were performed using the open source mef90.
Algorithm 3: Elasto-plasticity alternate minimization algorithm for the step i. 1: Let j = 0 and p_0 := p_{i−1}. 2: repeat 3: ...
Figure 4.1: The closed-form solution equations (4.37)-(4.38) are denoted by solid lines and the numerical results by dots. The top-left and top-right figures show respectively the evolution of the hydrostatic stresses and plastic strains with the loading time. The bottom figure shows the displacements for t = 2.857 along the lineout axis [(−d/2, −l/2, 0) × (d/2, l/2, l)].
5.1 Phase-field models of fracture from brittle to ductile
5.1.1 Experimental observations of ductile fractures
It is common to separate fractures into two categories, brittle and ductile, with different mechanisms. However, relevant experiments [110] on titanium-alloy glasses show a transition from a brittle to a ductile fracture response (see Figure 5.1) obtained by varying only one parameter: the concentration of Vanadium. Depending on the Vanadium quantity, for low concentrations the brutal formation of a straight crack is observed, the signature of a brittle material response. Conversely, a smooth stress-softening plateau is measured before failure for higher concentrations. The post-mortem samples show a shear-dominated fracture characteristic of ductile behavior.
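The supremum in (4.16)-(4.17) defines the plastic dissipation potential as the support function of the elastic domain K. As a reminder, the LaTeX lines below give its classical closed form for the Von Mises criterion used later in the manuscript; the normalization by the yield stress σ_p and the trace-free restriction are assumptions consistent with the surrounding text, not quotations of it.
% Support function of K (plastic dissipation potential), assuming the
% Von Mises elastic domain K = { tau : |dev tau| <= sigma_p } and
% trace-free plastic strain rates.
H(\dot p) \;=\; \sup_{\tau^* \in K} \tau^* : \dot p \;=\;
\begin{cases}
  \sigma_p\, \lVert \dot p \rVert & \text{if } \operatorname{tr}\dot p = 0,\\
  +\infty & \text{otherwise.}
\end{cases}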
Figure 5.1: Pictures produced by [110] show post-failure stretched specimens of Ti-based alloys V_x Ti_{53−x/2} Zr_{27−x/2} Cu_5 Be_{15}. From left to right: transition from brittle to ductile with a concentration of Vanadium equal to 2%, 6% and 12% respectively.
Plasticity occurs in the material once the stress reaches a critical value defined by the plastic yield function f_Y : M^{n×n}_s → R, convex and such that f_Y(0) < 0. We propose to couple the damage with the admissible stress set through the coupling function b(α), so that the stress is constrained by σ ∈ b(α)K, where K := {τ ∈ M^{n×n}_s s.t. f_Y(τ) ≤ 0} is a non-empty closed convex set. The elastic stress domain is thus subject to isotropic transformations by b(α), a state function of the damage. Naturally, to recover a stress-softening response, the coupling function b(α) is continuous and decreasing, with b(0) = 1 and b(1) = η_b, where η_b is a residual. Considering associated plasticity, the plastic potential is equal to the yield function and plastic flow occurs once the stress hits the yield surface, i.e. σ ∈ b(α)∂K. At this moment, the plastic evolution is driven by the normality rule, such that the plastic flow lies in the subdifferential of the indicator function I of b(α)K at σ, written as ṗ ∈ ∂I_{b(α)K}(σ). (5.34)
7. Plastic flow rule in the bulk: b(α_t(x)) σ_p |ṗ_t(x)| − σ_t ṗ_t(x) = 0. (5.35)
8. Damage consistency in the bulk, on the jump set and at the boundary: f_D(α_t(x), p_t(x), p̄_t(x)) α̇_t(x) = 0; [b'(α_t(x_0)) [[u(x_0)]] σ_p − 2 σ_k E α_t'(x_0)] α̇_t(x_0) = 0; α_t'(±L) α̇_t(±L) = 0. (5.36) The energy balance at the boundary: α_t'(±L) α̇_t(±L) = 0. (5.37)
9. The irreversibility, which applies everywhere in Ω: 0 ≤ α_t(x) ≤ 1, α̇_t(x) ≥ 0. (5.38)
Figure 5.3: Comparison of the computed solution (dark lines) for AT 1 (see Table 5.1) with the numerical simulation (colored lines) for the parameters E = 1, σ_p = 1, ℓ = 0.15, σ_c = 1.58, L = .5 and η_b = 0. The top-left picture shows the stress-displacement evolution; the top-right plot is the displacement jump vs. the stress during the softening behavior. The bottom-left figure shows the damage profile during the localization for three different loadings. The bottom-right one is the evolution of the energy release vs. the displacement jump, also known as the cohesive law (Barenblatt).
Algorithm 4: Alternate minimization algorithm at the step i. 1: Let j = 0, α_0 := α_{i−1} and p_0 := p_{i−1}. 2: repeat 3: Let k = 0 and p_0 := p_j ...
Figure 5.4: Rectangular specimen in tension with roller boundary conditions on the left-right extremities and stress-free on the remainder. The characteristic mesh size is h = ℓ/5.
Figure 5.5: Fracture path angle vs. the initial yield stress ratio r_Y. Transition from a straight to a slanted crack, characteristic of a brittle-to-ductile fracture transition.
Figure 5.6: Stress vs. displacement plot for σ_c/σ_p = 5, showing the influence of the internal length on the stress jump amplitude, signature of the snap-back intensity. Letters on the curve ℓ = .1 refer to loading times at which snapshots of α and p are illustrated in Figure 5.9.
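Algorithm 4 is only partially recovered above. The Python sketch below illustrates the kind of nested alternate minimization it refers to, with an outer loop on the damage and an inner loop on the displacement and plastic strain. It is a schematic sketch only: the three callables solve_displacement, solve_plasticity and solve_damage are hypothetical placeholders supplied by the user, not the actual mef90 routines.
import numpy as np

def alternate_minimization(u, alpha, p, load_step,
                           solve_displacement, solve_plasticity, solve_damage,
                           tol_alpha=1e-5, tol_p=1e-6, max_iter=500):
    """Schematic alternate minimization for one loading step of a coupled
    elasto-plastic / gradient-damage energy.  The sub-solvers are
    user-supplied callables (hypothetical placeholders)."""
    alpha_lb = alpha.copy()                      # irreversibility lower bound
    for _ in range(max_iter):                    # outer loop on the damage
        alpha_old = alpha.copy()
        for _ in range(max_iter):                # inner loop on (u, p)
            p_old = p.copy()
            u = solve_displacement(alpha, p, load_step)   # elasticity at fixed (alpha, p)
            p = solve_plasticity(u, alpha, p)             # projection onto b(alpha)*K
            if np.max(np.abs(p - p_old)) < tol_p:
                break
        alpha = solve_damage(u, p, lower_bound=alpha_lb)  # bound-constrained minimization
        if np.max(np.abs(alpha - alpha_old)) < tol_alpha:
            break
    return u, alpha, p
In practice each callable would wrap, respectively, a linear solve, a pointwise return-mapping or projection, and a bound-constrained minimization, mirroring the structure of Algorithms 3 and 4.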
Figure 5.9: Rectangular stretched specimen with roller boundary displacements for the parameters σ_c/σ_p = 5 and ℓ = .1, showing snapshots of the damage, the cumulated plastic strain and the damage in the deformed configuration (the displacement magnitude is 1%) at the different loading times (a, b, c, d) referred to in the plot of Figure 5.6. The cumulated plastic strain, defined as p̄ = ∫_0^t ||ṗ(s)|| ds, has a piecewise linear color table with two pieces, [0, 14] for the homogeneous state and [14, 600] for visibility during the localization process. Moreover the maximum value is saturated.
Figure 5.10: Specimen geometry with nominal dimensions; typical meshes are ten times smaller than the one illustrated above. Note that meshes in the center area of the geometry are refined with a constant size h, and a linearly growing characteristic mesh size is employed from the refined area to a coarser mesh at the boundary.
Figure 5.11: Plot of the stress vs. strain (tensile axis component) for the mildly notched specimen with clamped and roller interface conditions, respectively set-up A and set-up B.
Figure 5.12: Zoom in the center of the mildly notched stretched specimen with clamped boundary displacement (set-up A), showing snapshots of the damage, the cumulated plastic strain and the damage in the deformed configuration (the displacement magnitude is 1) at the different loading times (a, b, c, d) referred to in Figure 5.11. The cumulated plastic strain color table is piecewise linear with two pieces, [0, .35] for the homogeneous state and [.35, 2.5] for visibility during the localization process. Moreover the maximum value is saturated. The pseudo-color turns white when α ≥ 0.995 for the damage on the deformed configuration figure.
Figure 5.14: Zoom in the center of the eccentric mildly notched stretched specimen (ρ = .9) showing snapshots of the damage, the cumulated plastic strain and the damage in the deformed configuration (the displacement magnitude is 1) at the failure loading time, for set-ups A and B. The cumulated plastic strain color table is piecewise linear with two pieces, [0, .35] for the homogeneous state and [.35, 2.5] for visibility during the localization process. Moreover the maximum value is saturated. The pseudo-color turns white when α ≥ 0.995 for the damage on the deformed configuration figure.
The fracture patterns in Figures 5.17 and 5.18 are similar to the ones observed in the literature; see the pictures in Figures 5.15 and 5.16. An overview of the fracture evolution in the round bar is exposed in Figure 5.19.
Figure 5.15: Photo produced by [107] showing a cup-cone fracture in a post-mortem round bar.
Figure 5.16: Photo produced by [107] showing a shear-dominated fracture in a post-mortem round bar.
Figure 5.17: Snapshot of the damage in the deformed configuration for set-up A after failure, two pieces next to each other.
Figure 5.18: Snapshot of the damage in the deformed configuration for set-up B after failure, two pieces next to each other.
Figure 5.19: The picture from Benzerga-Leblond [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF] shows the phenomenology of ductile fracture in round notched bars of high-strength steel: damage accumulation, initiation of a macroscopic crack, crack growth and shear lip formation. Numerical simulations show the overlapped stress vs.
displacement curves, blue and orange, for set-up A and set-up B respectively, and snapshots of damage slices in the deformed round bar. The hot color table illustrates the damage; the red color turns white for α ≥ 0.95, which corresponds to less than 0.25% of the stiffness.
Figure 2.11: Critical load in the three- and four-point bending experiments of an Al_2O_3-7%ZrO_2 sample (left) and four-point bending of a PMMA sample (right) from [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF], compared with numerical simulations using the AT 1 model and undamaged notch boundary conditions. Due to significant variations in the measurements of the first set of experiments, each data point reported in [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] is plotted. For the PMMA experiments, average values are plotted. See Tables 2.10 and 2.11 in Appendix B for the raw data.
Table 2.3: Critical generalized stress intensity factor k for crack nucleation at a notch as a function of the notch opening angle ω, from Figure 2.5. Results for the AT 1 and AT 2 models with damaged (-D) and undamaged (-U) notch lips conditions. The results are obtained with numerical simulations on the Pac-Man geometry with (K_Ic)_eff = 1 and ℓ = 0.01 so that σ_c = 10, under plane-strain conditions with a unit Young's modulus and a Poisson ratio ν = 0.3.
ω λ k (AT 1 and AT 2 models, damaged and undamaged notch conditions)
0.0° 0.500 1.292 1.084 1.349 1.284
10.0° 0.500 1.308 1.091 1.328 1.273
20.0° 0.503 1.281 1.121 1.376 1.275
30.0° 0.512 1.359 1.186 1.397 1.284
40.0° 0.530 1.432 1.306 1.506 1.402
50.0° 0.563 1.636 1.540 1.720 1.635
60.0° 0.616 2.088 1.956 2.177 2.123
70.0° 0.697 2.955 2.704 3.287 3.194
80.0° 0.819 4.878 4.391 5.629 5.531
85.0° 0.900 6.789 5.890 7.643 7.761
89.9° 0.998 9.853 8.501 9.936 9.934
Table 2.4: Generalized critical stress intensity factors as a function of the notch aperture in soft annealed tool steel (AISI O1 at −50 °C). Experimental measurements from [START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF] using SENT and TPB compared with Pac-Man simulations with the AT 1 model.
2ω Mat k_c(exp) stdev | k_c(num) rel. error (undamaged notch) | k_c(num) rel. error (damaged notch)
0° H80 0.14 0.01 0.18 22.91 % 0.15 5.81 %
0° H100 0.26 0.02 0.34 24.62 % 0.28 7.61 %
0° H130 0.34 0.01 0.44 29.34 % 0.36 5.09 %
0° H200 0.57 0.02 0.74 47.60 % 0.61 6.53 %
90° H80 0.20 0.02 0.22 12.65 % 0.21 4.73 %
90° H100 0.36 0.02 0.41 12.29 % 0.38 4.10 %
90° H130 0.49 0.05 0.54 11.33 % 0.50 0.50 %
90° H200 0.81 0.08 0.91 20.54 % 0.83 2.21 %
140° H80 0.53 0.06 0.53 0.37 % 0.48 9.26 %
140° H100 0.89 0.04 0.92 3.43 % 0.84 5.91 %
140° H130 1.22 0.10 1.25 2.95 % 1.13 7.48 %
140° H200 2.02 0.14 2.07 4.92 % 1.89 6.80 %
155° H80 0.86 0.07 0.83 3.63 % 0.75 14.36 %
155° H100 1.42 0.08 1.42 0.14 % 1.29 10.63 %
155° H130 1.90 0.10 1.95 2.82 % 1.76 8.06 %
155° H200 3.24 0.15 3.23 0.89 % 2.92 11.02 %
Table 2.5: Generalized critical stress intensity factors as a function of the notch aperture in Divinycell® PVC foam. Experimental measurements from [94] using four-point bending compared with Pac-Man simulations with the AT 1 model.
Table 2.6: Generalized critical stress intensity factors as a function of the notch aperture in Duraluminium. Experimental measurements from [START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF] using single edge notch tension compared with Pac-Man simulations with the AT 1 model.
ω Type k_c(exp) stdev | k_c(num) rel. error (undamaged notch) | k_c(num) rel.
error (damaged notch)
10° DENT 1.87 0.03 2.50 25.29 % 2.07 10.03 %
20° DENT 1.85 0.03 2.53 26.89 % 2.13 12.97 %
30° DENT 2.17 0.03 2.65 18.17 % 2.33 6.92 %
40° DENT 2.44 0.02 3.07 20.65 % 2.73 10.70 %
50° DENT 3.06 0.05 3.94 22.31 % 3.54 13.63 %
60° DENT 4.35 0.18 5.95 26.97 % 5.41 19.69 %
70° DENT 8.86 0.18 11.18 20.74 % 10.10 12.26 %
80° DENT 28.62 0.68 27.73 3.20 % 24.55 16.56 %
90° DENT 104.85 10.82 96.99 8.11 % 85.37 22.82 %
Table 2.7: Generalized critical stress intensity factors as a function of the notch aperture in PMMA. Experimental measurements from [165] using single edge notch tension compared with Pac-Man simulations with the AT 1 model.
Table 2.8: Generalized critical stress intensity factors as a function of the notch aperture in aluminium oxide ceramics. Experimental measurements from [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] using three- and four-point bending compared with Pac-Man simulations.
2ω a/h k_c(exp) stdev | k_c(num) rel. error (undamaged notch) | k_c(num) rel. error (damaged notch)
60° 0.1 1.41 0.02 1.47 4.5% 1.29 9.3%
60° 0.2 1.47 0.04 1.47 0.4% 1.29 14.0%
60° 0.3 1.28 0.03 1.47 13.0% 1.29 0.4%
60° 0.4 1.39 0.04 1.47 5.8% 1.29 7.8%
90° 0.1 2.04 0.02 1.98 3.0% 1.81 12.9%
90° 0.2 1.98 0.01 1.98 0.0% 1.81 9.6%
90° 0.3 2.08 0.03 1.98 5.1% 1.81 15.2%
90° 0.4 2.10 0.03 1.98 5.9% 1.81 16.1%
120° 0.1 4.15 0.02 3.87 7.3% 3.63 14.3%
120° 0.2 4.03 0.06 3.87 4.2% 3.63 11.0%
120° 0.3 3.92 0.18 3.87 1.4% 3.63 8.0%
120° 0.4 3.36 0.09 3.87 13.0% 3.63 7.4%
Table 2.9: Generalized critical stress intensity factors as a function of the notch aperture in PMMA. Experimental measurements from [71] using three- and four-point bending compared with Pac-Man simulations. The value a/h refers to the ratio of the depth of the notch over the sample thickness. See Figure 2.9 for geometry and loading.
Table 2.10: Critical load reported in [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] using three- and four-point bending experiments of an Al_2O_3-7%ZrO_2 sample compared with numerical simulations using the AT 1 model and undamaged notch boundary conditions. TPB and FPB refer respectively to three-point bending and four-point bending. See Figure 2.9 for geometry and loading.
2ω a/h P_c(exp) [N] stdev P_c(num) [N] rel. error
60° 0.1 608.50 6.69 630.81 3.5%
60° 0.2 455.75 12.48 451.51 0.9%
60° 0.3 309.00 8.19 347.98 11.2%
60° 0.4 258.75 6.61 268.69 3.7%
90° 0.1 687.33 5.19 668.69 2.8%
90° 0.2 491.00 2.94 491.41 0.1%
90° 0.3 404.33 5.44 383.33 5.5%
90° 0.4 316.00 4.24 297.48 6.2%
120° 0.1 881.75 4.60 822.22 7.2%
120° 0.2 657.25 9.36 632.32 3.9%
120° 0.3 499.60 25.41 499.50 0.0%
120° 0.4 336.25 9.09 386.87 13.1%
Table 2.11: Load at failure reported in
The total energy is formulated for every x ∈ Ω and every t by E_t(u, p) = ∫_Ω [ 1/2 A(e(u) − p) : (e(u) − p) + ∫_0^t sup_{τ∈K} {τ : ṗ(s)} ds ] dx.
4.3.1 Numerical implementation of perfect plasticity models
Consider the same problem with stress conditions at the boundary and a free energy of the form ψ(e(u), p) = 1/2 A(e(u) − p) : (e(u) − p). Before the critical loading the response is elastic, and the stress, strain and displacement fields are given by the closed-form expressions (4.37) plotted in Figure 4.1. After the critical loading, permanent deformation takes place in the structure and the solution is given by (4.38).
Table 5.1: Variety of possible models, where c_1, c_2 are constants.
Table 5.2: Specimen dimensions. All measures are in [mm]. The internal length is specified in Table 5.3.
We observed two patterns of ductile fracture depending on the boundary condition.
E [GPa] = 70, ν = .33, σ_p [MPa] = 100, σ_c [GPa] = 2, ℓ [µm] = 400
Table 5.3: Material parameters used for AA 5754 Al-Mg.
[START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF]. The smallest damageable plastic yield surface is given for 5% of σ_p.
L H W r D d l h: 4.5 2.2 1.05 .5 1.09 0.98 0.82 ℓ/2.5
Table 5.4: Specimen dimensions. For the internal length refer to Table 5.5.
E ν σ_p r_Y: 1 .3 1 12 .03
Table 5.5: Parameters used for 3d simulations.
Footnotes: Karush-Kuhn-Tucker. mef90 is available at https://www.bitbucket.org/bourdin/mef90-sieve; gradient-damage at https://bitbucket.org/cmaurini/gradient-damage; snlp at http://abs-5.me.washington.edu/snlp/ and at https://bitbucket.org/bourdin/snlp; see also https://en.wikipedia.org/wiki/Pac-Man and http://www.hpc.lsu.edu.
problem. Perhaps extensions to phase-field models of dynamic fracture will address this issue. Fracture in compression also remains an issue in variational phase-field models. Although several approaches have been proposed, typically consisting in splitting the strain energy into damage-inducing and non-damage-inducing terms, neither of the proposed splits is fully satisfying (see [START_REF] Amor | Regularized formulation of the variational brittle fracture with unilateral contact: Numerical experiments[END_REF][START_REF] Lancioni | The variational approach to fracture: A practical application to the french Panthéon[END_REF][START_REF] Li | Gradient Damage Modeling of Dynamic Brittle Fracture[END_REF] for instance). In particular, it is not clear whether either of these models is capable of simultaneously accounting for nucleation under compression and self-contact. Finally, even though a significant amount of work has already been invested in extending the scope of phase-field models of fracture beyond perfectly brittle materials, to our knowledge none of the proposed extensions has demonstrated its predictive power yet.
Appendix C Single fracture in an infinite domain
Line fracture (2d domain): The volume of a line fracture in a 2d domain is V = 2π p l²/E', where E' = E/(1 − ν²) in plane strain and E' = E in plane stress theory. Before the start of propagation, l = l_0 and the fluid pressure in this regime is p = E'V/(2π l_0²). If we consider an existing line fracture with an initial length of l_0.
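The display equations of Appendix C did not survive the extraction in this and the following passage. The LaTeX lines below give the Sneddon-type closed form they refer to for the plane-strain line fracture, reconstructed from the normalizations quoted in the caption of Figure 3.2; the exact constants are an assumption to be checked against the original manuscript.
% Pressurized line fracture of half-length l, with E' = E/(1-nu^2) in plane
% strain and E' = E in plane stress (reconstructed Sneddon-type relations).
V = \frac{2\pi p\, l^2}{E'}, \qquad
p_c = \sqrt{\frac{E' (G_c)_{\mathrm{eff}}}{\pi l_0}}, \qquad
V_c = \sqrt{\frac{4\pi l_0^3 (G_c)_{\mathrm{eff}}}{E'}},
\qquad
p(V) = \left(\frac{2 E' (G_c)_{\mathrm{eff}}^2}{\pi V}\right)^{1/3},\quad
l(V) = \left(\frac{E' V^2}{4\pi (G_c)_{\mathrm{eff}}}\right)^{1/3}
\quad (V \ge V_c).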
Prior to fracture propagation, the fracture length does not change, so that l = l_0. Since the fracture length at the onset of propagation is l_0, the critical fluid pressure [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] is given by (3.16). The critical fracture volume at the critical fluid pressure is obtained by substituting (3.16) into (3.14). During quasi-static propagation of the fracture, l ≥ l_0 and the fracture is always in a critical state, so that (3.16) applies. Therefore, the fluid pressure and the fracture length in this regime are determined accordingly.
Chapter 4 Variational models of perfect plasticity
Elasto-plasticity is a branch of solid mechanics which deals with permanent deformation in a structure once the stress has reached a critical value at the macroscopic level. This topic is a vast research area and it is impossible to cover all contributions. We will focus on recalling basic mathematical and numerical aspects of perfect elasto-plasticity in small strain theory for quasi-static evolution problems. Perfect elasto-plastic materials fall into the theory of generalized standard materials developed by [START_REF] Halphen | Sur les matériaux standard généralisés[END_REF][START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF][START_REF] Salençon | Elasto-plasticité[END_REF][START_REF] Marigo | From clausius-duhem and drucker-ilyushin inequalities to standard materials[END_REF][START_REF] Mielke | A mathematical framework for generalized standard materials in the rate-independent case[END_REF]. Recently, a modern formalism of perfect plasticity arose [START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF][START_REF] Solombrino | Quasistatic evolution problems for nonhomogeneous elastic plastic materials[END_REF][START_REF] Babadjian | Quasi-static evolution in nonassociative plasticity: the cap model[END_REF][START_REF] Francfort | Small-strain heterogeneous elastoplasticity revisited[END_REF][START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF]; the idea is to discretize in time and find local minimizers of the total energy. In this chapter we focus only on perfect elasto-plastic materials and set aside the damage. We start with the concepts of generalized standard materials in Section 4.1. Then, using some convex analysis [START_REF] Ekeland | Convex analysis and variational problems[END_REF][START_REF] Temam | Mathematical problems in plasticity[END_REF], we show the equivalence with the variational formulation presented in Section 4.2. The last part, Section 4.3, presents an algorithm to solve evolution problems for perfect elasto-plastic materials. A numerical verification example is detailed at the end of the chapter.
Ingredients for generalized standard plasticity models
For the moment we set aside the evolution problem and focus on the main ingredients needed to construct standard elasto-plasticity models [START_REF] Germain | Continuum thermodynamics[END_REF][START_REF] Quoc | Stability and nonlinear solid mechanics[END_REF][START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF]. This theory requires a choice of internal variables, together with a recoverable potential energy and a dissipation potential, both functionals being convex. The driving forces (conjugate variables), usually the stress and the thermodynamical force, derive respectively from the elastic and dissipation potential energies.
For smooth evolutions of the internal variables, the material response is dictated by the normality rule associated with the convex set of the dissipation potential (the flow rule). By doing so, it is equivalent to find global minimizers of the total energy, sum of the elastic energy and of the dissipation potential. Consider that our material has a perfect elasto-plastic response and can be modeled by the generalized standard materials theory, which is based on two statements.
In all of these experiments, the main reported observations on fracture nucleation are: (i) formation of shear bands in an "X" shape, intensified by necking effects, (ii) void growth and coalescence, (iii) macro-crack nucleation at the center of the specimen, (iv) propagation of the macro-crack, either straight along the cross section or following shear bands depending on the experiment, and (v) failure of the sample when the fracture reaches the external free surfaces, stepping behind the shear band path. The observed fracture shapes are mostly cup-cone or shear-dominated. The aforementioned ductile features will be investigated throughout this chapter by considering similar geometries, namely rectangular samples, round notched specimens in plane strain conditions and round bars. The pioneers of ductile fracture modeling are Dugdale [START_REF] Dugdale | Yielding of steel sheets containing slits[END_REF] and Barenblatt [START_REF] Barenblatt | The mathematical theory of equilibrium of cracks in brittle fracture[END_REF] with their contributions on cohesive fracture following Griffith's idea. Later on, a modern branch focused on micro-void nucleation and coalescence as the driving mechanism of ductile fracture. Introduced by Gurson [START_REF] A L Gurson | Continuum Theory of Ductile Rupture by Void Nucleation and Growth: Part I -Yield Criteria and Flow Rules for Porous Ductile Media[END_REF], a yield surface criterion evolves with the micro-void porosity density. Then came different improved and modified versions of this criterion, Gurson-Tvergaard-Needleman (GTN) [START_REF] Tvergaard | Material failure by void growth to coalescence[END_REF][START_REF] Tvergaard | Analysis of the cup-cone fracture in a round tensile bar[END_REF][START_REF] Needleman | An analysis of ductile rupture in notched bars[END_REF], Rousselier [START_REF] Rousselier | Ductile fracture models and their potential in local approach of fracture[END_REF], Leblond [START_REF] Leblond | An improved gurson-type model for hardenable ductile metals[END_REF], to give a non-exhaustive list. The idea of coupling phase-field models of brittle fracture with plasticity to recover cohesive fractures is not new and has been developed theoretically and numerically in [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Conti | Phase field approximation of cohesive fracture models[END_REF][START_REF] Ambati | A phase-field model for ductile fracture at finite strains and its experimental verification[END_REF][START_REF] Ambati | Phase-field modeling of ductile fracture[END_REF][START_REF] Crismale | Viscous approximation of quasistatic evolutions for a coupled elastoplastic-damage model[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. a variational gradient-extended plasticity-damage theory[END_REF][START_REF] Wadier | Mécanique de la rupture fragile en présence de plasticité : modélisation de la fissure par une entaille[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains.
a variational gradient-extended plasticity-damage theory[END_REF].
Gradient damage models coupled with perfect plasticity
Our model is built on the basis of perfect plasticity and of gradient damage models, which have proved to be efficient to predict crack initiation and propagation in brittle materials. Both mature models have been developed separately and are expressed in the variational formulation in the spirit of [START_REF] Mielke | Evolution of rate-independent systems[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Piero | Variational Analysis and Aerospace Engineering, volume 33 of Optimization and Its Applications[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF], which provides a fundamental framework.
(5.11) Passing E_{t_i}(u_i, α_i, p_i) to the left-hand side, dividing by h and letting h → 0, we obtain (5.12). By integrating by parts the integral term in e(v) over Ω \ (J(ζ_i) ∪ J(v)), we get (5.13), where σ_i = a(α_i)A(e(u_i) − p_i). Without plasticity there is no cohesive effect, hence σ_i ν = 0, and the non-interpenetration condition leads to [[u_i]] • ν ≥ 0 on J(ζ_i); however, for a general cohesive model we have no information on σ_i ν on J(ζ_i). To overcome this issue we restrict our study to materials with tr(p_i) = 0; consequently, on the jump set J(ζ_i) the displacement jump has no normal component. The material can only shear along J(ζ_i), which is commonly accepted for the Von Mises and Tresca plasticity criteria. Thus, we have [[v]] • ν = 0 on J(ζ_i) and naturally σ_i ν = 0 on J(v). The last term of (5.13) can be rewritten accordingly. Combining this with (5.12) and (5.13), considering J(v) = ∅, and using a standard localization argument, i.e. taking v concentrated around an H^{n−1} set and zero almost everywhere, we obtain that all the remaining integrals must vanish, which leads to the equilibrium equations and the prescribed boundary conditions. Note that the normal stress σ_i ν is continuous across J(ζ_i), but the tangential component might be discontinuous.
2. Plastic yield criterion on the jump set: Since the above equation (5.15) holds, taking J(v) = ∅ in (5.12) gives an inequality that must hold at each point of the jump set. Considering the Von Mises criterion on the left-hand side, taking the maximum over all directions ν with |ν| = 1, and letting σ_i = a(α_i)ς_i, (5.17) becomes a condition that is automatically satisfied for Von Mises since a(α_i)/b(α_i) ≤ 1. We refer the reader to [START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Francfort | The elastoplastic exquisite corpse: A suquet legacy[END_REF][START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF] for more details.
3.
Damage yield criterion in the bulk: Taking v = 0 and q = 0, thus J(v) = ∅, in the optimality condition (5.9), so that E_{t_i}(u_i, α_i + hβ, p_i) ≥ E_{t_i}(u_i, α_i, p_i), then dividing by h and passing to the limit, we get, after integrating by parts the ∇α·∇β term over Ω \ J(ζ_i), an inequality that holds for any β ≥ 0; hence all contributions must be non-negative, which gives the bulk condition (5.19) in Ω \ J(ζ_i). The damage yield criterion is composed of the classical part coming from gradient damage models and of a coupling part in b'(α). When the material remains undamaged and plasticity occurs, the accumulation of dissipated plastic energy, combined with the property that b'(α) < 0, decreases the left-hand side, which becomes an equality at a critical plastic dissipation. At this moment the damage is triggered.
4. Damage yield criterion on the jump set: From (5.18), the gradient of the damage is discontinuous across the jump set J(ζ_i) due to the plastic strain concentration, and vice versa.
5. Damage boundary condition: It also follows from (5.18).
6. Plastic yield criterion in the bulk: Take v = 0 and β = 0, thus J(v) = ∅, in the optimality condition (5.9). Since ψ is differentiable, letting h → 0 and applying the subgradient definition to (5.22), we get −∂ψ/∂p_i ∈ b(α_i)∂H(p_i − p_{i−1}). We recover the admissible stress constraint provided by the plastic yield surface: the damage state shrinks the plastic yield surface, leading to a stress-softening property.
7. Flow rule in the bulk: Applying the convex conjugate (Legendre-Fenchel transform) to the above relation, we get the flow rule in a discrete setting; by letting max(t_i − t_{i−1}) → 0 we recover the time-continuous one.
8. Damage consistency: The damage consistency is recovered using the energy balance condition, which is not fully exposed here.
9. Damage irreversibility in the domain: The damage irreversibility constraint is 0 ≤ α_i ≤ 1 with α_i ≥ α_{i−1}.
All of these conditions are governing laws of the problem. The evolution of the yield surfaces is given by equations (5.19) and (5.23).
Application to a 1d setting
The goal of this section is to apply the gradient damage model coupled with perfect plasticity in a 1d setting by considering a bar in traction. Relevant results are obtained through this example, such as the evolution of the two yield functions, the damage localization process, and the role of the damage gradient jump term which governs the displacement jump set. We refer the reader to Alessi-Marigo [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] for a complete exposition of this 1d application. In the sequel, we consider a one-dimensional evolution problem for a homogeneous elasto-plastic-damageable bar Ω = [−L, L] stretched by time-controlled displacements at its boundaries, where the damage remains equal to zero.
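The display equation (5.19) invoked in item 3 is missing from the extracted text. For readability, the LaTeX line below sketches the generic form such a bulk damage criterion takes in the Alessi-Marigo-type models the chapter builds on, with stiffness function a(α), dissipation w(α) and plastic coupling b(α); the signs and normalization constants are assumptions, not a quotation of the manuscript.
% Schematic bulk damage yield criterion (assumed form, up to normalization constants):
\frac{1}{2}\, a'(\alpha)\, A\bigl(e(u)-p\bigr) : \bigl(e(u)-p\bigr)
\;+\; w'(\alpha) \;-\; 2\, w_1 \ell^2\, \Delta \alpha
\;+\; b'(\alpha)\, \sigma_p\, \bar p \;\;\ge\; 0 .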
Assume that a unique displacement jump may occur in the bar, located at the coordinate x_0; the admissible displacement, damage and plastic strain sets are defined accordingly. The state variables of the sound material satisfy the initial condition (u_0, α_0, p_0) = (0, 0, 0). In the one-dimensional setting the plastic yield criterion is |τ| ≤ σ_p, which gives the plastic potential power. By integrating over the process, the dissipated plastic energy density is σ_p p̄, where the cumulated plastic strain is p̄ = ∫_0^t |ṗ_s| ds. Since no external force is applied, the total energy of the bar is the sum of the elastic energy, the local and gradient damage dissipation terms and the plastic dissipation, where E is the Young's modulus and (•)' = ∂(•)/∂x. The quadruple of state variables (u_t, α_t, p_t, p̄_t) ∈ C_t × D × Q × M(Ω, R) is a solution of the evolution problem if the following conditions hold: 1. The equilibrium: σ_t'(x) = 0, so the stress is constant along the bar and is only a function of time.
Title: Variational phase-field models for brittle and ductile fracture: nucleation and propagation
Keywords: phase-field models of fracture, crack nucleation, size effects in brittle materials, gradient damage models, hydraulic fracturing, crack stability, plasticity models, variational approach, ductile fracture.
Abstract: Numerical simulations of brittle fracture with gradient damage models are now becoming widespread. Theoretical and numerical results show that, in the presence of a pre-existing crack, propagation follows Griffith's criterion, whereas for the one-dimensional problem crack nucleation occurs at the critical stress; this last property sets the internal length parameter. In this work, we focus on the phenomenon of crack nucleation for commonly encountered geometries which do not admit analytical solutions. We show that for U- and V-notches the crack initiation varies continuously between the solution predicted by the critical stress and the one given by the material toughness. A series of verifications and validations on different materials is carried out for the two geometries considered. We then consider an elliptical defect in an infinite or very slender domain to illustrate the ability of the model to account for material and structural size effects. In a second part, this model is extended to hydraulic fracturing. A first verification phase is performed by stimulating a single pre-existing crack through the injection of a given quantity of fluid. We then study the simulation of a parallel network of cracks. The results obtained show that a single crack is activated in this network and that this type of configuration satisfies the principle of least energy. The last example focuses on crack stability in the context of a pressure-driven burst experiment for the oil industry. This rock burst experiment is carried out in the laboratory in order to reproduce the confinement conditions encountered during drilling. The last part of this work focuses on ductile fracture by coupling the phase-field model with perfect plasticity models. Thanks to the variational structure of the problem, we describe the numerical implementation adopted for parallel computing.
The simulations performed show that, for a mildly notched geometry, the phenomenology of ductile fracture, for instance nucleation and propagation, is in agreement with what is reported in the literature.
Title: Variational phase-field models from brittle to ductile fracture: nucleation and propagation
Keywords: Phase-field models of fracture, crack nucleation, size effects in brittle materials, validation & verification, gradient damage models, hydraulic fracturing, crack stability, plasticity model, variational approach, ductile fracture
Abstract: Phase-field models, sometimes referred to as gradient damage models, are widely used methods for the numerical simulation of crack propagation in brittle materials. Theoretical results and numerical evidence show that they can predict the propagation of a pre-existing crack according to Griffith's criterion. For a one-dimensional problem, it has been shown that they can predict nucleation upon a critical stress, provided that the regularization parameter is identified with the material's internal characteristic length. In this work, we draw on numerical simulations to study crack nucleation in commonly encountered geometries for which closed-form solutions are not available. We use U- and V-notches to show that the nucleation load varies smoothly from the one predicted by a strength criterion to the one of a toughness criterion when the strength of the stress concentration or singularity varies. We present validation and verification of numerical simulations for both types of geometries. We consider the problem of an elliptic cavity in an infinite or elongated domain to show that variational phase-field models properly account for structural and material size effects. In a second part, this model is extended to hydraulic fracturing. We present a validation of the model by simulating a single fracture in a large domain subject to a controlled amount of fluid. Then we study an infinite network of pressurized parallel cracks. Results show that the stimulation of a single fracture is the best energy minimizer compared to the multi-fracking case. The last example focuses on fracture stability regimes, using linear elastic fracture mechanics, for pressure-driven fractures in an experimental geometry used in the petroleum industry which replicates a situation encountered downhole in a borehole, called the burst experiment. The last part of this work focuses on ductile fracture by coupling phase-field models with perfect plasticity. Based on the variational structure of the problem, we give a numerical implementation of the coupled model for parallel computing. Simulation results for mildly notched specimens are in agreement with the phenomenology of ductile fracture, such as the nucleation and propagation commonly reported in the literature.
295,031
[ "1306573" ]
[ "1167" ]
01758434
en
[ "info" ]
2024/03/05 22:32:10
2015
https://inria.hal.science/hal-01758434/file/371182_1_En_22_Chapter.pdf
Edirlei Soares De Lima Antonio L Furtado email: [email protected] Bruno Feijó email: [email protected] Storytelling Variants: The Case of Little Red Riding Hood Keywords: Folktales, Variants, Types and Motifs, Semiotic Relations, Digital Storytelling, Plan Recognition A small number of variants of a widely disseminated folktale is surveyed, and then analyzed in an attempt to determine how such variants can emerge while staying within the conventions of the genre. The study follows the classification of types and motifs contained in the Index of Antti Aarne and Stith Thompson. The paper's main contribution is the characterization of four kinds of type interactions in terms of semiotic relations. Our objective is to provide the conceptual basis for the development of semi-automatic methods to help users compose their own narrative plots. Introduction When trying to learn about storytelling, in order to formulate and implement methods usable in a computer environment, two highly influential approaches come immediately to mind, both dealing specifically with folktales: Propp's functions [START_REF] Propp | Morphology of the Folktale[END_REF] and the comprehensive classification of types and motifs proposed by Antti Aarne and Stith Thompson, known as the Aarne-Thompson Index (henceforth simply the Index) [START_REF] Aarne | The Types of the Folktale[END_REF][START_REF] Thompson | The Folktale[END_REF][START_REF] Uther | The Types of International Folktales[END_REF]. In previous work, as part of our Logtell project [START_REF] Ciarlini | Modeling interactive storytelling genres as application domains[END_REF][START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF], we developed prototypes to compose narrative plots interactively, employing a plan-generation algorithm based on Propp's functions. Starting from different initial states, and giving users the power to intervene in the generation process, within the limits of the conventions of the genre at hand, we were able to obtain in most cases a fair number of different plots, thereby achieving an encouraging level of variety in plot composition. We now propose to invest in a strategy that is based instead on the analysis of already existing stories. Though we shall focus on folktales, an analogous conceptual formulation applies to any genre strictly regulated by conventions and definable in terms of fixed sets of personages and characteristic events. In all such genres one should be able to pinpoint the equivalent of Proppian functions, as well as of ubiquitous types and motifs, thus opening the way to the reuse of previously identified narrative patterns as an authoring resource. Indeed it is a well-established fact that new stories often emerge as creative adaptations and combinations of old stories: this is a most common practice among even the best professional authors, though surely not easy to trace in its complex ramifications, as eloquently expressed by the late poststructuralist theoretician Roland Barthes [3, p. 39]: Any text is a new tissue of past citations. Bits of code, formulae, rhythmic models, fragments of social languages, etc., pass into the text and are redistributed within it, for there is always language before and around the text.
Intertextuality, the condition of any text whatsoever, cannot, of course, be reduced to a problem of sources or influences; the intertext is a general field of anonymous formulae whose origin can scarcely ever be located; of unconscious or automatic quotations, given without quotation marks. The present study utilizes types and motifs of the Aarne-Thompson Index, under whose guidance we explore what the ingenuity of supposedly unschooled narrators has legated. We chose to concentrate on folktale type AT 333, centered on The Little Red Riding Hood and spanning some 58 variants (according to [START_REF] Tehrani | The Philogeny of Little Red Riding Hood[END_REF]) from which we took a small sample. The main thrust of the paper is to investigate how such a rich diversity of variants of traditional folktales came to be produced, as they were told and retold by successive generations of oral storytellers, hoping that some of their tactics are amenable to semi-automatic processing. An added incentive to work with folktale variants is the movie industry's current interest in adaptations of folktales for adult audiences, in contrast to early Disney classic productions. Related work is found in the literature of computational narratology [START_REF] Cavazza | Narratology for Interactive Storytelling: A Critical Introduction[END_REF][START_REF] Mani | Computational Narratology[END_REF], a new field that examines narratology from the viewpoint of computation and information processing, which offers models and systems based on tale types/motifs that can be used in story generation and/or story comparison. Karsdorp et al. [START_REF] Karsdorp | In Search of an Appropriate Abstraction Level for Motif Annotations[END_REF] believe that oral transmission of folktales happens through the replication of sequences of motifs. Darányi et al. [START_REF] Darányi | Toward Sequencing 'Narrative DNA': Tale Types, Motif Strings and Memetic Pathways[END_REF] handle motif strings like chromosome mutations in genetics. Kawakami et al. [START_REF] Kawakami | On Modeling Conceptual and Narrative Structure of Fairytales[END_REF] cover 23 Japanese texts of Cinderella tales, whilst Swartjes et al. use Little Red Riding Hood as one of their examples [START_REF] Swartjes | Iterative authoring using story generation feedback: debugging or co-creation?[END_REF]. Our text is organized as follows. Section 2 presents the two classic variants of AT 333. Section 3 summarizes additional variants. Section 4 has our analysis of the variant-formation phenomenon, with special attention to the interaction among types, explained in terms of semiotic relations. Section 5 describes a simple plan-recognition prototype working over variant libraries. Section 6 contains concluding remarks. The full texts of the variants cited in the text are available in a separate document.1
2 The two classic variants
In the Index, the type of interest, AT 333, characteristically named The Glutton, is basically described as follows, noting that two major episodes are listed [1, p. 125]: The wolf or other monster devours human beings until all of them are rescued alive from his belly. I. Wolf's Feast. By masking as mother or grandmother the wolf deceives and devours a little girl whom he meets on his way to her grandmother's. II. Rescue. The wolf is cut open and his victims rescued alive; his belly is sewed full of stones and he drowns, or he jumps to his death.
The first classic variant, Le Petit Chaperon Rouge (Little Red Riding Hood), was composed in France in 1697, by Charles Perrault [START_REF] Perrault | Little Red Riding Hood[END_REF], during the reign of Louis XIV th . It consists of the first episode alone, so that there is no happy ending, contrary to what children normally expect from nursery fairy tales. The little girl, going through the woods to see her grandmother, is accosted by the wolf who reaches the grandmother's house ahead of her. The wolf kills the grandmother and takes her place in bed. When the girl arrives, she is astonished at the "grandmother"'s large, ears, large eyes, etc., until she asks about her huge teeth, whereat the wolf gobbles her up. Following a convention of the genre of admonitory fables, a "moralité" is appended, to the effect that well-bred girls should not listen to strangers, particularly when they pose as "gentle wolves" The second and more influential classic variant is that of the brothers Grimm (Jacob and Wilhelm), written in German, entitled Rotkäppchen (Little Red Cap) [START_REF] Grimm | The Complete Grimm's FairyTales[END_REF], first published in 1812. The girl's question about the wolf's teeth is replaced by: "But, grandmother, what a dreadful big mouth you have!" This is a vital changenot being bitten, the victims are gobbled up aliveand so the Grimm variant can encompass the two episodes prescribed for the AT 333 type. Rescue is effected by a hunter, who finds the wolf sleeping and cuts his belly, allowing girl and grandmother to escape. The wolf, his belly filled with heavy stones fetched by the girl, wakes up, tries to run away and falls dead, unable to carry the weight. As a moral addendum to the happy ending, the girl promises to never again deviate from the path when so ordered by her mother. Having collected the story from two distinct sources, the brothers wrote a single text with a second finale, wherein both female characters show that they had learned from their experience with the villain. A second wolf comes in with similar proposals. The girl warns her grandmother who manages to keep the animal outside, and eventually they cause him to fall from the roof into a trough and be drowned. Some other variants In [START_REF] Tehrani | The Philogeny of Little Red Riding Hood[END_REF] no less than 58 folktales were examined as belonging to type AT 333 (and AT 123). Here we shall merely add seven tales to the classic ones of the previous section. Since several variants do not mention a red hood or a similar piece of clothing as attribute of the protagonist, the conjecture was raised that this was Perrault's invention, later imitated by the Grimms. However a tale written in Latin by Egbert de Liège in the 11 th century, De puella a lupellis seruata (About a Girl Saved from Wolf Cubs) [START_REF] Ziolkowski | A Fairy Tale from before Fairy Tales: Egbert of Liège's 'De puella a lupellis seruata' and the Medieval Background of 'Little Red Riding Hood[END_REF], arguably prefiguring some characteristics of AT 333, features a red tunic which is not merely ornamental but plays a role in the events. The girl had received it as a baptismal gift from her godfather. When she was once captured by a wolf and delivered to its cubs to be eaten, she suffered no harm. The virtue of baptism, visually represented by the red tunic, gave her protection. The cubs, their natural ferocity sub-dued, gently caressed her head covered by the tunic. 
The moral lesson, in this case, is consonant with the teaching of the Bible (Daniel VI, 27). Whilst in the variants considered so far the girl is presented as naive, in contrast to the clever villain, the situation is reversed in the Conte de la Mère-grand (The Story of Grandmother), collected by folklorist Achille Millien in the French province of Nivernais, circa 1870, and later published by Paul Delarue [START_REF] Delarue | The Story of Grandmother[END_REF]. In this variant, which some scholars believe to be closer to the primitive oral tradition, the villain is a "bzou", a werewolf. After killing and partly devouring the grandmother's body, he stores some of her flesh and fills a bottle with her blood. When the girl comes in, he directs her to eat and drink from these ghastly remains. Then he tells her to undress and lie down on the bed. Whenever the girl asks where to put each piece of clothing, the answer is always: "Throw it in the fire, my child; you don't need it anymore." In the ensuing dialogue about the peculiar physical attributes of the fake grandmother, when the question about her "big mouth" is asked the bzou gives the conventional reply: "All the better to eat you with, my child!"but this time the action does not follow the words. What happens instead is that the girl asks permission to go out to relieve herself, which is a ruse whereby she ends up outsmarting the villain and safely going back to home (cf. http://expositions.bnf.fr/contes/gros/chaperon/nivers.htm). An Italian variant published by Italo Calvino, entitled Il Lupo e le Tre Ragazze (The Wolf and the Three Girls) [START_REF] Calvino | Italian Folktales[END_REF], adopts the trebling device [START_REF] Propp | Morphology of the Folktale[END_REF] so common in folktales, making three sisters, one by one, repeat the action of taking victuals to their sick mother. The wolf intercepts each girl but merely demands the food and drink that they carry. The youngest girl, who is the protagonist, throws at the wolf a portion that she had filled with nails. This infuriates the wolf, who hurries to the mother's house to devour her and lay in wait for the girl. After the customary dialogue with the wolf posing as the mother, the animal also swallows the girl. The townspeople observe the wolf coming out, kill him and extract mother and girl alive from his belly. But that is not all, as Calvino admits in an endnote. Having found the text as initially collected by Giambattista Basile, he had deliberately omitted what he thought to be a too gruesome detail ("una progressione troppo truculenta"): after killing the mother, the wolf had made "a doorlatch cord out of her tendons, a meat pie out of her flesh, and wine out of her blood". Repeating the strange above-described episode of the Conte de la Mère-grand, the girl is induced to eat and drink from these remains, with the aggravating circumstance that they belonged to her mother, rather than to a more remotely related grandparent. Turning to China, one encounters the tale Lon Po Po (Grammie Wolf), translated by Ed Young [START_REF] Young | Lon Po Po: A Red-Riding Hood Story from China[END_REF], which again features three sisters but, unlike the Western folktale cliché, shows the eldest as protagonist, more experienced and also more resourceful than the others. The mother, here explicitly declared to be a young widow, goes to visit the grandmother on her birthday, and warns Shang, the eldest, not to let anyone inside during her absence. 
A wolf overhears her words, disguises as an old woman and knocks at the door claiming to be the grandmother. After some hesitation, the girls allow him to enter and, in the dark, since the wolf claims that light hurts his eyes, they go to bed together. Shang, however, lighting a candle for a moment catches a glimpse of the wolf's hairy face. She convinces him to permit her two sisters to go outside under the pretext that one of them is thirsty. And herself is also allowed to go out, promising to fetch some special nuts for "Grammie". Tired of waiting for their return, the wolf leaves the house and finds the three sisters up in a tree. They persuade him to fetch a basket mounted on which they propose to bring him up, in order to pluck with his own hands the delicious nuts. They pull on the rope attached to the basket, but let it go so that the wolf is seriously bruised. And he finally dies when the false attempt is repeated for the third time. Another Chinese variant features a bear as the villain: Hsiung chia P`o (Goldflower and the Bear) [START_REF] Mi | Goldflower and the Bear[END_REF], translated by Chiang Mi. The crafty protagonist, Goldflower, is once again an elder sister, living with her mother and a brother. The mother leaves them for one day to visit their sick aunt, asking the girl to take care of her brother and call their grandmother to keep them company during the night. The bear knocks at the door, posing as the grandmother. Shortly after he comes in, the girlin spite of the darknessends up disclosing his identity. She manages to lock the boy in another room, and then obeys the bear's request to go to bed at his side. The villain's plan is to eat her at midnight, but she asks to go out to relieve her tummy. As distrustful as the werewolf in the before-mentioned French variant, the bear ties one end of a belt to her handan equally useless precaution. Safely outside on top of a tree, Goldflower asks if he would wish to eat some pears, to be plucked with a spear, which the famished beast obligingly goes to fetch in the house. The girl begins with one fruit, but the next thing to be thrown into his widely open gullet is the spear itself. Coming back in the morning, the mother praises the brave little Goldflower. One variant, published in Portugal by Guerra Junqueiro, entitled O Chapelinho Encarnado [START_REF] Guerra Junqueiro | Contos para a Infância[END_REF], basically follows the Grimm brothers pattern. A curious twist is introduced: instead of luring the girl to pick up wild flowers, the wolf points to her a number of medicinal herbs, all poisonous plants in reality, and she mistakes him for a doctor. At the end, the initiative of filling the belly of the wolf with stones is attributed not to the girl, but to the hunter, who, after skinning the animal, merrily shares the food and drink brought by the girl with her and her grandmother. The highly reputed Brazilian folklorist Camara Cascudo included in his collection [START_REF] Camara Cascudo | Contos Tradicionais do Brasil[END_REF] a variant, O Chapelinho Vermelho, which also follows the Grimm brothers pattern. The mother is introduced as a widow and the name of the girl is spelled out: Laura. Although she is known, as the conventional title goes, by a nickname translatable as "Little Red Hat", what she wears every day is a red parasol, given by her mother. 
One more particularity is that, upon entering her grandmother's house, the girl forgets to close the door, so that finding the door open is what strikes the hunter as suspicious when he approaches the house. The hunter bleeds the wolf with a knife and, noticing his distended belly, proceeds to open it thus saving the two victims. Nothing is said about filling the wolf's belly with stones, the wounds inflicted by the hunter's knife having been enough to kill him. Two prudent lessons are learned: (1) Laura would not forget her mother's recommendation to never deviate from the path, the specific reason being given here that there existed evil beasts in the wood; (2) living alone should no longer be an option for the old woman, who from then on would dwell with her daughter and granddaughter. Comments on the formation of variants It is a truism that people tend to introduce personal contributions when retelling a story. There are also cultural time and place circumstances that require adaptations; for example, in the Arab world the prince would in no way be allowed to meet Cinderella in a ballroomhe falls in love without having ever seen her (cf. "Le Bracelet de Cheville" in the Mardrus translation of One Thousand and One Nights [START_REF] Mardrus | Les Mille et une Nuits[END_REF]). Other differences among variants may result from the level of education of the oral storytellers affecting how spontaneous they are, and the attitude of the collectors who may either prefer to reproduce exactly what they hear or introduce corrections and rational explanations while omitting indecorous or gruesome scenes. On the storyteller's part, however, this tendency is often attenuated by an instinctive pact with the audiencewith children, in specialin favour of faithful repetition, preferably employing the very same words. Indeed the genre of folktales is strongly marked by conventions which, to a remarkable extent, remain the same in different times and places. The folklorist Albert Lord called tension of essences the compulsion that drives all singers (i.e. traditional oral storytellers) to strictly enforce such conventions [29, p. 98]: In our investigation of composition by theme this hidden tension of essences must be taken into consideration. We are apparently dealing here with a strong force that keeps certain themes together. It is deeply imbedded in the tradition; the singer probably imbibes it intuitively at a very early stage of his career. It pervades his material and the tradition. He avoids violating the group of themes by omitting any of its members. [We shall see] that he will even go so far as to substitute something similar if he finds that for one reason or another he cannot use one of the elements in its usual form. The notion of tension of essences may perhaps help explaining not only the total permanence of some variants within the frontiers of a type, but also the emergence of transgressive variants, which absorb features pertaining to other types, sometimes even provoking a sensation of strangeness. When an oral storyteller feels the urge "to substitute something similar" in a story, the chosen "something" should, as an effect of the tension-of-essences forceful compulsion, still belong to the folktale genrebut what if the storyteller's repertoire comprises more than one folktale type? 
As happens with many classifications, the frontiers between the types in the Index are often blurred, to the point that one or more motifs can be shared and some stories may well be classified in more than one type. So a viable hypothesis can be advanced that some variants did originate through, so to speak, a type-contamination phenomenon. Accordingly we propose to study type interactions as a possible factor in the genesis of variants. We shall characterize the interactions that may occur among types, also involving motifs, by way of semiotic relations, taking an approach we applied before to the conceptual modelling of both literary genres and business information systems [START_REF] Ciarlini | Event relations in plot-based plot composition[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF][START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF]. We distinguish four kinds of semiotic relations, associated with the so-called four master tropes [START_REF] Burke | A Grammar of Motives[END_REF][START_REF] Chandler | Semiotics: the Basics[END_REF], whose significance has been cogently stressed by a literary theory scholar, Jonathan Culler, who regards them "as a system, indeed the system, by which the mind comes to grasp the world conceptually in language" [15, p. 72]. For the ideas and for the nomenclature in the table below, we are mainly indebted to the pioneering semiotic studies of Ferdinand de Saussure [START_REF] Saussure | Cours de Linguistique Générale[END_REF]: The itemized discussion below explores the meaning of each of the four semiotic relations, as applied to the derivation of folktale type variants stemming from AT 333. relation (1) Syntagmatic relation with type AT 123. As mentioned at the beginning of section 2, the Index describes type AT 333 as comprising two episodes, namely Wolf's Feast and Rescue, but the classic Perrault variant does not proceed beyond the end of the first episode. As a consequence, one is led to assume that the Rescue episode is not essential to characterize AT 333. On the other hand the situation created by Wolf's Feast is a long distance away from the happy-ending that is commonly expected in nursery fairy tales. A continuation in consonance with the Rescue episode, exactly as described in the Index, is suggested by AT 123: The Wolf and the Kids, a type pertaining to the group of Animal Tales, which contains the key motif F913: Victims rescued from swallower's belly. The connection (syntagmatic relation) whereby AT 123 complements AT 333 is explicitly declared in the Index by "cf." cross-references [1, p. 50, p. 125]. Moreover the Grimm brothers variant, which has the two episodes, is often put side by side with another story equally collected by them, The Wolf and the Seven Little Kids [START_REF] Grimm | The Complete Grimm's FairyTales[END_REF], clearly of type AT 123. Still it must be noted that several of the variants reported here do not follow the Grimm pattern in the Rescue episode. They diverge with respect to the outcome, which, as seen, may involve the death of the girl, or her rescue after being devoured, or even her being totally preserved from the villain's attempts either by miraculous protection or by her successful ruses. (2) Paradigmatic relation with type AT 311B*. For the Grimm variant, as also for those that follow its pattern (e.g. 
the Italian and the two Portuguese variants in section 3), certain correspondences or analogies can be traced with variants of type AT 311B*: The Singing Bag, a striking example being another story collected in Brazil by Camara Cascudo [START_REF] Camara Cascudo | Contos Tradicionais do Brasil[END_REF], A Menina dos Brincos de Ouro (The Girl with Golden Earrings). Here the villain is neither an animal nor a werewolf; he is a very ugly old man, still with a fearsome aspect but no more than human. The golden earrings, a gift from her mother, serve as the girl's characteristic attribute and have a function in the plot. As will be noted in the summary below, the villain's bag becomes the wolf's belly of the Grimm variant, and what is done to the bag mirrors the act of cutting the belly and filling it with stones. In this sense, the AT 311B* variant replaces the Grimm variant. One day the girl went out to bring water from a fountain. Having removed her earrings to wash herself, she forgot to pick them up before returning. Afraid to be reprimanded by her mother, she walked again to the fountain, where she was caught by the villain and sewed inside a bag. The man intended to use her to make a living. At each house that he visited, he advertised the magic bag, which would sing when he menaced to strike it with his staff. Everywhere people gave him money, until he came inadvertently to the girl's house, where her voice was recognized. He was invited to eat and drink, which he did in excess and fell asleep, whereat the bag was opened to free the girl and then filled with excrement. At the next house visited, the singing bag failed to work; beaten with the staff, it ruptured spilling its contents. (3) Meronymic relation with type AT 437. In The Story of Grandmother the paths taken by the girl and the werewolf to reach the old lady's house are called, respectively, the Needles Road and the Pins Road. And, strangely enough, while walking along her chosen path, the little girl "enjoyed herself picking up needles" [START_REF] Delarue | The Story of Grandmother[END_REF]. Except for this brief and puzzling mention, these objects remain as meaningless details, having no participation in the story. And yet, browsing through the Index, we see that needles and pins are often treated as wondrous objects (motifs D1181: Magic Needle and D1182: Magic Pin). And traversing the Index hierarchy upwards, from motifs to types, we find them playing a fundamental role in type AT 437: The Needle Prince (also named The Supplanted Bride), described as follows [1, p. 140]: "The maiden finds a seemingly dead prince whose body is covered with pins and needles and begins to remove them ... ". Those motifs are thus expanded into a full narrative in AT 437. Especially relevant to the present discussion is a variant from Afghanistan, entitled The Seventy-Year-Old Corpse reported by Dorson [START_REF] Dorson | Folktales Told Around the World[END_REF], which has several elements in common with the AT 333 variants. An important difference, though, also deserves mention: the girl lives alone with her old father, who takes her to visit her aunt. We are told that, instead of meeting the aunt, the girl finds a seventy year old corpse covered with needles, destined to revive if someone would pick the needles from his body. At the end the girl marries the "corpse", whereas no further news are heard about her old father, whom she had left waiting for a drink of water. 
One is tempted to say that Bruno Bettelheim would regard this participation of two old males, the father and the daunting corpse, as an uncannily explicit confirmation of the presence in two different formsof the paternal figure, in an "externalization of overwhelming oedipal feelings, and ... in his protective and rescuing function" [4, p. 178]. (4) Antithetic relation with type AT 449. Again in The Story of Grandmother we watch the strange scene of the girl eating and drinking from her grandmother's remains, punctuated by the acid comment of a little cat: "A slut is she who eats the flesh and drinks the blood of her grandmother!" The scene has no consequence in the plot, and in fact it is clearly inconsistent with the role of the girl in type AT 333. It would sound natural, however, in a type in opposition to AT 333, such as AT 449: The Tsar's Dog, wherein the roles of victim and villain are totally reversed. The cannibalistic scene in The Story of Grandmother has the effect of assimilating the girl to a ghoul (motif G20 in the Index), and the female villain of the most often cited variant of type AT 449, namely The Story of Sidi Nouman (cf. Andrew Lang's translation in Arabian Nights Entertainment) happens to be a ghoul. No less intriguing in The Story of Grandmother are the repartees in the ensuing undressing scene, with the villain (a werewolf, as we may recall) telling the girl to destroy each piece of clothing: "Throw it in the fire, my child; you don't need it anymore." This, too, turns out to be inconsequential in the plot, but was a major concern in the werewolf historical chronicles and fictions of the Middle Ages [START_REF] Baring-Gould | The Book of Were-Wolves[END_REF][START_REF] Sconduto | Metamorphoses of the Werewolf: A Literary Study from Antiquity Through the Renaissance[END_REF]. In 1521, the Inquisitor-General for the diocese of Besançon heard a case involving a certain Pierre Bourget [START_REF] Baring-Gould | The Book of Were-Wolves[END_REF]. He confessed under duress that, by smearing his body with a salve given by a demon, he became a wolf, but "the metamorphosis could not take place with him unless he were stark naked". And to recover his form he would "beat a retreat to his clothes, and smear himself again". Did the werewolf in The Story of Grandmother intend to transform the girl into a being of his species? Surely the anonymous author did not mean that, but leaving aside the norms of AT 333 the idea would not appear to be so farfetched. In this regard, also illustrating type AT 449, there are two medieval lays (short narrative poems) that deserve our attention. They are both about noble knights with the ability to transform themselves into wolves. In the two narratives, they are betrayed by their villainous wives, intent on permanently preventing their resuming the human form. In Marie de France's lay of Bisclavret [START_REF] De | The Lais of Marie de France[END_REF] an old Breton word signifying "werewolf"the woman accomplishes this effect by stealing from a secret hiding place the man's clothes, which he needed to put on again to undo the transformation. In the other example, the anonymous lay of Melion [START_REF] Burgess | Eleven Old French Narrative Lays[END_REF], after a magic ring is applied to break the enchantment, the man feels tempted to punish the woman by inflicting upon her the same metamorphosis. 
In the preceding discussion we purported to show how types can be semiotically related, and argued that such relations constitute a factor to be accounted for in the emergence of variants. We should add that types may be combined in various ways to yield more complex types, whose attractiveness is heightened by the occurrence of unexpected changes. Indeed Aristotle's Poetics distinguishes simple and complex plots, characterizing the latter by recognition (anagnorisis) and reversal (peripeteia). Differently from reversal, recognition does not imply that the world changed, but that the beliefs of the characters about themselves and the current facts were altered. In particular, could a legitimate folktale promote the union of monster and girl? Could we conciliate type AT 333 (where the werewolf is a villain) with the antithetically related medieval lays of type AT 449 (where the werewolf is the victim)? Such conciliations of opposites are treated under the topic of blending [START_REF] Fauconnier | Conceptual projection and middle spaces[END_REF], often requiring creative adaptations. A solution is given by type AT 425C: Beauty and the Beast. At first the Beast is shown as the villain, claiming the life of the merchant or else of one of his daughters: "Go and see if there's one among them who has enough courage and love for you to sacrifice herself to save your life" [41, p. 159], but then he proves to be the victim of an enchantment. Later, coming to sense his true inner nature (an event of recognition, as in Aristotle), Belle makes him human again by manifesting her love (motif D735-1: Disenchanting of animal by being kissed by woman). So, it is as human beings that they join. Alternatively, we might combine AT 333 and AT 449 by pursuing until some sort of outcome the anomalous passages of The Story of Grandmother, allowing the protagonists to join in a non-human form. The werewolf feeds human flesh of his victim to the girl, expecting that she would transform herself like he did (as Melion for a moment thought to cast the curse upon his wife), thereby assuming a shape that she would keep forever once her clothes were destroyed (recall the concern of Pierre Bourget to "beat a retreat to his clothes", and the knight's need to get back his clothes in Bisclavret). At the end the two werewolves would marry and live happily forever after, as a variant of an admittedly misbegotten new type (of, perhaps, a modern appeal, since it would also include among its variants the story of the happy vampires Edward and Bella in the Twilight Saga: http://twilightthemovie.com/). First steps towards variants in computer-generated stories To explore in a computer environment the variants of folktale types, kept in a library of typical plans, we developed a system in C# that does plan-recognition over the variants of the type indicated (e.g. AT 333), with links to pages of semiotically related types (e.g. AT 123, AT 311B*, AT 437, AT 449). Plan-recognition involves matching a number of actions against a pre-assembled repertoire of plot patterns (cf. [START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF]). Let P be a set of m variants of a specific tale type that are represented by complete plans, P = {P_1, P_2, ..., P_m}, where each plan is a sequence of events, i.e. P_i = <e_1^i, e_2^i, ..., e_{n_i}^i>.
These events are actions with ground arguments that are story elements (specific names, places, and objects). For instance, P_k = go(Abel, Beach), meet(Abel, Cain), kill(Cain, Abel). The library of typical plans is defined by associating each plan P_i with the following elements: (1) the story title; (2) a set of parameterized terms, akin to those we use in Logtell [START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF] to formalize Proppian functions, describing the story events; (3) the specification of the characters' roles (e.g. villain, victim, hero) and objects' functions (e.g. wolf's feast place, basket contents); (4) the semiotic relations of the story with other variants of same or different types (Section 4); (5) a text template used to display the story as text, wherein certain phrases are treated as variables (written in the format #VAR1#); and (6) the comics resources used for dramatization, indicating the path to the folder that contains the images representing the characters and objects of the narrative and a set of event templates to describe the events textually. The library is specified in an XML file. Let T be a partial plan expressed as a sequence of events given by the user. The system finds plans in P that are consistent with T. During the searching process, the arguments of the events in P are instantiated. For example, with the input T = {give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), eat(Joe, Little Ring Girl)}, the following stories are generated: Story 1: give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), go(Little Ring Girl, the woods), meet(Little Ring Girl, Joe), go(Joe, Grandmother's house), eat(Joe, Anne), disguise(Joe, Anne), lay_down(Joe, Grandmother's bed), go(Little Ring Girl, Grandmother's house), delivery(Little Ring Girl, tea), question(Little Ring Girl, Joe), eat(Joe, Little Ring Girl), sleep(Joe), go(Hunter, Grandmother's house), cut(Hunter, Joe, axe), jump_out_of(Little Ring Girl, Joe), jump_out_of(Anne, Joe), die(Joe). Story 2: give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), go(Little Ring Girl, the woods), meet(Little Ring Girl, Joe), go(Joe, Grandmother's house), eat(Joe, Anne), disguise(Joe, Anne), lay_down(Joe, Grandmother's bed), go(Little Ring Girl, Grandmother's house), lay_down(Little Ring Girl, Grandmother's bed), delivery(Little Ring Girl, tea), question(Little Ring Girl, Joe), eat(Joe, Little Ring Girl). These correspond, respectively, to the Grimm and Perrault AT 333 variants, rephrased to display the names of characters and objects given by the user. Our plan recognition algorithm employs a tree structure, which we call generalized plan suffix tree. Based on the suffix tree commonly used for string pattern matching [START_REF] Gusfield | Algorithms on Strings, Trees, and Sequences[END_REF], this trie-like data structure contains all suffixes p_k of each plan in P. If a plan P_i has a sequence of events p = e_1 e_2 ... e_k ... e_N, then p_k = e_k e_{k+1} ... e_N is the suffix of p that starts at position k (we have dropped the index i of the expressions p and p_k for the sake of simplicity). In a generalized plan suffix tree S, edges are labeled with the parameterized plan events that belong to each suffix p_k, and the leaves point to the complete plans ending in p_k. Each suffix is padded with a terminal symbol $i that uniquely signals the complete plan in the leaf node.
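To make the matching step concrete, the sketch below shows in Java how a ground partial plan can be matched against one parameterized library plan: input terms are consumed in chronological but not necessarily consecutive order, and plan variables are bound to the constants supplied by the user. This is only an illustration of the idea under our own assumptions: it deliberately omits the suffix-tree indexing and the backtracking over alternative matches described below, the variable convention (a capitalized single token is a variable) is ours, and the class names do not come from the actual C# system.

```java
import java.util.*;

// One story event, e.g. give(Anne, ring, Little Ring Girl).
record Event(String action, List<String> args) {}

public class PlanMatcher {

    // Our convention for this sketch: a single capitalized token with no spaces is a plan variable.
    private static boolean isVariable(String token) {
        return !token.isEmpty() && Character.isUpperCase(token.charAt(0)) && !token.contains(" ");
    }

    // Greedy, single-pass variant of the matching step; the paper's algorithm also backtracks
    // to try alternative matches when this greedy pass fails.
    public static List<Event> match(List<Event> planEvents, List<Event> inputTerms) {
        Map<String, String> binding = new HashMap<>();
        int next = 0; // index of the next input term still to be matched
        for (Event planEvent : planEvents) {
            if (next < inputTerms.size() && unify(planEvent, inputTerms.get(next), binding)) {
                next++;
            }
        }
        if (next != inputTerms.size()) return null; // not every input term could be matched
        List<Event> instantiated = new ArrayList<>();
        for (Event planEvent : planEvents) instantiated.add(instantiate(planEvent, binding));
        return instantiated;
    }

    // Unifies a parameterized plan event with a ground input event, extending the binding map.
    private static boolean unify(Event planEvent, Event ground, Map<String, String> binding) {
        if (!planEvent.action().equals(ground.action())
                || planEvent.args().size() != ground.args().size()) return false;
        Map<String, String> trial = new HashMap<>(binding);
        for (int i = 0; i < planEvent.args().size(); i++) {
            String p = planEvent.args().get(i), g = ground.args().get(i);
            if (isVariable(p)) {
                String bound = trial.get(p);
                if (bound == null) trial.put(p, g);
                else if (!bound.equals(g)) return false;
            } else if (!p.equals(g)) {
                return false;
            }
        }
        binding.putAll(trial);
        return true;
    }

    // Replaces variables by their bound values; unbound variables keep their names
    // (in the real system they would receive the default values defined in the library).
    private static Event instantiate(Event e, Map<String, String> binding) {
        List<String> args = new ArrayList<>();
        for (String a : e.args()) args.add(binding.getOrDefault(a, a));
        return new Event(e.action(), args);
    }
}
```

Applied to every plan in the library, this yields the set of instantiated candidate stories; the generalized plan suffix tree described above serves to share this work across plans instead of matching each one independently.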
Figure 1 shows an example of generalized plan suffix tree generated for the plan sequences P_1 = {go(A, B), meet(A, C), kill(C, A)} and P_2 = {tell(A, B, C), meet(A, C), go(A, D)}.
Fig. 1. Generalized plan suffix tree for P_1 = {go(A, B), meet(A, C), kill(C, A)} and P_2 = {tell(A, B, C), meet(A, C), go(A, D)}.
The process of searching for plans that match a given partial plan T, expressed as a sequence of input terms, is straightforward: starting from the root node, the algorithm sequentially matches T against the parameterized plan events on the edges of the tree, in chronological but not necessarily consecutive order, instantiating the event variables and proceeding until all input terms are matched and a leaf node is reached. If more solutions are requested, a backtracking procedure tries to find alternative paths matching T. The search process produces a set of complete plans G, with the event variables instantiated with the values appearing in the input partial plan or, for events not present in the partial plan, with the default values defined in the library. After generating G through plan-recognition, the system allows users to apply the semiotic relations (involving connection, similarity, unfolding, and opposition) and explore other variants of same or different types. The process of searching for variants uses the semiotic relations specified in the library of typical plans to create a link between a g_i in G and its semiotically related variants. When instantiating one such variant v_i, the event variables of v_i are instantiated according to the characters and objects that play important roles in the baseline story g_i. Characters playing roles in g_i that also exist in v_i assume the same role in the variant. For roles that only exist in v_i, the user is asked to name the characters who would fulfil such roles. Following the g_i → v_i links taken from the examples of Section 4, the user gains a chance to reinterpret the g_i AT 333 variant, in view of aspects highlighted in the semiotically related v_i: 1. the wolf's villainy complemented by a rescue act (AT 123); 2. the wolf and his belly replaced by the ugly man and his bag (AT 311B*); 3. the girl's gesture of picking needles expanded to the wider scope of a disenchantment ritual (AT 437); 4. girl and werewolf with reversed roles of villain and victim (AT 449). As illustrated in Figure 2, our system supports two dramatization modalities: text and comics. The former uses the original literary rendition of the matched typical plan as a template and represents the generated stories in text format. The latter offers a storyboard-like comic strip representation, where each story event gains a graphical illustration and a short sentence description. In the illustrations, the automatic scene-compositing process takes into account the specific object carried by each character and the correct movement directions. More details on the generation of comic strips can be found in our previous work on interactive comics [START_REF] Lima | Non-Branching Interactive Comics[END_REF].
Fig. 2. Plan recognition system: (a) main user interface; (b) comics dramatization; (c) a variant for story 1; and (d) text dramatization.
Concluding remarks Working over the variants of a given type, or of a similarly predefined genre, readers have a fair chance to find a given story in a treatment as congenial as possible to their tastes and personality profile. Moreover, prospective amateur authors may feel inspired to put together new variants of their own after seeing how variants can derive from the type and motif interactions that we associate with semiotic relations. They would learn how new stories can arise from episodes of existing stories, through a process, respectively, of concatenation, analogous substitution, expansion into finer grained actions, or radical reversal. Computer-based libraries, such as we described, should then constitute a vital first step in this direction. In particular, by also representing the stories as plans, in the form of sequences of terms denoting the story events (cf. the second paragraph of Section 5), we effectively started to combine the two approaches mentioned in the Introduction, namely Aarne-Thompson's types and motifs and Proppian functions, and provided a bridge to our previously developed Logtell prototypes [START_REF] Ciarlini | Modeling interactive storytelling genres as application domains[END_REF][START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF][START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF]. We expect that our analysis of variants, stimulated by further research efforts in the line of computational narratology, may contribute to the design of semi-automatic methods for supporting interactive plot composition, to be usefully incorporated into digital storytelling systems.
Footnotes: 1. The full texts of the variants cited in the text are available at http://www-di.inf.puc-rio.br/~furtado/LRRH_texts.pdf 2. Aristotle's Poetics: http://www.gutenberg.org/files/1974/1974-h/1974-h.htm
Acknowledgements This work was partially supported by CNPq (National Council for Scientific and Technological Development, linked to the Ministry of Science, Technology, and Innovation), CAPES (Coordination for the Improvement of Higher Education Personnel), FINEP (Brazilian Innovation Agency), ICAD/VisionLab (PUC-Rio), and Oi Futuro Institute.
43,316
[ "1011688", "995185", "1011687" ]
[ "362752", "362752", "362752" ]
01758437
en
[ "info" ]
2024/03/05 22:32:10
2015
https://inria.hal.science/hal-01758437/file/371182_1_En_20_Chapter.pdf
Vojtech Cerny Filip Dechterenko email: [email protected] Rogue-like Games as a Playground for Artificial Intelligence -Evolutionary Approach Keywords: artificial intelligence, computer games, evolutionary algorithms, rogue-like Rogue-likes are difficult computer RPG games set in a procedurally generated environment. Attempts have been made at playing these algorithmically, but few of them succeeded. In this paper, we present a platform for developing artificial intelligence (AI) and creating procedural content generators (PCGs) for a rogue-like game Desktop Dungeons. As an example, we employ evolutionary algorithms to recombine greedy strategies for the game. The resulting AI plays the game better than a hand-designed greedy strategy and similarly well to a mediocre player -winning the game 72% of the time. The platform may be used for additional research leading to improving rogue-like games and general PCGs. Introduction Rogue-like games, as a branch of the RPG genre, have existed for a long time. They descend from the 1980 game "Rogue" and some old examples, such as NetHack (1987), are played even to this day. Many more of these games are made every year, and their popularity is apparent. A rogue-like is a single-player, turn-based, highly difficult RPG game, featuring a randomized environment and permanent death 1 . The player takes the role of a hero, who enters the game's environment (often a dungeon) with a very difficult goal. Achieving the goal requires a lot of skill, game experience and perhaps a little bit of luck. Such a game, bordering between RPG and puzzle genres, is challenging for artificial intelligence (AI) to play. One often needs to balance between being reactive (dealing with current problems) and proactive (planning towards the main goal). Attempts at solving rogue-likes by AI have been previously made [START_REF] Mauldin | ROG-O-MATIC: a belligerent expert system[END_REF][START_REF]Tactical Amulet Extraction Bot (TAEB) -Other Bots[END_REF][START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF], usually using a set of hand-coded rules as basic reasoning, and being to some extent successful. On the other hand, the quality of a rogue-like can heavily depend on its procedural content generator (PCG), which usually creates the whole environment. Procedural generation [START_REF] Shaker | Procedural Content Generation in Games: A Textbook and an Overview of Current Research[END_REF] has been used in many kinds of games [START_REF] Togelius | Search-based procedural content generation: A taxonomy and survey[END_REF][START_REF] Hendrikx | Procedural content generation for games: A survey[END_REF], and thus, the call for high-quality PCG is clear [START_REF] Liapis | Towards a Generic Method of Evaluating Game Levels[END_REF]. However, evaluating the PCG brings issues [START_REF] Dahlskog | A Comparative Evaluation of Procedural Level Generators in the Mario AI Framework[END_REF][START_REF] Smith | The Seven Deadly Sins of PCG Research[END_REF], such as how to balance between the criteria of high quality and high variability. But a connection can be made to the former -we could conveniently use the PCG to evaluate the artificial player and similarly, use the AI to evaluate the content generator. The latter may also lead to personalized PCGs (creating content for a specific kind of players) [START_REF] Shaker | Towards Automatic Personalized Content Generation for Platform Games[END_REF]. 
In this paper, we present a platform for developing AI and PCG for a rogue-like game Desktop Dungeons [11]. It is intended as an alternative to other used AI or PCG platforms, such as the Super Mario AI Benchmark [START_REF] Karakovskiy | The Mario AI Benchmark and Competitions[END_REF] or SpelunkBots [START_REF] Scales | SpelunkBots API -An AI Toolset for Spelunky[END_REF]. AI platforms have even been created for a few rogue-like games, most notably NetHack [START_REF]Tactical Amulet Extraction Bot (TAEB) -Other Bots[END_REF][START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF]. However, Desktop Dungeons has some characteristics making it easier to use than the other. Deterministic actions and short play times help the AI, while small dungeon size simplifies the work of a PCG. And as such, more experimental and resource demanding approaches may be tried. The platform could also aid other kinds of research or teaching AI, as some people create their own example games for this purpose [START_REF] Russell | Artificial Intelligence: A Modern Approach[END_REF]Chapter 21.2], where Desktop Dungeons could be used instead. The outline of this paper is as follows. First, we introduce the game to the reader, then we proceed to describe our platform, and finally, we will show how to use it to create a good artificial rogue-like player using evolutionary algorithms. Desktop Dungeons Description Desktop Dungeons by QCF Design [11] is a single-player computer RPG game that exhibits typical rogue-like features. The player is tasked with entering a dungeon full of monsters and, through careful manipulation and experience gain, slaying the boss (the biggest monster). Disclaimer: The following explanation is slightly simplified. More thorough and complete rules can be found at the Desktop Dungeons wiki page [START_REF]Desktop Dungeons -DDwiki[END_REF]. Dungeon The dungeon is a 20 × 20 grid viewed from the top. The grid cells may contain monsters, items, glyphs, or the hero (player). Every such object, except for the hero, is static -does not move2 . Only a 3 × 3 square around the hero is revealed in the beginning, and the rest must be explored by moving the hero next to it. Screenshot of the dungeon early in the game can be seen in Fig. 1. Hero The hero is the player-controlled character in the dungeon and holds a set of values. Namely: health, mana, attack power, the number of health/mana potions, and his spell glyphs. The hero can also perform a variety of actions. He can attack a monster, explore unrevealed parts of the dungeon, pick up items and glyphs, cast spells or convert glyphs into bonuses. Exploring Unrevealed grid cells can be explored by moving the hero next to them (at least diagonally). Not only does exploration reveal what lies underneath for the rest of the game, but it also serves one additional purpose -restoring health and mana. Every square explored will restore health equal to the hero's level and 1 mana. This means that the dungeon itself is a scarce resource that has to be managed wisely. It shall be noted, though, that monsters heal also when hero explores, so this cannot be used to gain an edge over damaged monsters. Combat Whenever the hero bumps into a monster, a combat exchange happens. The higher level combatant strikes first (monster strikes first when tied). The first attacker reduces his opponent's health by exactly his attack power. The other attacker, if alive, then does the same. No other action causes any monster to attack the hero. 
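The combat rule just described is simple enough to state directly in code. The following Java fragment is a minimal sketch of one exchange as described above (the higher-level combatant strikes first, ties favour the monster, and each strike removes exactly the attacker's attack power); the class and field names are ours, not those of the actual framework.

```java
// Minimal model of one combat exchange in Desktop Dungeons, as described above.
class Fighter {
    String name;
    int level, health, attackPower;

    Fighter(String name, int level, int health, int attackPower) {
        this.name = name; this.level = level; this.health = health; this.attackPower = attackPower;
    }

    boolean isAlive() { return health > 0; }
}

class Combat {
    // Resolves a single exchange between the hero and a monster; the monster strikes first on ties.
    static void exchange(Fighter hero, Fighter monster) {
        Fighter first = hero.level > monster.level ? hero : monster;
        Fighter second = (first == hero) ? monster : hero;
        second.health -= first.attackPower;       // first striker hits
        if (second.isAlive()) {
            first.health -= second.attackPower;   // survivor retaliates
        }
    }
}
```

Because combat is the only source of damage to the hero, repeatedly calling exchange until one side runs out of health is enough to predict the outcome of a whole fight in advance, which is exactly the kind of simulation the AI described later relies on.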
Items Several kinds of items can be found lying on the ground. These comprise of a Health Powerup, Mana Powerup, Attack Powerup, Health Potion and a Mana Potion. These increase the hero's health, mana, attack power, and amount of health and mana potions respectively. Glyphs Spell glyphs are special items that each allow the hero to cast one kind of spell for it's mana cost. The hero starts with no glyphs, and can find them lying in the dungeon. Common spells include a Fireball spell, that directly deals damage to a monster (without it retaliating), and a Kill Protect spell, that saves the hero from the next killing blow. Additionally, a spell glyph can be converted to a racial bonus -a specific bonus depending on the hero's race. These are generally small stat increases or an extra potion. The spell cannot be cast anymore, so the hero should only convert glyphs he has little use for. Hero Races and Classes Before entering the dungeon, the player chooses a race (Human, Elf, etc.) and a class (Warrior, Wizard, etc.) of his hero. The race determines only the reward for converting a glyph, but classes can modify the game in a completely unique way. Other The game has a few other unmentioned mechanics. The player can enter special "challenge" dungeons, he can find altars and shops in the dungeon, but all that is far beyond the basics we'll need for our demonstration. As mentioned, more can be found at the Desktop Dungeons wiki [START_REF]Desktop Dungeons -DDwiki[END_REF]. AI Platform Desktop Dungeons has two parameters rarely seen in other similar games. Every action in the game is deterministic3 (the only unknown is the unrevealed part of the dungeon) and the game is limited to 20 × 20 grid cells and never extends beyond. These may allow for better and more efficient AI solutions, and may be advantageously utilized when using search techniques, planning, evaluating fitness functions, etc. On the other hand, Desktop Dungeons is a very interesting environment for AI. It is complex, difficult, and as such can show usefulness of various approaches. Achieving short-term and long-term goals must be balanced, and thus, simple approaches tend to not do well, and must be specifically adjusted for the task. Not much research has been done on solving rogue-like games altogether, only recently was a famous, classic title of this genre -NetHack -beaten by AI [START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF]. From the perspective of a PCG, Desktop Dungeons is similarly interesting. The size of the dungeon is very limited, so attention to detail should be paid. If one has an artificial player, the PCG could use him as a measure of quality, even at runtime, to produce only the levels the artificial player found enjoyable or challenging. This is why we created a programming interface (API) to Desktop Dungeons, together with a Java framework for easy AI and PCG prototyping and implementation. We used the alpha version of Desktop Dungeons, because it is more direct, contains less story content and player progress features, runs in a browser, and the main gameplay is essentially the same as in the full version. The API is a modified part of the game code that can connect to another application, such as our framework, via a WebSocket (TCP) protocol and provide access to the game by sending and receiving messages. A diagram of the API usage is portrayed in Fig. 2. The framework allows the user to focus on high-level programming, and have the technical details hidden from him. 
It efficiently keeps track of the dungeon elements, and provides full game simulation, assisting any search techniques and heuristics that might be desired. The developed artificial players can be tested against the default PCG of the game, which has the advantage of being designed to provide challenging levels for human players, or one can generate the dungeon on his own and submit it to the game. Intermediate ways can also be employed, such as editing the dungeons generated by the game's PCG to e.g. adjust the difficulty or reduce the complexity of the game. The framework is completely open-source and its repository can be found at https://bitbucket.org/woitee/desktopdungeons-java-framework. Evolutionary Approach To demonstrate the possibilities of the Desktop Dungeons API, we have implemented an evolutionary algorithm (EA) [START_REF] Mitchell | An Introduction to Genetic Algorithms[END_REF] to fine-tune greedy AI. A general explanation of EAs is, however, out of the scope of this paper. Simple Greedy Algorithm The original greedy algorithm was a simple strategy for each moment of the game. It is best described by a list of actions, ordered by priority. 1. Try picking up an item. 2. Try killing a monster (prefer strongest). 3. Explore. The hero tries to perform the highest rated applicable action, and when none exists, the run ends. Killing the monster was attempted by just simulating attacks, fireballs and drinking potions until one of the participants died. If successful, the sequence of actions was acted out. This can be modeled as a similar list of priority actions: 1. Try casting the Fireball spell. 2. Try attacking. 3. Try drinking a potion. Some actions have parameters, e.g. how many potions is the hero allowed to use against a certain level of monster. These were set intuitively and tuned by trial and error. This algorithm has yielded good results. Given enough time (weeks, tens of thousands of runs), this simple AI actually managed to luck out and kill the boss. This was very surprising, we thought the game would be much harder to beat, even with chance on our side. It was probably caused by the AI always calculating how to kill every monster it sees, which is tedious and error-prone for human players to do. Design of the Evolution We used two ordered lists of elementary strategies in the greedy approach, but we hand-designed them and probably have not done that optimally. This would become increasingly more difficult, had we added more strategies to the list. We'll solve this by using evolutionary algorithms. We'll call the strategies used to select actions in the game maingame strategies and the strategies used when trying to kill monsters attack strategies. Each strategy has preconditions (e.g. places to explore exists) and may have parameters. We used as many strategies as we could think of, which resulted in a total of 7 maingame strategies and 13 attack strategies. The evolutionary algorithm was tasked with ordering both lists of strategies, and setting their parameters. It should be emphasized, that this is far from an easy task. Small imperfections in the strategy settings accumulate over the run, and thus only the very refined individuals have some chance of slaying the final boss. However, the design makes the AI ignore some features of the game. It doesn't buy items in shops nor does it worship any gods. These mechanics are nevertheless quite advanced, and should not be needed to win the basic setting of the game. 
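The two priority lists above translate naturally into an ordered list of strategy objects, each with an applicability test. The sketch below is a minimal Java rendition of that control loop, not the authors' actual implementation: the Strategy and GameState interfaces and the strategy names are placeholders of ours, and, like the AI described above, it simply leaves shops and gods out of the picture.

```java
import java.util.List;

// A strategy knows when it applies and how to act; higher-priority strategies come first in the list.
interface Strategy {
    boolean isApplicable(GameState state);
    void execute(GameState state);
}

// Placeholder for the framework's view of the dungeon (hero stats, visible monsters, items, unexplored cells).
interface GameState {
    boolean finished();
}

class GreedyAgent {
    private final List<Strategy> maingameStrategies; // e.g. PickUpItem, KillStrongestMonster, Explore
    GreedyAgent(List<Strategy> maingameStrategies) { this.maingameStrategies = maingameStrategies; }

    // Repeatedly performs the highest-priority applicable strategy; the run ends when none applies.
    void play(GameState state) {
        while (!state.finished()) {
            Strategy chosen = null;
            for (Strategy s : maingameStrategies) {
                if (s.isApplicable(state)) { chosen = s; break; }
            }
            if (chosen == null) return; // no applicable action: end of run
            chosen.execute(state);
        }
    }
}
```

The attack strategies (fireball, plain attack, potion) fit the same interface and are consulted, in their own priority order, inside the "try killing a monster" strategy while it simulates whether a fight can be won.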
Using shops and gods improperly can have back-biting effects, so we just decided to ignore them to keep the complexity low. On a side note, this design is to a certain extent similar to linear genetic programming [START_REF] Brameier | Linear Genetic Programming[END_REF]. Fitness Function Several criteria could be considered when designing the fitness function. An easy solution would be to use the game's score, which is awarded after every run. However, the score takes into account some attributes that do not directly contribute towards winning the game, e.g. awarding bonuses for low completion time, or never dropping below 20% of health. We inspired ourselves by the game's scoring, but simplified it. Our basic fitness function evaluates the game's state at the end of the run and looks like this: fitness = 10 • xp + 150 • healthpotions + 75 • manapotions + health. The main contributor is the total gained XP (experience points; good runs get awarded over a hundred), and additionally, we slightly reward leftover health and potions. We take these values from three runs and add them together. Three runs are too few to have low variance on subsequent evaluations, but this yields far better results than evaluating only one run, and more runs than three would just take too much time to complete. If the AI manages to kill the boss in any of the runs, we triple the fitness value of that run. This may look a little over the top, but slaying the final monster is very difficult, and if one of the individuals is capable of doing so, we want to spread its genes in the population. Note that we don't expect our AI to kill the boss reliably; a 5-10% chance is more what we are aiming for. We have tried a variety of fitness functions, taking into account other properties of the game state and with different weights. For a very long time, the performance of the bots was similar to the hand-designed greedy strategy. But, by analyzing more of the game, we constructed roughly the fitness function above and the performance hugely improved. The improvement lies in the observation of how the bots can improve during the course of evolution. Strong bots in the early stages will probably just use objectively good strategies, and not make complete blunders in strategy priorities, such as exploring the whole level before trying to kill anything. This should already make them capable of killing quite a few monsters. Then, the bots can improve and fine-tune their settings, to use fewer and fewer resources (mainly potions) to kill as many monsters as possible. And towards the late stages of evolution, the bots can play the game so effectively that they may still have enough potions and other resources to kill the final boss and beat the game. The current fitness function supports this improvement, because the fitness values of the hypothetical bots in subsequent stages of evolution rise continuously. After implementation, this was exactly the course the bots evolved through. Note that saving at least a few potions for the final boss fight is basically a necessary condition for success.
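Read as code, the fitness evaluation over three runs is straightforward. The sketch below mirrors the formula and the boss-kill tripling described above; RunResult is an assumed record holding the end-of-run totals, not a type from the framework.

```java
// End-of-run totals needed by the fitness function.
record RunResult(int xp, int healthPotions, int manaPotions, int health, boolean bossKilled) {}

class Fitness {
    // Fitness of a single run: 10*xp + 150*healthpotions + 75*manapotions + health, tripled on a boss kill.
    static double ofRun(RunResult r) {
        double f = 10.0 * r.xp() + 150.0 * r.healthPotions() + 75.0 * r.manaPotions() + r.health();
        return r.bossKilled() ? 3.0 * f : f;
    }

    // Fitness of an individual: the sum over its three evaluation runs.
    static double ofIndividual(Iterable<RunResult> threeRuns) {
        double total = 0.0;
        for (RunResult r : threeRuns) total += ofRun(r);
        return total;
    }
}
```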
Genetic Operators Priorities of the strategies are represented by floating point numbers in the [0, 1] interval. Together with the strategy's parameter values, a strategy can be encoded as just a few floating point numbers, integers and booleans. This representation allows us to use classical operators like one-/two-point crossovers and small change mutations. These make good sense and work, but they are not necessarily optimal; after some trial and error, we started using a weighted average operator to crossover the priorities for better performance. The AIs evolved with these settings were just a little too greedy, often using all their potions in the early game, and even though they advanced far, they basically had no chance of beating the final boss. These strategies found quite a strong local optimum of the fitness, and we wanted to slightly punish them for it. We did so in two ways. Firstly, we rewarded leftover potions in our fitness value calculation, and secondly, a smart mutation was added that modifies a few individuals from the population to not use potions to kill monsters of lower level than 5. After some balancing, this has shown itself to be effective. Mating and natural selection were done by simple roulette, i.e. individuals were chosen with probability proportional to their fitness. This creates a rather low selection pressure, and together with a large enough number of individuals in a generation, the evolution should explore a large portion of the candidate space and tune the strategies finely.
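Gathering the operators mentioned above, the following Java sketch shows one plausible shape of the genome, the weighted-average crossover of priorities, a small-change mutation and roulette selection. The concrete rates and perturbation sizes are our own illustrative guesses rather than the values used in the experiments, and the potion-restricting "smart" mutation would be an additional operator on the parameter vector, not shown here.

```java
import java.util.*;

class Genome {
    double[] priorities;   // one priority in [0, 1] per strategy, covering both strategy lists
    int[] parameters;      // e.g. how many potions may be spent on a monster of a given level

    Genome(double[] priorities, int[] parameters) {
        this.priorities = priorities.clone();
        this.parameters = parameters.clone();
    }
}

class Operators {
    static final Random RNG = new Random();

    // Weighted-average crossover of priorities; parameters are inherited uniformly at random.
    static Genome crossover(Genome a, Genome b) {
        double w = RNG.nextDouble();
        double[] p = new double[a.priorities.length];
        for (int i = 0; i < p.length; i++) p[i] = w * a.priorities[i] + (1 - w) * b.priorities[i];
        int[] q = new int[a.parameters.length];
        for (int i = 0; i < q.length; i++) q[i] = RNG.nextBoolean() ? a.parameters[i] : b.parameters[i];
        return new Genome(p, q);
    }

    // Small-change mutation: perturb each priority with low probability, clamped back to [0, 1].
    static void mutate(Genome g, double rate) {
        for (int i = 0; i < g.priorities.length; i++) {
            if (RNG.nextDouble() < rate) {
                g.priorities[i] = Math.min(1.0, Math.max(0.0, g.priorities[i] + RNG.nextGaussian() * 0.1));
            }
        }
    }

    // Roulette selection: an individual is picked with probability proportional to its (non-negative) fitness.
    static int roulette(double[] fitness) {
        double total = 0.0;
        for (double f : fitness) total += f;
        double r = RNG.nextDouble() * total;
        for (int i = 0; i < fitness.length; i++) {
            r -= fitness[i];
            if (r <= 0) return i;
        }
        return fitness.length - 1;
    }
}
```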
Conclusion We present a platform for creating AI and PCG for the rogue-like game Desktop Dungeons. As a demonstration, we created an artificial player with an EA adjusting greedy algorithms. This AI performed better than the hand-made greedy algorithm, winning the game roughly three quarters of the time compared to a win rate of much less than 1%, and being as successful as an average human player. This shows that the game's original PCG worked quite well, not generating a great abundance of impossible levels, yet still providing a good challenge. A lot of research is possible with this platform. The AI could be improved by using more complex EAs, or created from scratch using other techniques, such as search, planning and others. The PCG could be improved to, e.g., create more varied challenges for the player, adjust difficulty for stronger/weaker players or reduce the number of levels that are impossible to win. For evaluating the PCG, we could advantageously utilize the AI and record some statistics, such as the win rate, how often different strategies are employed, or the number of steps to solve a level. A combination of these would then create a rating function. Also, it would be very interesting to keep improving both the artificial player and the PCG iteratively against each other.
Fig. 1. Screenshot of the dungeon, showing the hero, monsters, and an item (a health potion). The dark areas are the unexplored parts of the dungeon.
Fig. 2. The API, as a part of the game, connects to an application using a WebSockets protocol and provides access to the game by receiving and sending messages.
The game offers no save/load features; it is always replayed from beginning to end. Some spells and effects move monsters, but that is quite uncommon and can be ignored for our purpose. Some rare effects have probabilistic outcomes, but with a proper game setting, this may be completely ignored.
23,075
[ "1030231", "1030232" ]
[ "304738", "304738" ]
01758442
en
[ "info" ]
2024/03/05 22:32:10
2015
https://inria.hal.science/hal-01758442/file/371182_1_En_4_Chapter.pdf
Augusto Baffa email: [email protected] Marcus Poggi email: [email protected] Bruno Feijó email: [email protected] Adaptive Automated Storytelling based on Audience Response Keywords: Social Interaction, Group decision making, Model of Emotions, Automated Storytelling, Audience model, Optimization application
To tell a story, the storyteller uses all his/her skills to entertain an audience. This task relies not only on the act of telling a story, but also on the ability to understand the reactions of the audience during the telling. A well-trained storyteller knows whether the audience is bored or enjoying the show just by observing the spectators, and adapts the story to please the audience. In this work, we propose a methodology to create stories tailored to an audience based on the personality traits and preferences of each individual. As an audience may be composed of individuals with similar or mixed preferences, it is necessary to consider a middle-ground solution based on the individual options. In addition, individuals may have some kind of relationship with others that influences their decisions. The proposed model addresses all steps in the quest to please the audience. It infers what the preferences are, computes the scene rewards for all individuals, estimates their choices independently and in group, and allows Interactive Storytelling systems to find the story that maximizes the expected audience reward. Introduction Selecting the best events of a story to please the audience is a difficult task. It requires continued observation of the spectators. It is also necessary to understand the preferences of each individual in order to ensure that the story is able to entertain and engage as many spectators as possible. Since an interactive story is non-linear, having several possible branches until the end, the objective of a storyteller is to find the best ones considering the audience profile, the dramatic tension and the emotions aroused in the individuals. Empathy is the psychological ability to feel what another person would feel if you were experiencing the same situation. It is a way to understand feelings and emotions, looking in an objective and rational way at what another person feels [START_REF] Davis | A multidimensional approach to individual differences in empathy[END_REF]. Based on empathy, it is possible to learn what the audience likes. This allows selecting similar future events along the story and, therefore, maximizing the audience rating. The proposed method aims to select the best sequence of scenes for a given audience, trying to maximize the acceptance of the story and reduce drop-outs. The idea behind this approach is to identify whether the audience is really in tune with the story that is being shown. A well-trained storyteller can tell whether the audience is bored or enjoying the story (or presentation) just by looking at the spectators. During story writing, an author can define dramatic curves to describe the emotions of each scene. These dramatic curves define how the scene should be played, its screenshots, lighting and soundtrack. After the current scene, each new one has a new dramatic curve which adds to the context of the story [START_REF] Araujo | Verification of temporal constraints in continuous time on nondeterministic stories[END_REF]. The reactions of the audience are related to the dramatic curves of the scene.
If the audience's readings of the emotions are similar to the emotions defined by the dramatic context, then there is a connection (empathy) between the audience and what is being watched [START_REF] Jones | The actor and the observer: Divergent perceptions of the causes of behavior[END_REF][START_REF] Kallias | Individual Differences and the Psychology of Film Preferences[END_REF]. In this work, we propose a methodology to create stories tailored to an audience based on the personality traits and preferences of each individual. The global objective is to maximize the expected audience reward. This involves considering a middle-ground solution based on the individual options of the audience group. In addition, individuals may have some kind of relationship with others, characterizing an interaction among the audience and ultimately influencing their decisions. The proposed model addresses all steps in the quest to please the audience. It infers what the preferences are, computes the scene rewards for all individuals, estimates their choices independently and in group, and allows Interactive Storytelling systems to find the story that maximizes the expected audience reward. This paper is organized as follows. Section 2 discusses emotion modeling and its application to audience characterization and behavior expectation. The following section presents the main aspects of automated storytelling. Section 4 is dedicated to modeling the expected audience reward maximization. The interaction of individuals in the audience is the object of Section 5. Section 6 proposes a heuristic to solve the optimization model of Section 5. Analysis and conclusions are drawn in the last section. Emotions and Audience During film screening, the audience gets emotionally involved with the story. Individuals in the audience react according to their preferences. When an individual enjoys what is staged, he/she tends to reflect the same emotions that are proposed by the story. The greater the identification between the individual and the story, the greater the emotions experienced. As an audience can be composed of individuals who have very different preferences, it is important that the storyteller identifies a middle ground to please as many as possible. Knowing some personality traits of each individual helps to bring the story closer to the audience. Model of Emotions The emotional notation used to describe the scenes of a story is based on the model of "basic emotions" proposed by Robert Plutchik [START_REF] Plutchik | The emotions: Facts, theories, and a new model[END_REF][START_REF] Plutchik | A general psychoevolutionary theory of emotions[END_REF]. Plutchik's model is based on psychoevolutionary theory. It assumes that emotions are biologically primitive and that they evolved in order to improve animal reproductive capacity. Each of the basic emotions demonstrates a high-survival behavior, such as the fear that inspires the fight-or-flight response. In Plutchik's approach, the basic emotions are represented by a three-dimensional circumplex model where emotional words are plotted based on similarity [START_REF] Plutchik | The nature of emotions[END_REF]. Plutchik's model is often used in computer science in different versions, for tasks such as affective human-computer interaction or sentiment analysis. It is one of the most influential approaches for classifying emotional responses in general [START_REF] Ellsworth | Appraisal processes in emotion[END_REF].
Each sector of the circle represents an intensity level for each basic emotion: the first intensity is low, the second is normal and the third intensity is high. In each level, there are specific names according to the intensity of the emotion; for example, serenity at low intensity corresponds to joy, and ecstasy to its higher intensity. Plutchik states that basic emotions can be combined in pairs to produce complex emotions. These combinations are classified into four groups: Primary Dyads (experienced often), Secondary Dyads (sometimes perceived), Tertiary Dyads (rare) and Opposite Dyads (cannot be combined). Primary Dyads are obtained by combining adjacent emotions, e.g., Joy + Trust = Love. Secondary Dyads are obtained by combining emotions that are two axes distant, for example, Joy + Fear = Excitement. Tertiary Dyads are obtained by combining emotions that are three axes distant, for example, Joy + Surprise = Doom. Opposite Dyads are on the same axis but on opposite sides; for example, Joy and Sorrow cannot be combined, or cannot occur simultaneously [START_REF] Plutchik | The nature of emotions[END_REF]. Complex Emotions - Primary Dyads: anticipation + joy = optimism; joy + trust = love; trust + fear = submission; fear + surprise = awe; surprise + sadness = disappointment; sadness + disgust = remorse; disgust + anger = contempt; anger + anticipation = aggression. This model assumes that there are eight primary emotions: Joy, Anticipation, Trust, Fear, Disgust, Anger, Surprise and Sadness. It is possible to adapt Plutchik's model into a 4-axis structure of emotions [START_REF] Rodrigues | Um Sistema de Geração de Expressões Faciais Dinâmicas em Animações Faciais 3D com Processamento de Fala[END_REF][START_REF] Araujo | Verification of temporal constraints in continuous time on nondeterministic stories[END_REF] as shown in Figure 1. Plutchik's model describes a punctual emotion and is used to represent an individual or a scene at a specific moment. In order to describe the emotions of a scene, Plutchik's model is converted into a time series of emotions called a "dramatic curve". The dramatic curve describes the sequence of emotions in a scene with an interval of one second per point. It follows the 4-axis structure based on Plutchik's wheel and maps the variation of events in a story. Audience Model In Psychology, there are many models to map and define an individual's personality traits. One of the most widely used is the Big Five or Five Factor Model, developed by Ernest Tupes and Raymond Christal in 1961 [START_REF] Tupes | Recurrent personality factors based on trait ratings[END_REF]. This model was forgotten until achieving notoriety in the early 1980s [START_REF] Rich | User modeling via stereotypes[END_REF], and it defines a personality through five factors based on a linguistic analysis. It is also known by the acronym O.C.E.A.N., which refers to the five personality traits. The personality of an individual is analyzed and defined through answers to a questionnaire that must be completed and verified by factor analysis. Responses are converted to values that define each of the factors on a scale of 0 to 100. In this work, only two traits are used to create the individual profile: Openness to experience O ∈ [0, 1] and Agreeableness (Sociability) A ∈ [0, 1]. Each personality trait is described as follows: Openness to experience Openness reflects how much an individual likes and seeks new experiences.
Individuals high in openness are motivated to seek new experiences and to engage in self-examination. In contrast, closed individuals are more comfortable with familiar and traditional experiences. They generally do not depart from their comfort zone. [START_REF] John | The big-five trait taxonomy: History, measurement, and theoretical perspectives[END_REF] Agreeableness (Sociability) Agreeableness reflects how much an individual likes and tries to please others. Individuals high on agreeableness are perceived as kind, warm and cooperative. They tend to demonstrate higher empathy levels and believe that most people are decent, honest and reliable. On the other hand, individuals low on agreeableness are generally less concerned with others' wellbeing and demonstrate less empathy. They tend to be manipulative in their social relationships and more likely to compete than to cooperate. [START_REF] John | The big-five trait taxonomy: History, measurement, and theoretical perspectives[END_REF] Concept of Empathy According to Davis [START_REF] Davis | A multidimensional approach to individual differences in empathy[END_REF], "empathy" is defined by spontaneous attempts to adopt the perspectives of other people and to see things from their point of view. Individuals who share higher empathy levels tend to have similar preferences and do things together. In that work, he proposes a scale of "empathy" to measure the tendency of an individual to identify himself with characters in movies, novels, plays and other fictional situations. Also, the emotional influence of a movie on the viewer can be considered "empathy". It is possible to identify a personality based on the relationship between an individual and his favorite movies and books. Furthermore, it is possible to suggest new books or movies just by knowing the personality of an individual [START_REF] Kallias | Individual Differences and the Psychology of Film Preferences[END_REF]. Following these ideas, it is possible to relate empathy to a rating index. During an exhibition, if the viewer is enjoying what he is watching, there is empathy between the show and the spectator. This information is used to predict what the spectator likes and dislikes. Interactive Storytelling In recent years, there have been some efforts to build storytelling systems in which authors and audience engage in a collaborative experience of creating the story. Furthermore, the convergence between video games and film-making can give freedom to the player's experience and generate stories tailored to a spectator. Interactive Storytelling applications simulate a digital storyteller. They transform the narrative from a linear to a dialectical form, creating new stories based on the audience by monitoring their reactions, interactions or suggestions for new events in the story. [START_REF] Karlsson | Applying a planrecognition/plan-generation paradigm to interactive storytelling[END_REF] The proposed storytelling system should be able to generate different stories adapted to each audience, based on a previously computed sequence of events and knowledge of the preferences of each individual in the audience. Story Model A story is a single sequence of connected events which represents a narrative. The narrative context may be organized as a decision tree to define different possible endings. During the story writing, the author can define many different endings or sequences for each event (or scene).
Each ending option forwards to a new scene and then to new ending options, until the story ends. For example, Figure 2 illustrates a story organized as a decision tree. To evaluate the proposed model and algorithm, the tests are performed using an event tree corresponding to a non-linear variation of the fairy tale Little Red Cap [START_REF] Araujo | Verification of temporal constraints in continuous time on nondeterministic stories[END_REF]. The event tree is described in Table 1 and presented in Figure 3. In some events, there are possible branches, such as the moment when the girl meets the wolf in the forest. The original story is represented by the sequence of events π : {EV1, EV2, EV3, EV4, EV5, EV7, EV8, EV9, EV10, EV17, EV11, EV13, EV15}. Each scene describes what occurs to the characters and the story, and also has an emotional description called a "dramatic curve". The dramatic curves are based on Plutchik's wheel of emotions and describe how emotions should manifest during the scene. Soundtracks, screenshots and lighting can be chosen based on the dramatic curves. The sequence of scenes tells the story, describes a complete emotional curve and "tags" the story with a "genre". Modeling Emotions to Events During the story writing, the scenes are described as a tree of events. Each event in the tree is associated with a dramatic curve and must be modeled containing the following information:
- Name: unique name for the event (each event has a unique name);
- Text: describes what happens during the event;
- Dramatic Curves: emotional time series presented in Figure 1: Joy/Sadness (axis x), Fear/Anger (axis y), Surprise/Anticipation (axis w) and Trust/Disgust (axis z).
The tree of events has different paths, connecting to different future events, until the end of the story. When the story is told, a single branch is selected for each event. The dramatic curves representing the original story sequence of events are shown in Figure 4. Table 2 illustrates the emotions involved in each of the 20 events present in the story. Considering the personality traits, individuals who score high in "openness" like a greater variety of genres (often opposed) in comparison to others. Individuals low in "openness" generally prefer the same things and choose the same genres. In this case, there are fewer options to please individuals low in "openness". The task of selecting a scene that pleases a person is to find which of the possible options approaches their preferences. The selection task becomes difficult when we try to find the best option that would please the most people in a group. In this case, it is necessary to consider other information about individuals, such as "agreeableness" and the empathy between individuals. Individuals who score high in "agreeableness" try to quickly approach the choices of others and have more patience than others. They sometimes prefer to accept others' preferences and decisions just to please them all. Empathy indicates the level of the relationship between individuals. For example, when two people like each other, they may want to do things together, which indicates a higher level of empathy. On the other hand, people who want to avoid each other have a low level of empathy in this relationship. Generally, individuals in a relationship with high empathy choose the same options or a middle ground. Maximizing the Audience Given a tree of events, each ending (a leaf of the tree) uniquely determines a sequence of scenes, i.e., the story to tell.
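A minimal data-structure sketch of such an event tree (our own illustration; the per-second 4-axis tuples and their value range are assumptions, while the Name/Text/Dramatic-Curves fields and the tree of follow-up events come from the description above) could look like this:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Event:
    name: str                          # unique event name, e.g. "EV4"
    text: str                          # what happens during the event
    # One point per second of the scene on the four axes of Figure 1:
    # (joy/sadness, fear/anger, surprise/anticipation, trust/disgust),
    # here assumed to be signed values in [-1, 1].
    dramatic_curve: List[Tuple[float, float, float, float]] = field(default_factory=list)
    children: List["Event"] = field(default_factory=list)   # possible follow-up events

# A told story is one root-to-leaf path in this tree, e.g. the original
# Little Red-Cap sequence EV1, EV2, EV3, EV4, EV5, EV7, ...
```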
Finding the most rewarding path for a given audience amounts to evaluating a utility function that captures how the audience feels rewarded by the scenes and also the choices the audience makes at each branch. The tree of events can be represented as follows. Let S be the finite set of events (scenes) of a story. Let also Γ+(s) be the subset of S containing the child nodes of node s. The utility function is given by E(s, i), which determines the expected value of state s for individual i and represents a measure of similarity between 0 and 1. Finally, let Prob(s_{l-1}, s_l, i) be the probability with which individual i chooses state s_l to follow state s_{l-1}. Remark that the probabilities Prob(s_{l-1}, s_l, i) must add up to one for each branch and for each individual, since one branch must be selected at each state. Consider now a sequence of states π = {s_0, ..., s_k} that represents a path from the root to a leaf. The proposed model evaluates a path by computing its expected utility, which is given by the expression:

f(\pi) = \sum_{l=1}^{k} \sum_{i \in I} E(s_l, i) \cdot Prob(s_{l-1}, s_l, i)   (1)

Let R(s) be the maximum expected utility that can be obtained starting from state s. The following recursion determines R(s):

R(s) = \begin{cases} \sum_{i \in I} E(s, i) \cdot Prob(p(s), s, i), & \text{for } s \text{ a leaf} \\ \sum_{i \in I} E(s, i) + \max_{s' \in \Gamma^+(s)} R(s'), & \text{for } s \text{ the root} \\ \sum_{i \in I} E(s, i) \cdot Prob(p(s), s, i) + \max_{s' \in \Gamma^+(s)} R(s'), & \text{otherwise} \end{cases}   (2)

where p(s) is the predecessor of s. By computing R(s_0), the root's reward, an optimal sequence π*, with maximum expected reward, can be retrieved in a straightforward way. We conclude the model by proposing an evaluation for the individual probabilities of choice on each story branch. This is done by assuming this probability is proportional to the expected individual reward of the branches. This leads to the expression:

Prob(s, s', i) = \frac{IR(s', i)}{\sum_{s'' \in \Gamma^+(s)} IR(s'', i)}   (3)

where IR(s, i) is the expected reward at state s for individual i, which is given by:

IR(s, i) = E(s, i) + \max_{s' \in \Gamma^+(s)} IR(s', i) \cdot Prob(s, s', i)   (4)

This model allows determining the best sequence of scenes for an audience provided there is no interaction within the audience. We address this case in the following section. To create tailored stories for an individual, it is only necessary to check what he/she likes most, based on his/her own probabilities; but when an individual participates in a group, he/she needs to settle on a middle ground. The dynamic of choosing the best story for an audience is based on the fact that the individuals will watch the same story, share a minimal intimacy and want to spend some time together. In a similar way, it is possible to say that they are trying to watch television and need to choose a television program that pleases the entire group. During this interaction, each individual tries to convince others about his preferences. Some individuals may agree with these suggestions based on the relationship they share, but others may introduce some limits. After some rounds, some individuals give in and accept to move toward others' preferences [START_REF] Tortosa | Interpersonal effects of emotion in a multi-round trust game[END_REF]. The decision to accept others' preferences does not eliminate personal preferences, but introduces a new aspect to the options. According to the proposed model, some options that originally are not attractive will be chosen because of the induced social reward imposed by the probability function of choosing them. This means that for some individuals, it is better to keep the group together than to take advantage of their preference.
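Returning to Eqs. (2)-(4), the recursion can be sketched as follows (our own illustration, not the authors' implementation; E and the children function Γ+ are assumed to be callables, leaves are assumed to have IR(s, i) = E(s, i), and memoization is added for efficiency):

```python
def make_ir(E, children):
    """IR(s, i) from Eq. (4), with Prob(s, s', i) taken from Eq. (3)."""
    cache = {}

    def ir(s, i):
        if (s, i) not in cache:
            succ = children(s)
            if not succ:
                cache[(s, i)] = E(s, i)
            else:
                vals = [ir(c, i) for c in succ]
                total = sum(vals) or 1.0          # denominator of Eq. (3)
                cache[(s, i)] = E(s, i) + max(v * (v / total) for v in vals)
        return cache[(s, i)]

    return ir

def best_reward(s, parent, audience, E, children, ir):
    """R(s) from Eq. (2): expected audience reward of the subtree rooted at s."""
    if parent is None:                             # root case
        gain = sum(E(s, i) for i in audience)
    else:
        gain = 0.0
        siblings = children(parent)
        for i in audience:
            total = sum(ir(c, i) for c in siblings) or 1.0
            gain += E(s, i) * (ir(s, i) / total)   # Prob(p(s), s, i)
    succ = children(s)
    if not succ:                                   # leaf case
        return gain
    return gain + max(best_reward(c, s, audience, E, children, ir)
                      for c in succ)
```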
Furthermore, as explained in section 2.2, individuals high in "openness" do not care so much about their own preferences because they like to experiment with new possibilities. They may be convinced by friends or relatives and will tend to support their preferences. In order to model the audience behavior, we propose an algorithm based on a spring-mass system. Consider that all preferences are modeled in the real coordinate space (R^2) and each individual of the audience is represented by a point positioned at his/her preferences. Each point (individual) is connected to n springs (where n is the number of individuals): one spring is connected to its original position and the other n-1 springs are connected to the other points. Then, we have a total of n(n+1)/2 springs. The objective function aims to bring the points closer together, considering the constraints of the springs. Each spring is modeled based on the individuals' personality traits and the relationship levels between them. Let K_ii = (1 - O_i) be the openness level of each individual i and K_ij = A_ij be the agreeableness level for each pair of individuals i and j. In this model, we are assuming that "agreeableness" may also be influenced by the relationship between i and j, and it is possible to describe an individual's resistance to others' preferences. After some experiments, we realized that it is possible to start an audience from A_ij = A_i and fine-tune A_ij after some rounds. Given e_ij ∈ [-1, 1], the empathy level between each pair of individuals, and x_i^0, the original position in space of each individual i, let d_ij^0 = ||x_i^0 - x_j^0|| be the original distance between individuals and let L_ij = (1 - e_ij) · d_ij^0 be a weighted empathy level. The objective of the following model is to find the final positions x_i minimizing the distances between the individuals d_ij, weighted by their agreeableness level K_ij and considering L_ij.

\min \sum_{i \in A} \sum_{j \in A: j \neq i} K_{ij} (d_{ij} - L_{ij})^2 + \sum_{i \in A} K_{ii} d_{ii}^2   (5)

subject to

d_{ij} = \| x_i - x_j \| \quad \forall i, j \in A, i \neq j   (6)
d_{ii} = \| x_i - x_i^0 \| \quad \forall i \in A   (7)

The constraints (6) link the distance variables d_ij with the coordinate variables x_i and x_j when individuals i and j are different. Constraints (7) are used to obtain the distance d_ii that each individual has moved from its original position. Figure 5 describes the operation of the spring-mass system with 2 individuals. Since this model is not linear, it is not possible to use a linear solver to obtain the optimal solution. Therefore, we use a meta-heuristic approach based on simulated annealing to obtain a good approximate solution. The simulated annealing algorithm is presented in Section 6. 6 Solving the audience interaction model Simulated annealing is a meta-heuristic for optimization problems based on thermodynamics. Given a large solution space, it solves the optimization problem by finding a good solution near the global optimum [START_REF] Dréo | Metaheuristics for Hard Optimization: Methods and Case Studies[END_REF]. At each iteration of the algorithm, it changes the current solution within a neighborhood and takes the new solution as the current one if there is any improvement on the objective function or, if there is no improvement, it may take it based on a randomized criterion. The neighborhood used for the audience problem is defined by all possible movements of each individual in order to minimize the distances between all individuals according to the spring constraints.
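A direct way to evaluate the objective of Eqs. (5)-(7) in code is sketched below (our own illustration; the dict-based data layout is an assumption):

```python
import numpy as np

def spring_objective(x, x0, K, L, A):
    """Objective of Eq. (5) with the distances of constraints (6)-(7).

    x, x0 : dicts mapping each individual to a 2D numpy position (current / original)
    K     : stiffness terms, K[(i, j)] = A_ij for pairs and K[(i, i)] = 1 - O_i
    L     : weighted empathy rest lengths, L[(i, j)] = (1 - e_ij) * d0_ij
    A     : the list of individuals in the audience
    """
    value = 0.0
    for i in A:
        for j in A:
            if i != j:
                d_ij = np.linalg.norm(x[i] - x[j])        # constraint (6)
                value += K[(i, j)] * (d_ij - L[(i, j)]) ** 2
        d_ii = np.linalg.norm(x[i] - x0[i])               # constraint (7)
        value += K[(i, i)] * d_ii ** 2
    return value
```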
Let \vec{a} be the current position of individual a, \vec{b} be the reference position based on all relationships between the individuals, and s_a be the "agreeableness" level of the personality of individual a. It is possible to calculate the step of a movement δ_x for individual a using equations (8)-(10):

\vec{b} = (b_x, b_y), \quad b_x = \frac{\sum_{j=1}^{n} a_x^j \, e_{ij}}{\sum_{j=1}^{n} e_{ij}}, \quad b_y = \frac{\sum_{j=1}^{n} a_y^j \, e_{ij}}{\sum_{j=1}^{n} e_{ij}}   (8)

\alpha = (b_y - a_y) / (b_x - a_x)   (9)

\delta_x = \begin{cases} -\sqrt{s_a^2 / (\alpha^2 + 1)}, & a_x > b_x \\ +\sqrt{s_a^2 / (\alpha^2 + 1)}, & \text{otherwise} \end{cases}   (10)

The final position after moving individual a is given by \vec{a}_{final} as follows:

\vec{a}_{final} = (a_x + \delta_x, \; a_y + \delta_x \cdot \alpha)   (11)

The simulated annealing method is shown in Algorithm 1. The algorithm receives as input an initial solution S0, a limit on the number of iterations M, a limit on the number of solution movements per iteration P and a limit on the number of solution improvements per iteration L. During its initialization, the iteration counter starts at 1, the best solution S starts equal to S0 and the current temperature T is obtained from the function InitialTemp(), which returns a value based on the instance being solved. On each iteration, the best solution is changed within the neighborhood by the function Change(S) and the improvement is computed as ∆Fi. This solution is then accepted or not as the new best solution and, at the end of the iteration, the temperature is updated by the factor α.

Algorithm 1 Simulated Annealing
procedure SA(S0, M, P, L)
    j ← 1
    S ← S0
    T ← InitialTemp()
    repeat
        i ← 1
        nSuccess ← 0
        repeat
            Si ← Change(S)
            ∆Fi ← f(Si) − f(S)
            if (∆Fi ≤ 0) or (exp(−∆Fi/T) > Rand()) then
                S ← Si
                nSuccess ← nSuccess + 1
            end if
            i ← i + 1
        until (nSuccess = L) or (i > P)
        T ← α·T
        j ← j + 1
    until (nSuccess = 0) or (j > M)
    Print(S)
end procedure

The proposed methodology was initially applied to students of our graduate program in order to evaluate the emotional characteristics of the individuals. This allowed a positive assessment of the techniques and validated the initial hypothesis. Then, the final experiments were conducted using 20 generated audience instances with 20 individuals each, divided into three groups: 8 entirely mixed audiences with 60% of individuals supporting an emotion, 8 audiences with similar individuals and 4 mixed audiences with one opinion leader. The opinion leader instances were generated by describing an individual who is influential to the others. This starting point permitted a qualitative evaluation of the application of the whole methodology based on discussion among those involved in the experience. The resulting story endings for each audience are presented in Table 3. Stories generated for mixed audiences before interaction considered average preferences, while stories generated after interaction (SA) tend to select the majority preference. The proposed Red Cap story has a natural tendency for Sadness + Angry endings (EV12, EV18, EV19, EV20), since there are more final events with these emotional features than Joy + Angry endings (EV15 only). However, the proposed method was able to select expected story endings according to the audience preferences. Also, the preliminary evaluation of an opinion leader suggested there is a sound basis for results that may effectively converge to the choice of audience-rewarding paths. The next step amounts to carrying out more thorough and relevant experiments, which requires not only larger groups but also stories that truly draw the audience.
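For completeness, the per-individual move of Eqs. (8)-(11), used as the neighborhood inside Algorithm 1, could be sketched as follows; this is our own reading of the garbled equations (in particular the square root in Eq. (10)), so it should be treated as an interpretation rather than the authors' exact code:

```python
import math

def move_individual(a, others, empathy, s_a):
    """Step individual `a` toward the empathy-weighted position of the others.

    a       : (ax, ay) current position of the individual
    others  : positions (x, y) of the other individuals
    empathy : empathy levels e_ij aligned with `others`
    s_a     : agreeableness level of `a`, used here as the step size
    """
    total_e = sum(empathy) or 1.0
    bx = sum(p[0] * e for p, e in zip(others, empathy)) / total_e   # Eq. (8)
    by = sum(p[1] * e for p, e in zip(others, empathy)) / total_e
    if bx == a[0]:
        return a                    # avoid division by zero in Eq. (9)
    alpha = (by - a[1]) / (bx - a[0])                                # Eq. (9)
    step = math.sqrt(s_a ** 2 / (alpha ** 2 + 1.0))
    dx = -step if a[0] > bx else step                                # Eq. (10)
    return (a[0] + dx, a[1] + dx * alpha)                            # Eq. (11)
```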
In this preliminary analysis, an evaluation of the model parameters also allowed us to conclude that their determination may lead to conditions which can represent a wide range of groups, thus leading to a representative model. Our evaluation is that the proposed methodology can still incorporate more factors of emotional behavior, group interaction and storytelling aspects. The goal is to experiment thoroughly on a wide spectrum of stories and audiences.
Fig. 1. Simplified 4-axis structure - families of emotions.
Fig. 2. Story as a Decision Tree.
Fig. 3. Little Red-Cap story as a decision tree.
Fig. 4. Dramatic Curves of the Little Red-Cap original sequence of scenes.
Fig. 5. Spring-mass example with 2 individuals.
Table 1. Little Red-Cap story events
EV1: Mother warns the girl
EV2: Girl leaves her home
EV3: Girl is in the forest
EV4: Girl finds the wolf in the forest
EV5: Wolf cheats the girl
EV6: Wolf attacks the girl
EV7: Wolf goes to Grandma's house
EV8: Wolf swallows Grandma
EV9: Girl arrives at Grandma's house
EV10: Girl speaks with Wolf
EV11: Girl escapes
EV12: Wolf devours Girl
EV13: Girl finds the Hunter
EV14: Wolf gets the girl
EV15: Hunter kills the wolf and saves Grandma
EV16: Wolf kills the Hunter
EV17: Wolf attacks the Girl at Grandma's house
EV18: Wolf eats the Girl after his escape
EV19: Wolf devours the Girl in Grandma's house
EV20: Wolf devours the Girl in the Forest
Table 2. Dramatic Curves for Little Red-Cap events
EV1: Joy + Surprise
EV2: Joy + Anticipation
EV3: Trust + Surprise
EV4: Fear + Surprise
EV5: Fear + Trust
EV6: Angry + Anticipation
EV7: Sadness + Anticipation
EV8: Angry + Surprise
EV9: Joy + Fear
EV10: Trust + Surprise
EV11: Joy + Anticipation
EV12: Sadness + Angry
EV13: Joy + Anticipation
EV14: Angry + Disgust
EV15: Joy + Angry
EV16: Sadness + Surprise
EV17: Angry + Anticipation
EV18: Sadness + Angry
EV19: Sadness + Angry
EV20: Sadness + Angry
3.3 Uncovering audience preferences Every time an individual likes a scene or story, he/she indicates what he/she likes and what he/she does not. This information is then used to analyze and determine the individual's preferences. The information from the dramatic curve indicates the emotion that was liked and is used to classify genres. The favorite scenes of an individual are used to ascertain which emotions stand out. The genres of the stories are set primarily by the main emotions of the scenes. Through readings of the emotions which stand out, it is possible to know which genres the individual prefers and which scenes of a new story are emotionally similar.
Table 3. Selected Story Endings for Audiences
Emotion | Mixed | SA | Similar | Opinion Leader
Trust | EV 12 | EV 15 | EV 15 short | -
Surprise | EV 12 | EV 12 | EV 20 short | -
Joy | EV 12 | EV 15 | EV 15 | EV 15
Sadness | EV 12 | EV 12 | EV 12 short | EV 12
Disgust | EV 12 | EV 12 | EV 20 short | -
Anger | EV 12 | EV 18 | EV 18 | EV 18
Fear | EV 12 | EV 12 | EV 12 | EV 12
Anticipation | EV 12 | EV 15 | EV 15 short | -
Footnote 1: An audience is a set of individuals.
Footnote 2: We considered that the empathy from others towards an opinion leader is near to 1, but his/her empathy towards others is low.
Acknowledgements This work was partially supported by CNPq (National Council for Scientific and Technological Development, linked to the Ministry of Science, Technology, and Innovation), CAPES (Coordination for the Improvement of Higher Education Personnel, linked to the Ministry of Education), FINEP (Brazilian Innovation Agency), and ICAD/VisionLab (PUC-Rio).
31,792
[ "1030237", "1030238", "1011687" ]
[ "362752", "362752", "362752" ]
01713511
en
[ "chim" ]
2024/03/05 22:32:10
2018
https://univ-rennes.hal.science/hal-01713511/file/Deunf%20et%20al_Anodic%20oxidation%20of%20p-phenylenediamines%20in%20battery%20grade%20electrolytes.pdf
Elise Deunf Franck Dolhem Dominique Guyomard Jacques Simonet Philippe Poizot email: [email protected] Anodic oxidation of p-phenylenediamines in battery grade electrolytes Keywords: phenylenediamine, cyclic voltammetry, redox-active amine, organic batteries, PF6 decomposition, lithium hexafluorophosphate The use of anion-inserting organic electrode materials represents an interesting opportunity for developing 'metal-free' rechargeable batteries. Recently, crystallized conjugated diamines have emerged as new host materials able to accommodate anions upon oxidation at potentials higher than 3 V vs. Li+/Li0 in carbonate-based battery electrolytes. To further investigate the electrochemical behavior of such promising systems, comparison with electroanalytical data on soluble forms of conjugated diamines measured in battery grade electrolytes appeared quite useful. However, the literature on the topic is generally poor since such electrolyte media are not common in molecular electrochemistry. This contribution aims at providing relevant data on the characterization by cyclic voltammetry of unsubstituted, diphenyl-substituted and tetramethyl-substituted p-phenylenediamines. Basically, these three molecules revealed two reversible one-electron reactions upon oxidation, corresponding to the electrogenerated radical cation and dication, respectively, combined with the association of electrolyte anions. Introduction Global warming, fossil fuel depletion and rapid population growth are confronting our technology-oriented society with significant challenges, notably in the field of power engineering. One of the priorities in this domain is to promote reliable, safe but also low-polluting electrochemical storage devices for various practical applications from the mWh to the MWh range. Since the invention of the first rechargeable battery in 1859 by G. Planté (lead-acid cell), the current manufacturing of batteries is still dominated by the use of redox-active inorganic species, but organic counterparts appear today as a promising alternative displaying several advantages such as low cost, environmental friendliness and structural designability [START_REF] Poizot | Clean energy new deal for a sustainable world: from non-CO 2 generating energy sources to greener electrochemical storage devices[END_REF][START_REF] Liang | Organic electrode materials for rechargeable lithium batteries[END_REF][START_REF] Song | Towards sustainable and versatile energy storage devices: an overview of organic electrode materials[END_REF][START_REF] Haeupler | Carbonyls: Powerful organic materials for secondary batteries[END_REF][START_REF] Zhao | Rechargeable lithium batteries with electrodes of small organic carbonyl salts and advanced electrolytes[END_REF][START_REF] Schon | The rise of organic electrode materials for energy storage[END_REF][START_REF] Muench | Polymer-based organic batteries[END_REF][START_REF] Zhao | Advanced organic electrode materials for rechargeable sodiumion batteries[END_REF][START_REF] Winsberg | Redox-flow batteries: from metals to organic redox-active materials[END_REF].
For instance, the operating redox potential of organic electrodes can be widely tuned by the choice of (i) the electroactive functional group (both n- or p-type [START_REF] Song | Towards sustainable and versatile energy storage devices: an overview of organic electrode materials[END_REF][START_REF] Gottis | Voltage gain in lithiated enolate-based organic cathode materials by isomeric effect[END_REF]), (ii) the molecular skeleton, and (iii) the substituent groups. Organic structures based on the conjugated carbonyl/enolate redox-active moiety represent probably the most studied family of n-type organic electrode materials, especially for developing Li/Na-based rechargeable systems. Conversely, p-type organic electrodes, which involve an ionic compensation with anions [START_REF] Song | Towards sustainable and versatile energy storage devices: an overview of organic electrode materials[END_REF][START_REF] Muench | Polymer-based organic batteries[END_REF][START_REF] Gottis | Voltage gain in lithiated enolate-based organic cathode materials by isomeric effect[END_REF], make the development of 'molecular' ion batteries possible [START_REF] Yao | Molecular ion battery: a rechargeable system without using any elemental ions as a charge carrier[END_REF] since numerous metal-free anions do exist. In this regard, our group has recently reported (for the first time) that crystallized conjugated diamines can accommodate anions at E > 3 V vs. Li+/Li0 in the solid state with an overall reversible two-electron reaction, making them interesting for positive electrode applications [START_REF] Deunf | Reversible anion intercalation in a layered aromatic amine: A High-voltage host structure for organic batteries[END_REF][START_REF] Deunf | Solvation, exchange and electrochemical intercalation properties of disodium 2,5-(dianilino)terephthalate[END_REF][START_REF] Deunf | A dual-ion battery using diamino-rubicene as anion-inserting positive electrode material[END_REF]. In light of the opportunity offered by this new family of insertion compounds, and to go further in the understanding of the electrochemical processes, we revisited the anodic oxidation of p-phenylenediamine derivatives by cyclic voltammetry, but measured in typical (aprotic) battery grade electrolytes in which less than 0.1 ppm of both O2 and H2O are guaranteed. More specifically, we report herein the electrochemical features of three selected phenylenediamines (i.e., N,N'-p-phenylenediamine, PD; N,N'-diphenyl-p-phenylenediamine, DPPD; N,N,N',N'-tetramethyl-p-phenylenediamine, TMPD - Figure 1a) solubilized at millimolar concentrations in different electrolyte formulations including LiPF6 as the most popular supporting salt used in Li-ion batteries by manufacturers and researchers. Experimental Chemicals The different electrolyte formulations were prepared in an Ar-filled glovebox (MBRAUN) containing less than 0.1 ppm of both O2 and H2O from lithium perchlorate (LiClO4), lithium hexafluorophosphate (LiPF6), propylene carbonate (PC), ethylene carbonate (EC) and dimethylcarbonate (DMC) purchased from BASF (battery grade) and used as received. Lithium trifluoromethanesulfonate (LiOTf, 99.995%, Aldrich) was dried at 100 °C under vacuum for 15 h prior to use. The common "LP30" battery grade electrolyte (i.e., LiPF6 1 M in EC:DMC 1:1 vol./vol.) was directly employed as received from Novolyte. Amines were purchased from Aldrich with the following purities: N,N'-p-phenylenediamine PD (99%), N,N,N',N'-tetramethyl-p-phenylenediamine TMPD (≥ 97%), N,N'-diphenyl-p-phenylenediamine DPPD (98%), and triethylamine (Et3N, ≥99%). Electrochemical procedures Cyclic voltammetric (CV) experiments were recorded on an SP-150 potentiostat/galvanostat (Bio-Logic S.A., Claix, France).
All electrochemical experiments were systematically conducted with freshly prepared electrolyte solutions (except for the "LP30" battery grade electrolyte, Novolyte) in a conventional three-electrode setup (V = 10 mL) placed inside an Ar-filled glovebox (MBRAUN) containing less than 0.1 ppm of both O2 and H2O. The working electrode was a commercial platinum disk microelectrode with a diameter of 1.6 mm (ALS Japan). Facing the working electrode, a large platinum wire was used as the counter electrode. An Ag+/Ag0 reference electrode, made of a fritted glass tube filled with an AgNO3 10 mM solution in acetonitrile [START_REF] Bard | Electrochemical Methods: Fundamentals and Applications[END_REF], was systematically used. However, the reported potentials are also given against the Li+/Li0 reference electrode for the convenience of the battery community. This second reference electrode, made of metallic lithium attached to a Pt wire, was experimentally checked versus the Ag+/Ag0 reference electrode in each studied electrolyte, giving a correction of the measured potentials of +3.6 V. Results and discussion The typical electrochemical activity of the simple PD molecule is first reported, as it is a representative member of this family of redox-active compounds. In addition, the PC/LiClO4 1 M electrolyte was employed first in order to be aligned with our former battery cycling tests performed on crystallized p-phenylenediamine derivatives [START_REF] Deunf | Reversible anion intercalation in a layered aromatic amine: A High-voltage host structure for organic batteries[END_REF][START_REF] Deunf | Solvation, exchange and electrochemical intercalation properties of disodium 2,5-(dianilino)terephthalate[END_REF]. Basically, the oxidation of PD in such an electrolyte shows two anodic peaks (I, II) located at 3.58 and 4.07 V vs. Li+/Li0, respectively (Figure 1b). When reversing the scan, two corresponding cathodic peaks are observed. The peak-to-peak separation values for both steps (I/I', II/II') are equal to 60 mV, which indicates the occurrence of two fully reversible one-electron processes. The anodic events are assigned to the electrogeneration of the radical cation PD•+ at peak I, followed by the dicationic form (PD2+) at peak II. The peak currents at both the anodic and cathodic waves vary linearly with the square root of the scan rate (Figure 1c), which confirms that the two electrochemical processes are under diffusion control, as expected for reversible systems [START_REF] Bard | Electrochemical Methods: Fundamentals and Applications[END_REF][START_REF] Batchelor-Mcauley | Voltammetry of multi-electron electrode processes of organic species[END_REF]. At this point it should be recalled that the typical mechanisms for the electrochemical oxidation of phenylenediamines have already been established in common aprotic solvents for molecular electrochemistry [START_REF]The solvent effect on the electro-oxidation of 1,4-phenylenediamine.
The influence of the solvent reorientation dynamics on the one-electron transfer rate[END_REF][START_REF] Bewick | Anodic oxidation of aromatic nitrogen compounds: Spectroelectrochemical studies of EE and EECr processes with a coupled redox reaction[END_REF][START_REF] Fernández | Determination of the kinetic and activation parameters for the electro-oxidation of N,N,N',N'-tetramethyl-p-phenylenediamine (TMPD) in acetonitrile (ACN) by chronocoulometry and other electrochemical techniques[END_REF][START_REF] Santana | In situ UV-vis and Raman spectroscopic studies of the electrochemical behavior of N,N'-diphenyl-1,4phenylenediamine[END_REF][START_REF] Maleki | Mechanism diversity in anodic oxidation of N,N-dimethyl-pphenylenediamine by varying pH[END_REF] and, very recently, in the battery grade PC/LiBF4 1 M electrolyte for non-aqueous redox flow batteries [START_REF] Kim | A comparative study on the solubility and stability of pphenylenediamine-based organic redox couples for nonaqueous flow batteries[END_REF]. Their oxidation proceeds through two reversible one-electron transfers with the formation of successive stable radical cation and dication species. However, in the case of primary and secondary amines - meaning the existence of labile protons - this behavior can be impacted by the presence of acid-base species in the electrolyte [START_REF] Santana | In situ UV-vis and Raman spectroscopic studies of the electrochemical behavior of N,N'-diphenyl-1,4phenylenediamine[END_REF][START_REF] Maleki | Mechanism diversity in anodic oxidation of N,N-dimethyl-pphenylenediamine by varying pH[END_REF]. Indeed, these labile protons are easily involved in an acid-base reaction competing with the electrochemical process, sometimes even leading to the loss of the reversible character. Interestingly, in PC/LiClO4 1 M electrolyte one can observe that PD•+ remains stable on the voltammetry time-scale and can undergo a second electrochemical oxidation at higher potentials to produce PD2+. Similarly, this dicationic form is also stable enough towards chemical side-reactions and is reduced back on the reverse scan. For comparison, two other common lithiated salts used in Li batteries (LiPF6 and LiOTf) were also evaluated as supporting electrolytes, again using PC as the solvent. The resulting CV curves of PD show quite similar electrochemical steps (Figure 2a), with fully reversible peaks obtained at anodic potentials of 3.6 and 4.1 V vs. Li+/Li0, respectively. This result attests that neither the thermodynamics nor the kinetics of the stepwise one-electron oxidation reactions are impacted by the counter anions of the supporting electrolyte, although these anions exhibit very different donor numbers (DN) in PC and van der Waals volumes [START_REF] Ue | Mobility and ionic association of lithium and quaternary ammonium salts in propylene carbonate and γ-butyrolactone[END_REF][START_REF] Ue | Ionic radius of (CF 3 SO 2 ) 3 C and applicability of stokes law to its propylene carbonate solution[END_REF][START_REF] Linert | Anions of low Lewis basicity for ionic solid state electrolytes[END_REF]. These results are in agreement with a dominance of the solvation process in high-polarity aprotic solvents such as PC (ε r ~ 66), in which low ion-pairing is expected [START_REF] Barrière | Use of weakly coordinating anions to develop an integrated approach to the tuning of ∆E 1/2 values by medium effects[END_REF].
However, the use of the LiPF6 supporting salt shows some slight differences, with the appearance of a new contribution between the two regular steps and a peak-to-peak separation of the second anodic wave shifted from reversibility (∆E 1/2 = 90 mV), suggesting a quasi-reversible process. When excluding LiPF6, the solvent change does not affect the reversibility of the two electrochemical steps involved with PD; Figure 2b shows for instance a comparison of CVs recorded in PC, DMC and EC-DMC, respectively, using a concentration of 1 mol.L-1. DPPD and TMPD were also selected as representative secondary and tertiary p-phenylenediamine derivatives of interest for this comparative study. In fact, the possible π-delocalization by mesomeric effect (+M) occurring with the DPPD structure should induce both a positive potential shift and a higher acidic character of the secondary amine functional group. On the contrary, methyl substituent groups, which are electron-donating by inductive effect (+I), would decrease the oxidative strength of the p-phenylenediamine backbone (lower formal potential), whereas no acidic protons exist. Figure 3 summarizes the most striking features observed in both PC/LiClO4 1 M and EC-DMC/LiPF6 1 M electrolytes. As expected with DPPD, the reversible stepwise one-electron oxidation steps occur at 140 and 50 mV higher than the corresponding events observed with PD, while TMPD shows the lowest redox potentials of the series. It is worth noting that the presence of substituent groups on the p-phenylenediamine backbone does not impact the reversibility of the processes, which further illustrates the stability of both the neutral and the electrogenerated species. Table 1 shows the diffusion coefficient values experimentally determined from the voltammetric curves recorded at different scan rates (50 mV.s-1 to 10 V.s-1). These values are comparable to those reported in the literature [START_REF] Kim | A comparative study on the solubility and stability of pphenylenediamine-based organic redox couples for nonaqueous flow batteries[END_REF][START_REF] Ue | Mobility and ionic association of lithium and quaternary ammonium salts in propylene carbonate and γ-butyrolactone[END_REF]. The peculiar electrochemical feature previously observed with PD in the presence of the PF6- anion (Figure 2a) is again noticed in the case of DPPD, this time with the appearance of an obvious reversible pre-peak (III) prior to the second main electrochemical step (Figure 3). One possible explanation could be related to the peculiar chemistry of LiPF6 in high-polarity aprotic solvents (denoted :S). Indeed, it has long been known in the field of Li-ion batteries [START_REF] Linert | Anions of low Lewis basicity for ionic solid state electrolytes[END_REF][START_REF] Sloop | Chemical reactivity of PF 5 and LiPF 6 in ethylene carbonate/dimethyl carbonate solutions[END_REF][START_REF] Tasaki | Decomposition of LiPF 6 and stability of PF 5 in Li-ion battery electrolytes[END_REF][30][31] that undissociated LiPF6 does exist at relatively high electrolyte concentrations (in the range of 1 mol.L-1) in equilibrium with F- and the strong Lewis acid, PF5:

[Li+PF6-] (ion pair) + :S ⇌ [S:PF5] (sol) + LiF   (1)

The dilemma is that in high-polarity aprotic solvents the dissociation of [Li+PF6-] ion pairs is facilitated, but so is the stabilization of PF5. In addition, it has been shown that the presence of H+ ions in the medium, which also form ion pairs with PF6-, catalyzes its decomposition according to the following equilibria, due to the strong H…F interactions [30]:

H+ + PF6- (sol) ⇌ [H+PF6-] (ion pair) ⇌ H…F-PF5 ⇌ HF + PF5   (2)

In the case of PD and DPPD, the resulting radical cations electrogenerated at peak I exhibit more polarized N-H bonds in comparison with the pristine state. In the presence of PF6- (and potentially F- in the vicinity of the electrode), the acidic proton can be neutralized according to Eq.
2, producing the corresponding radical, which is a more readily oxidizable species. This hypothesis is further supported by the fact that no pre-peak is observed with TMPD, for which no labile protons exist. However, supplementary experiments were also conducted by adding a base to LiPF6-free electrolyte media in order to verify the deprotonation assumption. In practice, triethylamine (Et3N) was used, as a common base in organic chemistry. Figure 4 summarizes the as-obtained results, selecting both DPPD and TMPD as the two representative cases with and without labile protons. As expected, in the presence of triethylamine (0.5 mM), the pre-peak (III) appeared with DPPD while the electrochemical behavior of TMPD was not affected. Note that a pre-peak (IV) was also observed prior to the first oxidation step (I), which can be attributed to the deprotonation reaction of DPPD itself at this concentration of base. The proposed overall mechanism is finally depicted in Figure 5.
Table 1. Diffusion coefficients for the oxidation of phenylenediamines in the different electrolytes, calculated with the Randles-Sevcik equation from the slope of the experimental curves i_p = f(υ^1/2).
Interestingly, this particular electrochemical investigation, focused on both substituted and unsubstituted p-phenylenediamines, supports well the few other reports that pointed out the decomposition issues of LiPF6 in aprotic media when labile protons are present. This study aimed at emphasizing the potential of p-phenylenediamines, which can offer high potential and multi-electronic behavior as p-type materials for battery applications. A specific cyclic voltammetry study was then conducted to evaluate the electrochemical behavior of three selected p-phenylenediamine derivatives (PD, DPPD and TMPD) dissolved in several battery grade (carbonate) electrolyte media. Among the various electrolytes tested, a chemical instability of the electrogenerated radical cation appeared in the presence of LiPF6 when labile protons exist on the nitrogen atoms, due to the propensity of PF6- to decompose in high-polarity solvents such as PC- or EC-based battery electrolytes, this phenomenon being catalyzed by labile protons. This electrochemical study also provides the Li battery community with a supplementary proof concerning the high reactivity of this most popular supporting salt towards any labile proton potentially present in a battery electrolyte.
Figure 1. (a) Structural formula of the studied p-phenylenediamines denoted PD, DPPD and TMPD.
Figure 3. Comparison of typical CV curves recorded on a Pt disk microelectrode at a scan rate of 200 mV.s-1, using a concentration of 1 mM in PC/LiClO4 1 M or EC-DMC/LiPF6 1 M.
Figure 4. Typical CV curves recorded on a Pt disk microelectrode at a scan rate of 200 mV.s-1.
Note that n-type structures involve upon oxidation an ionic compensation with cation release, whereas p-type structures imply an anion. Acknowledgments This work was partially funded by a public grant overseen by the French National Research Agency as part of the program "Investissements d'Avenir" [grant number ANR-13-PRGE-0012], also labeled by the Pôle de Compétitivité S2E2.
21,010
[ "942192" ]
[ "896", "441290", "216126", "896", "194938" ]
01197401
en
[ "info" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01197401/file/371182_1_En_34_Chapter.pdf
Thomas Constant email: [email protected] Axel Buendia email: [email protected] Catherine Rolland email: [email protected] Stéphane Natkin email: [email protected] A switching-role mechanic for reflective decision-making game Keywords: serious game, game design, decision-making, overconfidence This paper introduces issues about a methodology for the design of serious games that help players/learners understand their decision-making process. First, we discuss the development of a video game system based on a switching-role mechanic where the player becomes the game leader of the experience. Then, we introduce game mechanics designed to induce a specific behavior, overconfidence, that helps to understand the players' decision-making processes. Finally, we describe tools for measuring the players' self-reflection regarding their judgment process. Introduction Serious games for decision-making play an important role in management training [START_REF] Barth | Former les futurs managers à des compétences qui n'existent pas: les jeux de simulation de gestion comme vecteur d'apprentissage[END_REF]. But their use is too often limited to the training of a specific behavior, or to learning good habits. Video games offer the possibility of teaching a more reflexive experience [START_REF] Constant | Enjeux et problématiques de conception d'un jeu sérieux pour la prise de décision[END_REF]. They can be designed as decision-driven systems [START_REF] Schell | The Art of Game Design A Book of Lenses[END_REF], tools created to help learners reflect on how they play [START_REF] Gee | Surmise the Possibilities: Portal to a Game-Based Theory of Learning for the 21st Century[END_REF], how they interact with the system [START_REF] Papert | Mindstorms: Children, Computers, And Powerful Ideas[END_REF], and thus how they make a decision [START_REF] Shaffer | Epistemic Games[END_REF]. This paper presents issues about a game design methodology for serious games whose goal is to help learners gain a better understanding of their decision-making process, and to encourage players' reflexivity towards their own decision-making. The design is based on an asymmetrical gameplay: after the player has performed a judgment task and taken a decision, s/he can become the "game leader", able to influence the other player. By switching roles, s/he may gain a better understanding of his/her own decision process. Our proposal to validate the mechanic's efficiency is to build a video game designed to develop and maintain an excessively confident behavior in the players' judgment, in order to promote the emergence of a reflexive stance of the player towards their decision processes. The first section of this paper introduces our model and its working conditions. The second section explains game mechanics useful for inducing overconfident behavior. These mechanics are, in effect, a translation of cognitive science principles regarding overconfidence into game variables. The third section proposes measurement tools for evaluating the game's efficiency. 2 Main issue: enlighten the player's decision-making 2.1 Switching-role mechanic and operating conditions Our main hypothesis is that a switching-role mechanic can help players develop a better understanding of their decision-making processes. However, we make the assumption that switching roles is not enough: the player can be good at playing but may not necessarily understand how.
To help players to be in a reflexive position about their abilities to make a decision, we introduce three conditions to support the switching-role mechanic: -A main condition: when switching-role, the player must become the game leader of the game. In this role, s/he can use variables to impact the game experience. The game leader is the one who plays with the mechanics in order to alter the other player's judgment. S/he can achieve an optimal point of view of how the game works, and how it can alter the player's behavior. -A pre-condition: before becoming the game leader, it is necessary that the player has been in the position of taking a decision for which s/he is confident about. The confidence must be assumed even if the decision was made in an uncertain situation, and may be biased by the context of the game. Players' judgment about their decision must be unequivocal if we want to help them to understand how it can be affected. -A post-condition: after playing the game leader, it is necessary that the player is able to play his/her first role again, in order to measure the impact of the switching-role mechanic on his/her behavior. For a serious purpose, we need to help the player to achieve this state of selfreflection. His/her way to make a decision has to be easier to understand and, as a consequence, the decision mechanisms have to be underlined by the system. Our proposal is to use cognitive fallacies in order to highlight judgment processes and explain why the player decision is biased. Heuristic judgment and decision making processes Heuristics and biases research allows to understand more precisely human judgment under uncertainty. Confronted with a complex question, decision-makers sometimes unwittingly substitute the question with an easier one. This process, called "attribute substitution", is an example of heuristic operating [START_REF] Kahneman | A model of heuristic judgment[END_REF]. A heuristic represents a shortcut in the judgment process as compared with a rational approach to decision-making. Heuristics are "rules of thumbs" -simpler and faster ways to solve a problem, based on knowledge, former experiences, skills, and cognitive abilities (similar to memory or computational ability) [START_REF] Kahneman | Judgment under Uncertainty: Heuristics and Biases[END_REF][START_REF] Gigerenzer | Heuristic decision making[END_REF]. If heuristic strategies are efficient most of the time, they can, however, occasionally lead to failure comparatively to a rational resolution of the full problem. These errors are called biases: markers of the use of a judgment heuristic. Identifying these markers allows researchers to better understand decision-making processes and reveal heuristic at work. Based on this approach, our methodology entails focusing on a single behavior in order to underline the player's decision-making processchosen specifically because it frequently manifests itself in the comportment of game players: overconfidence. Serious game concept and context of use Before introducing specific game mechanics, we present the key elements of a gameplay chosen to illustrate the use of our methodology. The game is played by two players on two different computers. Players cannot see each other and cannot communicate directly, but they are aware of each other presence and role in the game. They play a narrative adventure game which apparent goal is to solve a sequence of criminal cases. Each player has a specific role. 
One of the players adopts the role of an investigator, gathering information to build a hypothesis for a given problem. S/he is confronted with various forms of influence, which are going to have an impact on his/her judgment. The other player personify the game leader, played by the other player, who is going to control the investigator access to information. S/he has access to multiple game variables useful to induce overconfidence in the other player's judgment (see below). After playing a sufficient number of levels in the same role (to be sure that the evaluation of the player's behavior is correct), the players exchange their roles: the game leader becomes the investigator, and reciprocally. By experimenting with these two gameplays, the player puts its own actions into perspective in order to understand how s/he made a decision. 3 Pre-condition: guiding the player's judgment Variables to orient the player's confidence The overconfidence effect has been studied in economic and financial fields as a critical behavior of decision-makers [START_REF] Bessière | Excès de confiance des dirigeants et décisions financières: une synthèse[END_REF]. It impacts our judgment of both our own knowledge and skills and those of others [START_REF] Johnson | The evolution of overconfidence[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Overconfidence can be explained as a consequence of a person's use of heuristics such as availability and anchoring (defined in Section 3) [START_REF] Griffin | The weighing of evidence and the determinants of confidence[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Overconfidence is also commonly observed in player behaviors. In a card game, for example, beginners as well as experts can be overconfident with regard both to performance and play outcomes [START_REF] Keren | Facing uncertainty in the game of bridge: A calibration study[END_REF]. If we want to induce this behavior, the player's judgment has to be driven in a given direction. As a consequence, game mechanics must be related to expressions or sources of overconfidence in human behavior [START_REF] Moore | The Trouble with Overconfidence[END_REF]. Then, based on game design methods for directing the behavior of the player [START_REF] Schell | The Art of Game Design A Book of Lenses[END_REF][START_REF] Adams | Fundamentals of Game Design[END_REF], we derived game mechanics that can be used to produce the overconfidence effect. Figure 1 presents some mechanics examples according to three major expressions of the overconfidence effect. Core gameplay At the beginning of the level, the game leader introduces a case to the other player, the investigator. The investigator's mission is to find the culprit: s/he is driven through the level to a sequence of places where his/her is able to get new clues about the case, mainly by questioning non-playable characters. But the investigator is allowed to perform a limited number of actions during a level, losing one each time s/he gets a new clue. Thus, the investigator is pushed to solve the case as fast as possible. The game leader is presented as the assistant of the investigator, but his/her real role is ambiguous: maybe s/he is trying to help the investigator, or maybe s/he has to push the investigator on the wrong track. This doubt is required to avoid biasing the investigator's judgment about the nature of the influence which target him/her. 
The investigator should not easily guess what the game leader is really doing, and should stay in a context of judgment under uncertainty. If this is not the case, the measure of the player's confidence may be distorted. After several levels (several cases), the investigator becomes the new game leader, and vice versa. To win, the investigator must find the probable solution of a case, depending on the clues s/he might have seen, associated with a realistic measure of his/her confidence. Conversely, the game leader wins if s/he has induced overconfidence in the investigator's judgment and if the latter did not discover the game leader's role.
Fig. 1. Variables and game mechanics to orient the player's behavior:
Difficulty. Definition: a decision-maker can be overconfident if s/he thinks that the task is too easy or too difficult [START_REF] Griffin | The weighing of evidence and the determinants of confidence[END_REF][START_REF] Lichtenstein | Do those who know more also know more about how much they know?[END_REF]. Mechanic example 1: setting up sensitive difficulty by restricting the player's exploration in time and space. Mechanic example 2: setting up logical difficulty using puzzle game design, the intrinsic formal complexity of which can be controlled via given patterns and parameters.
Anchoring. Definition: estimations are based on an anchor, a specific value decision-makers will easily memorize; the adjustments are then narrowed down too far towards this value to give an appropriate estimation, and anchor bias can induce overconfidence when evaluating an item or a hypothesis [START_REF] Kahneman | Intuitive prediction: Biases and corrective procedures[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Mechanic example 1: the game designer chooses a specific piece of information to use as an anchor; for it to be clear to the player that s/he has to use it, the information must be important to the case. Mechanic example 2: in order to compare its impact on player judgment, the game leader sets the anchor at different points and times in the game.
Confirmation. Definition: confirmation bias reveals the fact that decision-makers often seek evidence that confirms their hypothesis, denying other evidence that may refute it [START_REF] Koriat | Reasons for confidence[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Mechanic example 1: the game designer classifies each piece of evidence according to how it supports the investigation's solution and each of the red herrings. Mechanic example 2: during the game, when giving evidence to the player, the game leader must give priority to evidence that favors a specific red herring.
4 Post-condition: measuring the player's behavior Evaluation of the player's confidence Two kinds of evaluations are used to assess the effectiveness of a serious game based on our switching-role model. The first focuses on the player's judgment through the evaluation of his/her confidence. Measurements of the investigator's overconfidence are based on credence calculation, which is used in overconfidence measurement studies [START_REF] Lichtenstein | Do those who know more also know more about how much they know?[END_REF]. This score assesses the players' ability to evaluate the quality of their decision rather than assessing the value of the decision itself. Variations of this score from one game session to another can show the evolution of the players' confidence regarding their decision-making process. After playing, players must fill out a questionnaire survey in order to give a more precise evaluation of their progression and confidence [START_REF] Stankov | Confidence and cognitive test performance[END_REF]. 
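To make the credence-based measure above concrete, the following minimal sketch computes an over/underconfidence score as the gap between a player's mean stated confidence and his/her actual hit rate; the data structure and the exact formula are illustrative assumptions rather than the scoring used in the cited calibration studies.

```python
# Minimal sketch of an over/underconfidence ("calibration") score.
# Assumption: each answer is stored as (confidence in [0, 1], was_correct bool);
# the exact scoring used in the cited studies may differ.

def overconfidence_score(answers):
    """Mean stated confidence minus hit rate.
    > 0 suggests overconfidence, < 0 underconfidence, ~0 good calibration."""
    if not answers:
        raise ValueError("need at least one answer")
    mean_confidence = sum(conf for conf, _ in answers) / len(answers)
    hit_rate = sum(1 for _, correct in answers if correct) / len(answers)
    return mean_confidence - hit_rate

# Example: a player who is almost sure of every answer but is right only half the time.
session = [(0.9, True), (0.8, False), (0.95, True), (0.85, False)]
print(round(overconfidence_score(session), 2))  # 0.38 -> markedly overconfident
```

Tracking the variation of such a score across sessions, before and after the role switch, is one way to operationalize the evolution of confidence described above.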
Evaluation of the player's reflexivity The second kind of evaluation highlights the players' ability to assess their self-efficacy in terms of problem solving. Judgment calibration may engage the decision-maker in a reflexive posture on his/her ability to judge the quality of his/her decision, which the overconfidence effect may bias [START_REF] Stone | Overconfidence in Initial Self-Efficacy Judgments: Effects on Decision Processes and Performance[END_REF]. But it is not enough for a long-lasting understanding of the behavior [START_REF] Stankov | Realism of confidence judgments[END_REF]. Therefore, in order to extend its effects, we design a re-playable game which can be experienced repeatedly within one or several training sessions. The switching-role mechanic allows the player to engage in a self-monitoring activity, by observing the behavior of other players and by experimenting on them. After several levels from this perspective, the player discerns how the investigator develops overconfidence, or tries to reduce it. Then the player resumes his/her first role and starts by giving new self-evaluations. This time, the player should give a more realistic assessment of his/her ability to solve the case. The variation of the players' calibration score can give us a precise measure of the evolution of their behavior and, by extension, of their understanding of how they make a decision in the game. Figure 2 presents the range of possible player behaviors that we can expect.
Fig. 2. Player behavior matrix:
Improbable solution, not confident: the player is aware of the weakness of his/her reasoning. Well calibrated; score multiplied.
Improbable solution, very confident: the player was too quick in his/her reasoning (and has failed to see its limits); s/he made a mistake in his/her reasoning. Uncalibrated; the player loses his/her points.
Probable solution, not confident: the player was too quick in his/her reasoning (and realizes this); s/he is correct, but has no confidence in his/her reasoning. Uncalibrated; the player loses his/her points.
Probable solution, very confident: the player is correct as well as confident in his/her reasoning. Well calibrated; score multiplied.
Conclusion and future works This paper proposed a game design methodology for building serious games, and a way of using them, to let players gain a better appreciation of how they make a decision. This methodology is based on the heuristic approach to the analysis of human judgment as well as game design research that relates to decision-making and reflexivity. We then proposed rules and game mechanics designed to induce and control the overconfidence effect and to encourage the players' reflexivity regarding their decision-making. Finally, we introduced the idea of tools for measuring both the players' reflexivity and the effectiveness of the game itself. This methodology is currently being used to develop a prototype of the serious game, which will be evaluated in training courses at the Management & Society School of the National Conservatory of Arts and Crafts. The prototype will be able to verify the proper functioning of the switching-role mechanic, and its impact and durability on the player's behavior. For more information about the School and the Conservatory: http://the.cnam.eu
17,420
[ "3107", "182179", "965012", "964508" ]
[ "16574", "573796", "300351", "16574", "573796", "300351", "573796", "300351", "16574", "573796", "300351" ]
01758572
en
[ "sdv" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01758572/file/article.pdf
B G M Van Dijk E Potier Maarten Van Dijk Marloes Langelaan Nicole Papen-Botterhuis Keita Ito Reduced tonicity stimulates an inflammatory response in nucleus pulposus tissue that can be limited by a COX-2-specific inhibitor Keywords: explant culture, disc herniation, inflammation, regenerative therapy, sustained release In intervertebral disc herniation with nucleus pulposus (NP) extrusion, the elicited inflammatory response is considered a key pain mechanism. However, inflammatory cytokines are reported in extruded herniated tissue, even before monocyte infiltration, suggesting that the tissue itself initiates the inflammation. Since herniated tissue swells, we investigated whether this simple mechanobiological stimulus alone could provoke an inflammatory response that could cause pain. Furthermore, we investigated whether sustained-release cyclooxygenase-2 (COX2) inhibitor would be beneficial in such conditions. Healthy bovine NP explants were allowed to swell freely or confined. The swelling explants were treated with Celecoxib, applied either as a bolus or in sustained-release. Swelling explants produced elevated levels of interleukin-6 (IL-6) and prostaglandin E2 (PGE2) for 28 days, while confined explants did not. Both a high concentration bolus and 10 times lower concentration in sustained release completely inhibited PGE2 production, but did not affect IL-6 production. Swelling of NP tissue, without the inflammatory system response, can trigger cytokine production and Celecoxib, even in bolus form, may be useful for pain control in extruded disc herniation. Introduction Low back-related leg pain, or radicular pain, is a common variation of low back pain. Its main cause is extruded lumbar disc herniation [START_REF] Gibson | Surgical interventions for lumbar disc prolapse: updated Cochrane Review[END_REF], during which the central core of the intervertebral disc, the nucleus pulposus (NP), pushes through the outer ring, the annulus fibrosus (AF). This extruded NP tissue can cause inflammation of nerve roots, which has been recognized as a key factor in painful extruded herniated discs [START_REF] Takada | Intervertebral disc and macrophage interaction induces mechanical hyperalgesia and cytokine production in a herniated disc model in rats[END_REF]. Presence of inflammatory factors, interleukin-1 (3,4) and 6 (5,6) (IL-1 and IL-6), tumor necrosis factor alpha [START_REF] Takahashi | Inflammatory cytokines in the herniated disc of the lumbar spine[END_REF][START_REF] Yoshida | Intervertebral disc cells produce tumor necrosis factor alpha, interleukin-1beta, and monocyte chemoattractant protein-1 immediately after herniation: an experimental study using a new hernia model[END_REF][START_REF] Andrade | Tumor necrosis factor-alpha levels correlate with postoperative pain severity in lumbar disc hernia patients: opposite clinical effects between tumor necrosis factor receptor 1 and 2[END_REF] (TNF), and prostaglandin-E 2 (5,6) (PGE 2 ), is reported in herniated tissue. This is mainly attributed to the body's immune response: monocyte infiltration [START_REF] Doita | Immunohistologic study of the ruptured intervertebral disc of the lumbar spine[END_REF], macrophage maturation, and resorption of extruded tissue [START_REF] Komori | The natural history of herniated nucleus pulposus with radiculopathy[END_REF]. 
But, production of several cytokines have been measured, albeit in lower quantities, in protruded NP tissue, which is not exposed to the body's immune response [START_REF] Kang | Herniated lumbar intervertebral discs spontaneously produce matrix metalloproteinases, nitric oxide, interleukin-6, and prostaglandin E2[END_REF]. Furthermore, Yoshida et al. [START_REF] Yoshida | Intervertebral disc cells produce tumor necrosis factor alpha, interleukin-1beta, and monocyte chemoattractant protein-1 immediately after herniation: an experimental study using a new hernia model[END_REF] have observed IL-1 and TNF positive cells, before infiltration of monocytes. Thus, NP tissue itself may initiate the inflammatory response through an unknown mechanism. We hypothesize that when NP tissue is extruded, it is exposed to a lower osmolarity and this osmotic shock in turn stimulates the native NP cells to produce inflammatory factors. Radicular pain caused by herniation is generally treated conservatively, for example, oral analgesics, and 50% of patients recover spontaneously [START_REF] Frymoyer | Back pain and sciatica[END_REF]. However, in many patients, analgesia is insufficient [START_REF] Croft | Outcome of low back pain in general practice: a prospective study[END_REF], and they are treated with epidural steroid injections or surgery. Long-term results of surgery are not different from conservative treatment for radicular pain [START_REF] Jacobs | Surgery versus conservative management of sciatica due to a lumbar herniated disc: a systematic review[END_REF], and although epidural injections can be effective in 80% of cases, patients often need four to five injections within a year [START_REF] Manchikanti | Evaluation of the effectiveness of lumbar interlaminar epidural injections in managing chronic pain of lumbar disc herniation or radiculitis: a randomized, double-blind, controlled trial[END_REF]. Moreover, steroids might slow down the natural resorption of extruded NP tissue as shown in a rabbit tissue model [START_REF] Minamide | Effects of steroid and lipopolysaccharide on spontaneous resorption of herniated intervertebral discs. An experimental study in the rabbit[END_REF]. Reducing pain, while not inhibiting resorption, would be a promising approach to treat herniation. One of the inflammatory factors in herniation is PGE 2 which can sensitize nerves and induce pain [START_REF] Samad | Interleukin-1betamediated induction of Cox-2 in the CNS contributes to inflammatory pain hypersensitivity[END_REF]. Two enzymes are involved in PGE 2 production, cyclooxygenase 1 and 2 (COX1 and 2). Contrary to COX1, COX2 is inducible and, therefore, COX2 inhibitors are used in pain management, and have been shown to reduce pain in rat models of disc herniation [START_REF] Kawakami | Epidural injection of cyclooxygenase-2 inhibitor attenuates pain-related behavior following application of nucleus pulposus to the nerve root in the rat[END_REF]. Celecoxib (Cxb) is a COX2-specific inhibitor and a candidate for treating herniated discs whose cells can produce PGE 2 , when stimulated by macrophages [START_REF] Takada | Intervertebral disc and macrophage interaction induces mechanical hyperalgesia and cytokine production in a herniated disc model in rats[END_REF]. However, because the half-life of Cxb is only 7.8 h (17), a biodegradable Cxb sustained release option is likely to be more successful than a single Cxb injection. 
We have previously cultured bovine NP tissue explants in an artificial annulus, to prevent swelling and provide a near in vivo environment for the NP cells up to 6 weeks [START_REF] Van Dijk | Long-term culture of bovine nucleus pulposus explants in a native environment[END_REF]. In this study, we allow the tissue to swell, as in extruded herniation, and investigate if this simple mechanobiological stimulus can stimulate the tissue to produce cytokines, in the absence of the inflammatory system response. Two injectable sustained release biomaterials loaded with Cxb were then tested in this model and compared to a single Cxb injection. Materials and Methods Tissue Tissue NP explants (150-350 mg) were isolated with an 8 mm biopsy punch (Kruuse, Sherburn, UK) from the center of fresh caudal discs of 24-month-old cows, obtained from a local abattoir in accordance with local regulations. After weighing, free swelling samples were immediately placed in 6 ml of culture medium (DMEM; Gibco Invitrogen, Carlsbad, CA), with 4 mM Lglutamine (Lonza, Basel, Switzerland), 1% penicillin/streptomycin (Lonza), 50 mg/l ascorbic acid (Sigma), and 5% fetal bovine serum (Gibco)). Control samples (non-swelling) were cultured in an artificial annulus system (18) in 6 ml of medium (Fig. 1). In this system, a jacket knitted from UHMWPE fibers (Dyneema, DSM, Heerlen, the Netherlands) lined with a 100 kDa molecular weight cut off (MWCO) semi-permeable membrane (Spectrum Laboratories, Breda, the Netherlands) prevents swelling. Injectable gel A hybrid thermo-reversible biodegradable hydrogel (Fig. 2) was used as one controlled release platform (TNO, Eindhoven, the Netherlands) [START_REF] Craenmehr | Liquid composition comprising polymer chains and particles of an inorganic material in a liquid[END_REF]. The hydrogel consists of a network of biodegradable nanoparticles linked to Lower Critical Solution Temperature (LCST) polymers. At room temperature, they are dispersed in water, injectable through a 32-gauge needle. However, at 37°C, they gel by crosslinks arising from hydrophobic interactions of the LCST polymers. The hydrogel was mixed with Cxb (LC Laboratories, Woburn, MA) in three different concentrations [START_REF] Jacobs | Surgery versus conservative management of sciatica due to a lumbar herniated disc: a systematic review[END_REF]120, and 1200 mg/ ml gel). Previous in vitro experiments, showed that 0.5 ml gel, loaded with these concentrations, resulted in average releases of 0.1, 1, and 10 mM, respectively, when added to 6 ml of medium. Microspheres Biodegradable microspheres (DSM, Fig. 2) were prepared with an average particle diameter of 40 mm [START_REF] Yang | Applicability of a newly developed bioassay for determining bioactivity of anti-inflammatory compounds in release studies-celecoxib and triamcinolone acetonide released from novel PLGA based microspheres[END_REF]. During production, microspheres were loaded with 8.5%w/w Cxb. Previous in vitro experiments showed that 2.7mg of microspheres (loaded with 22.9 mg Cxb) in 6 ml of medium resulted in an average release of 10 mM. Therefore 2.7, 0.27, and 0.027 mg microspheres were added to a well to release 10, 1, and 0.1 mM Cxb. Treatment conditions Six NP tissue samples from independent donors were cultured in every experimental group (n = 6/group). Control artificial annulus samples remained untreated. 
Samples in free swelling condition were (1) not treated; (2) treated with a bolus (10 mM Cxb in medium for 3 days); (3) treated with a sustained release control (1 mM Cxb in medium for 28 days); and (4-11) treated with the microspheres or gel, loaded for an aimed release of: 0, 0.1, 1, or 10 mM Cxb (Table 1). Explants were cultured in 12-well deep-well insert plates (Greiner) with cell culture inserts (0.4mm pore size, Greiner) to hold the sustained release biomaterials. Samples were cultured for 28 days at 37°C, 5% O 2 , and 5% CO 2 and medium was changed twice a week. During medium changes, temperature was kept at 37°C to prevent hydrogel liquefaction. Biochemical content Samples were weighed at the beginning and end of culture, and the percent increase in sample wet weight (ww) was calculated. Subsequently, they were stored frozen at -30°C, lyophilized overnight (Freezone 2.5; Labconco) and the dry weight (dw) was measured. The water content was calculated from ww-dw/ww. The samples were then processed as described earlier and used to determine their content of sulfated glycosaminoglycans (sGAG), hydroxyproline (HYP), and DNA, as well as the fixed charge density (FCD) [START_REF] Van Dijk | Long-term culture of bovine nucleus pulposus explants in a native environment[END_REF]. The amounts of sGAG, HYP, and DNA were expressed as percentage of the sample dw. Cytokine release into the media At every medium change, samples were collected and stored at -80°C. Release of cytokines into the medium was measured with ELISAs specific for bovine IL-1 and IL-6 (both Thermo Fisher Scientific, Waltham, MA), specific for bovine TNF (R&D Biosystems, Minneapolis, MN), and PGE 2 (Enzo Life Sciences, Farmingdale, NY) following manufacturer's instructions, and normalized to the original sample ww. All standard curves were dissolved in fresh culture medium, to account for any effects of the media on the analysis. Release was measured for all groups at days 3 and 28 as well as on days 14 and 21 for the best performing concentration of both loaded biomaterials and control groups. Cxb concentration At every medium change, samples were stored at -80°C, before Cxb concentration analysis by InGell Pharma (Groningen, the Netherlands). Samples were pre-treated as described before with slight modifications, that is concentration using liquid-liquid extraction with ethyl acetate, and mefenamic acid (Sigma) used as internal standard [START_REF] Zarghi | Simple and rapid highperformance liquid chromatographic method for determination of celecoxib in plasma using UV detection: application in pharmacokinetic studies[END_REF]. The samples were analyzed with UPLC (1290 Infinity, Agilent, Santa Clara, CA) where the detection-limit was 10 ng/ml Cxb. The medium concentration of Cxb was measured for bolus treatment at days 14 and 28 and the best performing concentration of both loaded biomaterials at days 3, 14, 21, and 28. Statistics Matlab (Mathworks, Inc., Natick, MA) was used for statistical analysis. For all biochemical data, one-way analysis of variance (ANOVA) was performed, followed by Dunnett's test for differences compared to artificial annulus. For the cytokine release data of the selected groups, Kruskal-Wallis analysis was performed at each time point, followed by Bonferroni corrected Mann-Whitney post hoc tests for differences compared to artificial annulus. To investigate the difference in Cxb release between the two biomaterials, two-way ANOVA was performed, followed by Bonferroni corrected post hoc t-tests. 
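As an illustration of the group-versus-control comparisons just described, here is a minimal sketch in Python with SciPy (the study itself used Matlab); group names and example values are illustrative, and scipy.stats.dunnett requires a recent SciPy release (1.11 or later).

```python
# Minimal sketch of the group-versus-control comparisons described above
# (the study itself used Matlab). Group names and values are illustrative;
# scipy.stats.dunnett requires SciPy >= 1.11.
import numpy as np
from scipy import stats

ALPHA = 0.05

def compare_to_control(groups, control="artificial_annulus", parametric=True):
    """Return {group: p-value vs. control}; p = 1.0 if the omnibus test fails."""
    control_vals = np.asarray(groups[control], dtype=float)
    others = {k: np.asarray(v, dtype=float) for k, v in groups.items() if k != control}
    if parametric:
        # One-way ANOVA, then Dunnett's test for differences compared to control.
        if stats.f_oneway(control_vals, *others.values()).pvalue >= ALPHA:
            return {name: 1.0 for name in others}
        res = stats.dunnett(*others.values(), control=control_vals)
        return dict(zip(others, res.pvalue))
    # Kruskal-Wallis, then Bonferroni-corrected Mann-Whitney post hoc tests.
    if stats.kruskal(control_vals, *others.values()).pvalue >= ALPHA:
        return {name: 1.0 for name in others}
    return {name: min(1.0, stats.mannwhitneyu(vals, control_vals).pvalue * len(others))
            for name, vals in others.items()}

# Example: cytokine release at one time point (illustrative values, n = 6 per group).
pge2 = {"artificial_annulus": [5, 8, 6, 7, 9, 6],
        "free_swelling": [40, 55, 120, 80, 30, 95]}
print(compare_to_control(pge2, parametric=False))
```

The non-parametric branch mirrors the per-time-point Kruskal-Wallis analysis applied to the cytokine release data, while the parametric branch mirrors the analysis of the biochemical content.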
Statistical significance in all cases was assumed for p<0.05. Results The ww of the artificial annulus group decreased 10% during culture, while free swelling groups increased between 100 and 150% ww (Fig. 3A). In all free swelling groups, the water content increased significantly (Fig. 3B), and the sGAG content (Fig. 3C) and FCD (Fig. 3D) decreased significantly compared to the artificial annulus group. In addition, the DNA content increased five-fold compared to the artificial annulus group (Fig. 3E). There were no differences in hydroxyproline content (Fig. 3F). With both biomaterials, 1 mM aimed release was the lowest dosage that completely inhibited PGE 2 at day 28 (Fig. S1). These were analyzed at the intermediate time points (days 14 and 21), together with the artificial annulus, free swelling, sustained control, and bolus groups (Fig. 4). In artificial annulus samples, high levels of PGE 2 were produced during the first 3 days, but were ameliorated from day 14 onwards (Fig. 4A). Free swelling samples produced PGE 2 throughout culture, with maximal levels from days 14 to 28. Due to the large variance in response, these levels were only significantly different from artificial annulus at day 28. Bolus, sustained control and both biomaterial samples were significantly different from the artificial annulus at day 3, with a complete inhibition of PGE 2 production. From day 14 onwards, these groups were not different from the artificial annulus group, indicating continuous inhibition of PGE 2 production. In artificial annulus samples, very low or undetectable levels of IL-6 were produced throughout culture (Fig. 4B). In free swelling explants, significantly higher levels of IL-6 were produced, from day 14 onwards, which were maximal at days 14 and 21. In all treated groups, high levels of IL-6 were produced throughout the culture, except for gel 1 mM where IL-6 production was partially reduced at days 14 and 21 and not significantly different from artificial annulus. Very low or undetectable levels of IL-1 and TNF were produced in all groups and at all time points, and there were no differences between groups (Fig. S2). Both biomaterials, loaded for an aimed release of 1 mM, released Cxb for 28 days (Fig. 5). The average release, determined from these four time points, was 0.84 and 0.68 mM for the microspheres and gels, respectively, that is, close to the aimed concentration of 1 mM Cxb release in the gel was stable around 0.7 mM for 28 days. A burst release was observed in microspheres during the first 3 days, which was significantly higher compared to the gel. Thereafter, the release decreased in the microspheres and was significantly lower at each subsequent time point. Cxb release at days 21 and 28 was significantly lower in the microspheres compared to the gel. Cxb was still measured in the culture medium of the bolus group at day 14 and even at day 28, and was significantly larger than 0. Discussion The artificial annulus prevented swelling, kept NP tissue stable [START_REF] Van Dijk | Long-term culture of bovine nucleus pulposus explants in a native environment[END_REF][START_REF] Van Dijk | Culturing bovine nucleus pulposus explants by balancing medium osmolarity[END_REF] and showed no long-term inflammatory response. Free swelling of NP tissue induced a sustained inflammatory response, of the inflammatory cytokine IL-6 and the nociceptive stimulus PGE 2 . 
This supports our hypothesis that even without interaction with the immune system, the extruded NP may alone initiate and/or contribute to the inflammation seen in herniation. With its high tonicity, extruded NP tissue will absorb water and swell [START_REF] Van Dijk | Culturing bovine nucleus pulposus explants by balancing medium osmolarity[END_REF], causing a hypo-osmotic stress to the cells. Such a stimulus has been shown to also stimulate amnion derived cells in a COX2 dependent manner resulting in production of PGE 2 [START_REF] Lundgren | Hypotonic stress increases cyclooxygenase-2 expression and prostaglandin release from amnion-derived WISH cells[END_REF]. NP cells regulate daily changes in osmolarity through NFAT [START_REF] Burke | Human nucleus pulposis can respond to a pro-inflammatory stimulus[END_REF][START_REF] Tsai | TonEBP/ OREBP is a regulator of nucleus pulposus cell function and survival in the intervertebral disc[END_REF] which is produced more in hypertonic conditions. Although NP cells in monolayer stimulated with hypo-osmotic stress demonstrated MEK/ERK and AKT involvement [START_REF] Mavrogonatou | Effect of varying osmotic conditions on the response of bovine nucleus pulposus cells to growth factors and the activation of the ERK and Akt pathways[END_REF], further research is needed, to clearly understand the involved biological pathway(s), which may be exploited for alternative treatment strategies. Nevertheless, this simple mechanobiological stimulus, similar to the initial phase of extruded herniation, did consistently elicit an inflammatory response although the variance was large between samples. This could have been due to the differences in osmotic pressure between samples, but this will be even more variable in patients with their different states of disc degeneration. PGE 2 is involved in painful herniated discs [START_REF] Samad | Interleukin-1betamediated induction of Cox-2 in the CNS contributes to inflammatory pain hypersensitivity[END_REF]; thus, the observation that NP tissue itself is able to produce PGE 2 shows the promise of COX2-specific treatment for radicular pain. The role of IL-6 in extruded herniation is not clear. It may be beneficial to alleviate the negative effects of herniation by contributing to resorption of herniated tissue via upregulation of matrix metalloproteases [START_REF] Studer | Human nucleus pulposus cells react to IL-6: independent actions and amplification of response to IL-1 and TNF-alpha[END_REF], inhibiting proteoglycan production [START_REF] Studer | Human nucleus pulposus cells react to IL-6: independent actions and amplification of response to IL-1 and TNF-alpha[END_REF], and stimulating macrophage maturation [START_REF] Mitani | Activity of interleukin 6 in the differentiation of monocytes to macrophages and dendritic cells[END_REF]. On the other hand, as it can induce hyperalgesia in rats [START_REF] Deleo | Interleukin-6mediated hyperalgesia/allodynia and increased spinal IL-6 expression in a rat mononeuropathy model[END_REF], it may be detrimental as well. However, IL-6 can induce PGE 2 (29) so, it is possible that hyperalgesic effects of IL-6 were not direct but due to increased PGE 2 production. 
This ambiguous role of IL-6 in herniated disc disease should be further investigated, but if it is mostly beneficial, treating painful herniated discs with Cxb could be a promising alternative to currently used steroids that have been reported to slow down the natural resorption of extruded NP tissue in a rabbit tissue model [START_REF] Minamide | Effects of steroid and lipopolysaccharide on spontaneous resorption of herniated intervertebral discs. An experimental study in the rabbit[END_REF]. Interestingly, swelling healthy bovine NP tissue did not directly produce effective doses of TNF and IL-1, although both have been detected in herniated human tissue [START_REF] Takahashi | Inflammatory cytokines in the herniated disc of the lumbar spine[END_REF][START_REF] Andrade | Tumor necrosis factor-alpha levels correlate with postoperative pain severity in lumbar disc hernia patients: opposite clinical effects between tumor necrosis factor receptor 1 and 2[END_REF] and rabbits (4). However, similar to our results, culture of such specimens did not lead to presence of either cytokine in the medium, even after lipopolysaccharide stimulation [START_REF] Burke | Human nucleus pulposis can respond to a pro-inflammatory stimulus[END_REF]. This could be because TNF detected in homogenates of human herniated NP tissue was only membrane bound and no soluble TNF was found [START_REF] Andrade | Tumor necrosis factor-alpha levels correlate with postoperative pain severity in lumbar disc hernia patients: opposite clinical effects between tumor necrosis factor receptor 1 and 2[END_REF]. Furthermore, expression of these cytokines is associated with increasing degree of disc degeneration [START_REF] Maitre | The role of interleukin-1 in the pathogenesis of human intervertebral disc degeneration[END_REF], but can also be produced by activated macrophages (32) neither of which were present in our system. Low levels of TNF, though, can have already strong effects in NP-like tissues [START_REF] Seguin | Tumor necrosis factor-alpha modulates matrix production and catabolism in nucleus pulposus tissue[END_REF] and TNF inhibitors have the potential to stop radicular pain in patients [START_REF] Korhonen | Efficacy of in¯iximab for disc herniation-induced sciatica: one-year follow-up[END_REF]. Thus, the roles of TNF and IL-1 in painful herniation remain pertinent. The DNA content in free swelling samples increased five-fold compared to artificial annulus. However, the PGE 2 and IL-6 production at day 28 increased 30-90-fold, respectively, thus, cannot be explained by the increased cell number alone. In a previous study [START_REF] Van Dijk | Culturing bovine nucleus pulposus explants by balancing medium osmolarity[END_REF], increase in DNA was attributed to cell cloning and formation of a fibrous layer at the sample periphery. In the present study, we did not investigate the contribution of this fibrous layer to the production of IL-6 and PGE 2 . As this fibrous layer most likely contains active dedifferentiated cells, its production of inflammatory factors might be considerable. However, the increased PGE 2 and IL-6 production in the first 3 days, during absence of the fibrous layer, indicate that NP cells within the tissue produce inflammatory cytokines as well. 
Furthermore, in vivo this layer will probably also be formed [START_REF] Specchia | Cytokines and growth factors in the protruded intervertebral disc of the lumbar spine[END_REF] and may initiate the formation of the granulation tissue seen in sequestered and extruded surgical samples [START_REF] Koike | Angiogenesis and inflammatory cell infiltration in lumbar disc herniation[END_REF]. Any contribution of this layer to the production of cytokines in this study will occur in vivo as well. Artificial annulus samples also increased PGE 2 production but only for the first 3 days. This is most likely a result of the shrinkage step needed before placing samples in the system. In this step, samples received a strong hyperosmotic stimulus (50% PEG in PBS (w/v) for 100 minutes), which has been shown to induce COX2 in renal cells [START_REF] Moeckel | COX2 activity promotes organic osmolyte accumulation and adaptation of renal medullary interstitial cells to hypertonic stress[END_REF]. Besides this temporal production of PGE 2 , no other cytokines were produced, showing that healthy bovine NP explants do not produce inflammatory cytokines, unless they are triggered. Cxb was able to completely inhibit elevated PGE 2 levels for 28 days, when delivered continuously at a concentration of 1 mM in the sustained control group. This result was expected, as this is in the order of the therapeutic plasma concentration of Cxb (17). Both biomaterials were able to release Cxb continuously for 28 days and, thus, inhibit PGE 2 . However, the release kinetics from the two biomaterials were significantly different, that is, microspheres showed a burst release, and a slowly declining release afterwards as observed earlier [START_REF] Yang | Applicability of a newly developed bioassay for determining bioactivity of anti-inflammatory compounds in release studies-celecoxib and triamcinolone acetonide released from novel PLGA based microspheres[END_REF], while the gels showed constant release for 28 days. To our surprise, the 10 mM bolus was also able to inhibit PGE 2 production of swelling NP explants for 28 days. This dosage is 10 times higher than the biomaterials and 3 times higher than maximum serum levels (17), but as this is a local deposit it will not lead to high serum concentrations in patients. We did not test a lower bolus; thus, we do not know if the bolus' success is because of the higher dose. What was most surprising is that we still measured Cxb in the medium at days 14 and 28, although all initial culture medium was removed at day 3. If Cxb was not degraded and distributed evenly throughout the fluid in tissue and medium, only approximately 70% of Cxb was removed at each medium change. Therefore, the bolus is likely to provide an effective dose longer than 3 days, possibly even until day 14, but not until day 28. Cxb only decreased 50% between days 14 and 28, where a 99% decrease was expected, so it is probable that Cxb is binding to the tissue. In blood serum samples, 97% of Cxb is bound to serum albumin (17), which is also a component of our culture medium. Furthermore, albumin is detected in osteoarthritic cartilage tissue, bound to keratan sulfate, one of the main sGAGs in the disc, through disulfide bonds [START_REF] Mannik | Immunoglobulin-G and Serumalbumin isolated from the articular-cartilage of patients with rheumatoidarthritis or osteoarthritis contain covalent heteropolymers with proteoglycans[END_REF]. 
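Before returning to the binding mechanism, the washout argument above can be made explicit with a small calculation: if each medium change removes a fixed fraction of the remaining drug, the residue falls geometrically. The sketch below assumes, as stated above, that roughly 70% is removed per change and that medium is changed twice a week, i.e., four changes between days 14 and 28.

```python
# Minimal sketch of the expected Cxb washout between medium changes.
# Assumptions taken from the text: ~70% of the drug removed per medium change,
# medium changed twice a week (4 changes between day 14 and day 28).
def remaining_fraction(n_changes, removed_per_change=0.70):
    return (1.0 - removed_per_change) ** n_changes

expected_left = remaining_fraction(4)   # ~0.008, i.e. a ~99% decrease expected
observed_left = 0.50                    # only a ~50% decrease was measured
print(f"expected residue: {expected_left:.1%}, observed residue: {observed_left:.0%}")
```

The mismatch between the expected (roughly 99%) and observed (roughly 50%) decrease is what points to Cxb being retained in the tissue, as discussed next.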
If this proposed mechanism of albumin binding to the tissue is correct, this will have clinical implications. In 80% of extruded herniation samples, blood vessels are observed (38), so Cxb injected adjacent to extruded NP tissue can diffuse in and bind to pre-bound albumin, lengthening the therapeutic effect. This might explain the relative success of epidural steroid injections, as the NP tissue can prolong the effect of a single drug injection. Nevertheless, more research needs to be done to determine the validity of this mechanism. It is also possible that Cxb is binding to other proteins in the NP. This interesting finding shows the value of using our tissue model for preclinical evaluation of therapies for disc herniation, as in cell culture this property of Cxb would be missed. However, a limitation of this model is the absence of the body's immune response. Repeating this study in a suitable animal model will truly show if a single injection of Cxb will be as successful as a sustained release biomaterial. Nevertheless, treatment with a COX2 inhibitor like Cxb has clinical potential, as not only the swelling NP tissues in this study, but also infiltrated samples, produce PGE 2 . Another interesting finding is that the gel alone was able to reduce IL-6 production to some extent, especially at day 14. This reduction was observed at day 14 not only with the 10 mM loaded gels but also with the empty gels (Fig. S3), indicating that the biomaterial itself affected IL-6 production. As we used Transwell systems (pore size 0.4 µm) in this experiment, there was no direct contact between tissue and biomaterials, but degradation products could have reached the tissue. When the gels degrade, magnesium ions (Mg2+) leach out, which can affect IL-6 production [START_REF] Nowacki | Highmagnesium concentration and cytokine production in human whole blood model[END_REF]. Furthermore, a relatively large amount of biomaterial was used in the gel groups (500 µl per sample in all concentrations). Nevertheless, any direct effect of the gel on IL-6 production is only partial and transient, and any benefits thereof remain to be investigated. Conclusion Exposing NP tissue explants to lower tonicity conditions, without the inflammatory system response, increased PGE 2 and IL-6 production. These cytokines are interesting candidates when treating acute herniation, as they are produced before infiltration of macrophages and are involved in pain and inflammation. Cxb could successfully stop PGE 2 , but not IL-6, production for 28 days when supplied by sustained release biomaterials. Interestingly, a bolus was able to achieve the same result, revealing that NP tissue itself can function as a carrier for sustained release of Cxb.
Figure S3. IL-6 release into the media. Release at day 14, in pg/ml, during 3 days, normalized to original sample ww. Values are means ± standard deviation, n = 6.
Figure 1. Tissue NP explant culture system. Image of nucleus pulposus (NP) explants in artificial annulus culture. Artificial annulus samples (right) were cultured in a deep-well Transwell plate with 6 ml of culture medium.
As the artificial annulus samples might be buoyant, a stainless steel cylinder was added to the culture insert (left) to keep samples submerged (drawn by Anthal Smits).
Figure 2. Schematic image of NP explants in free swelling culture. Explants were cultured in a deep-well insert plate (bottom). In some groups, celecoxib (Cxb) was added to the medium directly. Both the injectable gel (top left) and the microspheres (top right) were added using a culture insert. LDH, layered double hydroxide; pNIPAAM, poly(N-isopropylacrylamide); RT, room temperature (drawn by Anthal Smits).
Figure 3. Biochemical content of NP explants after 28 days. Weight change (A), water content (B), sulfated glycosaminoglycan (sGAG) content expressed per dry weight (dw) (C), fixed charge density in mEq/g (D), DNA content expressed per dw (E) and hydroxyproline content expressed per dw (F). Values are means ± standard deviation, n = 6. * Different from artificial annulus; p < 0.05.
Figure 4. Cytokine release into the medium over time. Release of prostaglandin E2 (PGE2, A) and interleukin-6 (IL-6, B), in pg/ml, during 3 days, normalized to original sample wet weight (ww). Values are means ± standard deviation, n = 6. * Different from artificial annulus at same time point, p < 0.05.
Figure 5. Cxb concentration measured in the medium over time. Concentration in mM at days 3, 14, 21, and 28 for both biomaterials aimed for 1 mM release, and at days 14 and 28 for bolus. Values are means + standard deviation, n = 6. * Different from gel at same time point, # different from previous time point of the same biomaterial, $ different from 0 mM, p < 0.05.
Figure S1. PGE 2 release into the media. Release at day 28, in pg/ml, during 3 days, normalized to original sample ww. Values are means ± standard deviation, n = 6.
Figure S2. Cytokine release into the medium over time. Release of interleukin-1β (IL-1β, a) and tumor necrosis factor α (TNFα, b), in pg/ml, during 3 days, normalized to original sample ww. Values are means ± standard deviation, n = 6.
Table 1. Overview of the culture groups.
Acknowledgments This research forms part of the Project P2.01 IDiDAS of the research program of the BioMedical Materials institute, a Dutch public-private partnership. We would like to acknowledge Paul van Midwoud at InGell Pharma for celecoxib release measurements, Irene Arkesteijn at TU/e for the valuable contribution to the artificial annulus model, Renz van Ee and Klaas Timmer at TNO, and Detlef Schumann at DSM for supplying us with the biomaterials.
Disclosure Statement The academic partners did not receive direct funding from the commercial partners and only non-commercial authors interpreted the data.
33,247
[ "740372" ]
[ "169365" ]
01758625
en
[ "sdv" ]
2024/03/05 22:32:10
2007
https://hal.science/hal-01758625/file/Potier07_HypOsteo.pdf
E Potier E Ferreira R Andriamanalijaona J P Pujol K Oudina D Logeart-Avramoglou H Petite Hypoxia affects mesenchymal stromal cell osteogenic differentiation and angiogenic factor expression Keywords: Mesenchymal stromal cells, Hypoxia, Osteogenic differentiation, Angiogenic factor, Cell survival Mesenchymal stromal cells (MSCs) seeded onto biocompatible scaffolds have been proposed for repairing bone defects. When transplanted in vivo, MSCs (expanded in vitro in 21% O 2 ) undergo temporary oxygen deprivation due to the lack of pre-existing blood vessels within these scaffolds. In the present study, the effects of temporary (48-hour) exposure to hypoxia (1% O 2 ) on primary human MSC survival and osteogenic potential were investigated. Temporary exposure of MSCs to hypoxia had no effect on MSC survival, but resulted in (i) persistent (up to 14 days post exposure) down-regulation of cbfa-1/Runx2, osteocalcin and type I collagen and (ii) permanent (up to 28 days post exposure) up-regulation of osteopontin mRNA expressions. Since angiogenesis is known to contribute crucially to alleviating hypoxia, the effects of temporary hypoxia on angiogenic factor expression by MSCs were also assessed. Temporary hypoxia led to a 2-fold increase in VEGF expression at both the mRNA and protein levels. Other growth factors and cytokines secreted by MSCs under control conditions (namely bFGF, TGF1 and IL-8) were not affected by temporary exposure to hypoxia. All in all, these results indicate that temporary exposure of MSCs to hypoxia leads to limited stimulation of angiogenic factor secretion but to persistent down-regulation of several osteoblastic markers, which suggests that exposure of MSCs transplanted in vivo to hypoxia may affect their bone forming potential. These findings prompt for the development of appropriate cell culture or in vivo transplantation conditions preserving the full osteogenic potential of MSCs. Introduction Mesenchymal stromal cells (MSCs) loaded onto biocompatible scaffolds have been proposed for restoring function of lost or injured connective tissue, including bone [START_REF] Cancedda | Cell therapy for bone disease: a review of current status[END_REF][START_REF] Logeart-Avramoglou | Engineering bone: challenges and obstacles[END_REF][START_REF] Petite | Tissueengineered bone regeneration[END_REF]. Physiological oxygen tensions in bone are about 12.5% O2 [START_REF] Heppenstall | Tissue gas tensions and oxygen consumption in healing bone defects[END_REF] but fall to 1% O2 in fracture hematoma [START_REF] Brighton | Oxygen tension of healing fractures in the rabbit[END_REF][START_REF] Heppenstall | Tissue gas tensions and oxygen consumption in healing bone defects[END_REF]. In tissue engineering applications, implanted MSCs undergo temporary oxygen deprivation, which may be considered as similar to fracture hematoma (i.e., 1% O2) due to the disruption of the host vascular system (as the result of injury and/or surgery) and the lack of preexisting vascular networks within these scaffolds. These drastic conditions of transplantation can lead to the death or functional impairment of MSCs, which can affect their ultimate bone forming potential. 
The exact effects of hypoxia on osteoprogenitor or osteoblast-like cells have not been clearly established, however, as several studies demonstrated a negative impact on cell growth [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF] and differentiation [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF], whereas others have shown that hypoxia has positive effects on cell proliferation [START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF] and osteoblastic differentiation [START_REF] Warren | Hypoxia regulates osteoblast gene expression[END_REF]. These discrepancies may be due to the differences between the cell types (primary [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF][START_REF] Warren | Hypoxia regulates osteoblast gene expression[END_REF] and cell lines [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF]), species (rat [START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF][START_REF] Warren | Hypoxia regulates osteoblast gene expression[END_REF], human [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF] and mouse [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF]) and hypoxic conditions (from 0.02% to 5% O2) used. Since the success of bone reconstruction methods based on the use of engineered constructs depends on the maintenance of viable and functional MSCs, it is of particular interest to elucidate the effects of temporary hypoxia on primary human MSC survival and osteogenic potential. 
MSCs secrete a wide variety of angiogenic factors (including vascular endothelial growth factor (VEGF) [START_REF] Kinnaird | Local delivery of marrow-derived stromal cells augments collateral perfusion through paracrine mechanisms[END_REF], transforming growth factor 1 (TGF1) [START_REF] Han | Potential of human bone marrow stromal cells to accelerate wound healing in vitro[END_REF][START_REF] Sensebe | Cytokines active on granulomonopoiesis: release and consumption by human marrow myoid [corrected] stromal cells[END_REF], and basic fibroblast growth factor (bFGF) [START_REF] Han | Potential of human bone marrow stromal cells to accelerate wound healing in vitro[END_REF][START_REF] Kinnaird | Local delivery of marrow-derived stromal cells augments collateral perfusion through paracrine mechanisms[END_REF]) and may therefore modulate angiogenic processes and participate in the vascular invasion of engineered contructs. Since effective neo-vascularization is crucial for shortening the hypoxic episodes to which transplanted MSCs are exposed, it seemed to be worth investigating the stimulatory effects of hypoxia on angiogenic factor expression by MSCs. The aim of the present study was therefore to investigate the effects of temporary hypoxia on primary human MSC (hMSC) proliferation, osteogenic potential and angiogenic factor expression. In this study, O 2 tensions  4% are termed hypoxic conditions (as these conditions represent the hypoxia to which hMSCs transplanted in vivo are subjected) and 21% O 2 tensions are termed control conditions (as these conditions represent standard cell culture conditions). Cell viability was assessed after exposing hMSCs to hypoxic conditions during various periods of time. Osteogenic differentiation was assessed after temporary (48-hour) exposure of hMSCs to either control or hypoxic conditions followed by different periods of osteogenic cell culture. Expression of several angiogenic factors by hMSCs involved in new blood vessel formation (VEGF, bFGF and TGF) and maturation (platelet derived growth factor BB (PDGF-BB)) was assessed after temporary (48-hour) exposure of hMSCs to either control or hypoxic conditions. Materials and Methods Hypoxia Hypoxia was obtained using a sealed jar (Oxoid Ltd, Basingstoke, United Kingdom) containing an oxygen chelator (AnaeroGen, Oxoid Ltd) [START_REF] Grosfeld | Transcriptional effect of hypoxia on placental leptin[END_REF]. Twice a day, the pO 2 was measured diving an oxygen electrode directly into cell culture medium (pH: 7.2) and using an Oxylab pO 2 TM (Oxford Optronix; Oxford, United Kingdom). The hypoxic system was left closed throughout the period of experimentation. Cell culture Human mesenchymal stromal cells (hMSCs) were isolated from tibia bone marrow specimens obtained as discarded tissue during routine bone surgery (spinal fusion) in keeping with local regulations. Bone marrows were obtained from 3 donors (2 males and 1 female; 14-16 years old). hMSCs were isolated using a procedure previously described in the literature [START_REF] Friedenstein | The development of fibroblast colonies in monolayer cultures of guinea-pig bone marrow and spleen cells[END_REF][START_REF] Pittenger | Multilineage potential of adult human mesenchymal stem cells[END_REF]. Briefly, cells were harvested by gently flushing bone marrow samples with alpha Minimum Essential Medium (MEM, Sigma) containing 10% fetal bovine serum (FBS, PAA Laboratories) and 1% antibiotic and anti-mycotic solution (PAA Laboratories). 
When the hMSCs reached 60-70% confluence, they were detached and cryopreserved at P1 (90% FBS, 10% DMSO). For each experiment, a new batch of hMSCs was thawed and cultured. Cells from each donor were cultured separately. Human endothelial cells (EC, kindly provided by Dr Le Ricousse-Roussanne) were cultured in Medium 199 (Sigma) containing 20% FBS supplemented with 15 mM HEPES (Sigma) and 10 ng/ml rhVEGF165 (R&D Systems) [START_REF] Ricousse-Roussanne | Ex vivo differentiated endothelial and smooth muscle cells from human cord blood progenitors home to the angiogenic tumor vasculature[END_REF].

Multipotency of hMSCs
Induction of osteogenic differentiation. hMSCs (passage P7) were cultured in osteogenic medium consisting of αMEM containing 10% FBS, 10⁻⁷ M dexamethasone, 0.15 mM ascorbate-2-phosphate (Sigma), and 2 mM β-glycerophosphate (Sigma) [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After 10 and 20 days of culture, the cells were fixed in PBS containing 1% paraformaldehyde (PFA) and stained with an NBT/BCIP kit (Molecular Probes) to evaluate alkaline phosphatase (ALP) activity. Calcium deposition was assayed using the von Kossa staining method [START_REF] Bruder | Growth kinetics, self-renewal, and the osteogenic potential of purified human mesenchymal stem cells during extensive subcultivation and following cryopreservation[END_REF]. After 10 and 20 days of culture, mRNA extraction, cDNA synthesis and RT-PCR were performed as described in the "RT-PCR assays" section to assess the transcription levels of osteogenic markers (osteocalcin and osterix).
Induction of chondrogenic differentiation. hMSCs (passage P7; 2x10⁵ cells) suspended in 0.5 ml of chondrogenic medium were centrifuged for 2 min at 500 g. The chondrogenic medium used contained αMEM supplemented with 6.25 µg/ml insulin, 6.26 µg/ml transferrin (Sigma), 6.25 µg/ml selenious acid (Sigma), 5.35 µg/ml linoleic acid (Sigma), 1.25 µg/ml bovine serum albumin (Sigma), 1 mM pyruvate (Sigma), and 37.5 ng/ml ascorbate-2-phosphate [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After centrifugation, pellets of hMSCs were cultured in chondrogenic medium supplemented with 10 ng/ml TGFβ1 (R&D Systems) and 10⁻⁷ M dexamethasone [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After 20 and 30 days of cell culture, hMSC pellets were cryopreserved (-80°C) until immuno-histological analysis to detect the presence of human type II collagen. Human type II collagen protein was detected using a goat polyclonal IgG anti-human type II collagen antibody (200 µg/ml; Santa Cruz Biotechnology). Peroxidase-conjugated anti-goat IgG antibody (1:200; Vectastain ABC kit; Vector) was used as the secondary antibody. Peroxidase activity was monitored using a Vectastain ABC kit. Sections were counterstained using haematoxylin.
Induction of adipogenic differentiation. hMSCs (passage P7) were cultured in adipogenic medium consisting of αMEM containing 10% FBS, 5 µg/ml insulin (Boehringer Mannheim), 10⁻⁷ M dexamethasone (Sigma), 0.5 mM isobutylmethylxanthine (Sigma), and 60 µM indomethacin (Sigma) [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After 10 and 20 days of culture, the cells were fixed in PBS containing 1% PFA (Sigma) and stained with Oil Red O (Sigma) [START_REF] Diascro | High fatty acid content in rabbit serum is responsible for the differentiation of osteoblasts into adipocyte-like cells[END_REF].
After 10 and 20 days of cell culture, mRNA extraction, cDNA synthesis and RT-PCR were performed as described in the "RT-PCR assays" section to assess the transcription levels of adipogenic markers (fatty acid binding protein 4 (aP2) and peroxisome proliferator-activated receptor γ (PPARγ)).

Cell death assays
hMSCs (passage P5) were plated at 5,000 cells/cm² and allowed to adhere overnight. Cells were subsequently exposed to hypoxic conditions (without medium change) for different periods of time. Cell death was assessed by image analysis (Leica Qwin software) after staining with the Live/Dead viability/cytotoxicity kit (Molecular Probes).

hMSC osteogenic differentiation after exposure to temporary hypoxia
hMSCs (passage P5) were plated at 5,000 cells/cm² and allowed to adhere overnight. After exposure of hMSCs to either hypoxic or control conditions for 48 hours, the cell culture supernatant medium was replaced by osteogenic medium and hMSCs were cultured under control conditions for 0, 14 and 28 days. mRNA extraction, cDNA synthesis and RT-PCR were then performed as described in the "RT-PCR assays" section to assess the transcription levels of osteogenic markers (osteocalcin, ALP, type I collagen, osteopontin, bone sialoprotein (BSP), core binding factor alpha sub-unit 1 (cbfa-1/Runx2) and bone morphogenetic protein-2 (BMP-2)).

RT-PCR assays
Cytoplasmic mRNA was extracted from cell layers using an RNeasy mini kit (Qiagen) and digested with RNase-free DNase (Qiagen) in line with the manufacturer's instructions. cDNA synthesis was performed using a Thermoscript kit (Invitrogen) and oligo(dT) primers (50 µM). PCRs were performed on an iCycler using a Multiplex PCR kit (Qiagen) with 15 ng of cDNA and 0.2 µM of each of the primers (for primer sequences see Table 1, supplemental data). After a 10-min denaturation step at 95°C, cDNA was amplified in PCR cycles consisting of a three-step PCR: a 30-sec denaturation step at 95°C, a 90-sec annealing step at 60°C, and a 90-sec elongation step at 72°C. An additional 10-min elongation cycle was conducted at 72°C. PCR products were analyzed by agarose gel electrophoresis and ethidium bromide staining. In each PCR, ribosomal protein L13a (RPL13a) was used as the endogenous reference gene (for primer sequences see Table 1). RPL13a was chosen among the 5 housekeeping genes tested (RPL13a, actin, glyceraldehyde-3-phosphate dehydrogenase, 18S ribosomal RNA, and hypoxanthine phosphoribosyltransferase 1) as the most "stable" housekeeping gene in hMSCs exposed to hypoxic conditions. cDNA from ECs was used as the positive control in the angiogenic growth factor mRNA expression assays. Semi-quantitation of the PCR products was performed using Quantity One software (BioRad). Expression of target genes was normalized to the respective RPL13a expression levels.

Real Time PCR assays
mRNA extraction and reverse transcription were conducted as described in the "RT-PCR assays" section. Real Time PCR assays were performed on the ABI Prism 7000 SDS (Applied Biosystems) using the SYBR Green Mastermix Plus (Eurogentec) with 1.5 ng of cDNA (1/50 diluted) and 400-600 nM of each of the primers (for primer sequences see Table 2, supplemental data). After a 10-min denaturation step at 95°C, cDNA was amplified by performing two-step PCR cycles: a 15-sec step at 95°C, followed by a 1-min step at 60°C.
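Relative expression in these assays rests on the comparative CT calculation referred to below; as a worked illustration (the CT values here are hypothetical and chosen only to show the arithmetic, not measured data), the fold change of a target gene under hypoxia versus control conditions is obtained by normalising each condition to the RPL13a reference gene:

```latex
\[
\Delta C_T^{\text{hyp}}  = C_{T,\text{target}}^{\text{hyp}}  - C_{T,\text{RPL13a}}^{\text{hyp}}, \qquad
\Delta C_T^{\text{ctrl}} = C_{T,\text{target}}^{\text{ctrl}} - C_{T,\text{RPL13a}}^{\text{ctrl}},
\]
\[
\Delta\Delta C_T = \Delta C_T^{\text{hyp}} - \Delta C_T^{\text{ctrl}}, \qquad
\text{fold change} = 2^{-\Delta\Delta C_T}.
\]
% Hypothetical example: 24.0 - 18.0 = 6.0 (hypoxia), 25.0 - 18.0 = 7.0 (control),
% so 2^{-(6.0 - 7.0)} = 2, i.e. a 2-fold up-regulation under hypoxia.
```

This calculation is only valid when the amplification efficiencies of the target and reference genes are close to 100%, which is why the dilution series described below is used to verify efficiency before applying the formula.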
In each Real Time PCR assay, one of the cDNAs used was diluted (1/2; 1/4; 1/8) in order to establish a standard curve and define the exact number of cycles corresponding to 100% efficiency of polymerization. Reactions were performed in triplicate and expression of target genes was normalized to the respective RPL13a expression levels. Relative quantities of cDNA were calculated from the number of cycles corresponding to 100% efficiency of polymerization, using the 2^-ΔΔCT method [START_REF] Livak | Analysis of relative gene expression data using real-time quantitative PCR and the 2(-Delta Delta C(T)) Method[END_REF].

ELISA assays
After exposing hMSCs to either hypoxic or control conditions for 48 hours, the supernatant media were collected, centrifuged at 13,000 g at 4°C for 10 min, collected, and kept at -80°C until ELISA assays were performed. VEGF, bFGF, and interleukin-8 (IL-8) expression was assayed using ELISA kits from R&D Systems (Quantikine) in line with the manufacturer's instructions. TGFβ1 expression was assayed using an ELISA assay developed at our laboratory [START_REF] Maire | Retention of transforming growth factor beta1 using functionalized dextran-based hydrogels[END_REF], after activating TGFβ1 by acidifying the cell culture supernatant media [START_REF] Van Waarde | Quantification of transforming growth factor-beta in biological material using cells transfected with a plasminogen activator inhibitor-1 promoter-luciferase construct[END_REF].

Angiogenesis antibody array assays
The levels of expression of 20 growth factors and cytokines were determined using the RayBio® human angiogenesis antibody array (Ray Biotech, Inc, Norcross, GA, USA). After exposing hMSCs to either hypoxic or control conditions for 48 hours, the supernatant media were collected and stored as described in the "ELISA assays" section. Protein-antibody complexes were revealed by chemoluminescence in line with the manufacturer's instructions and the results were photographed on Xomat AM film (Kodak). The following growth factors and cytokines were detected by the RayBio® angiogenesis antibody arrays: angiogenin, RANTES, leptin, thrombopoietin, epidermal growth factor, epithelial neutrophil-activating protein 78, bFGF, growth regulated oncogene, interferon γ, VEGF, VEGF-D, insulin-like growth factor-1, interleukin 6, interleukin 8, monocyte chemoattractant protein 1 (MCP-1), PDGF, placenta growth factor, TGFβ1, tissue inhibitor of metalloproteinases 1 (TIMP-1), and tissue inhibitor of metalloproteinases 2 (TIMP-2).

Statistical analysis
Data are expressed as means ± standard deviations. Statistical analysis was performed using an ANOVA with a Fisher post hoc test. The results were taken to be significant at a probability level of P < 0.05.

Results

Multipotency of hMSCs
In order to determine the multipotency of the human mesenchymal stromal cells (hMSCs) used in this study, hMSCs were cultured in either osteogenic, chondrogenic, or adipogenic differentiation medium. Culture of hMSCs in osteogenic medium for 10 and 20 days increased the levels of alkaline phosphatase (ALP) activity (Fig. 1A). Osteogenic differentiation of hMSCs was confirmed by the expression of the osteogenic differentiation markers osterix and osteocalcin (Fig. 1A). Culture of hMSCs in chondrogenic medium for 30 days resulted in the expression of type II collagen (a marker of chondrogenic differentiation) in the cell cytoplasm and extracellular matrix (Fig. 1B).
Control sections incubated with secondary antibody alone showed negative staining patterns (Fig. 1B). Culture of hMSCs in adipogenic medium for 20 days resulted in the development of several clusters of adipocytes containing intracellular lipid vacuoles, which stained positive with Oil Red O (Fig. 1C). Expression of fatty acid binding protein 4 (aP2) and peroxisome proliferator-activated receptor γ (PPARγ) (markers of adipogenic differentiation) by hMSCs (Fig. 1C) confirmed the ability of these cells to differentiate along the adipogenic lineage. All these results confirm that the hMSCs used in this study are multipotent cells, since they are capable of differentiating along the osteogenic, adipogenic and chondrogenic lineages, as previously demonstrated by numerous studies (for review: [START_REF] Barry | Mesenchymal stem cells: clinical applications and biological characterization[END_REF][START_REF] Jorgensen | Tissue engineering through autologous mesenchymal stem cells[END_REF][START_REF] Krampera | and Franchini Mesenchymal stem cells for bone, cartilage, tendon and skeletal muscle repair[END_REF][START_REF] Prockop | Marrow stromal cells as stem cells for nonhematopoietic tissues[END_REF]). However, even when hMSCs were committed to the osteoblastic lineage, the extracellular matrix did not mineralize after 30 days of cell culture in osteogenic medium. These results suggest that the culture conditions used in this study were suboptimal to preserve the full biological function of hMSCs.

Hypoxic model
In order to check the validity of the hypoxia model used in this study, pO2 levels were monitored in the sealed jar for 5 days, without exposure to atmospheric oxygen. Moderate hypoxic conditions (pO2 = 4% O2) were reached within 24 hours, and severe hypoxic conditions (pO2 < 1% O2) after 48 hours. The pO2 levels in the cell culture medium gradually decreased, reaching a plateau corresponding to values of around 0.25% O2 after 72 hours (Fig. 2).

Effects of prolonged hypoxia on hMSC survival
To investigate the effects of hypoxia on cell survival, hMSCs were exposed to hypoxic conditions for 48, 72 and 120 hours. Exposure of hMSCs to prolonged (120 hours) hypoxic conditions resulted in limited rates of cell death (Fig. 3; 35.5 ± 18.5%), whereas temporary hypoxia did not affect hMSC survival.

Effects of temporary hypoxia on the osteogenic potential of hMSCs
Having established that temporary hypoxia has no effect on hMSC survival, its effects on hMSC osteogenic potential were assessed. After 48-hour exposure to hypoxic or control conditions, hMSCs were transferred to osteogenic medium and osteogenic differentiation was assessed by performing RT-PCR assays to detect the expression of several osteogenic markers. The levels of cbfa-1/Runx2, osteocalcin and type I collagen expression were checked by performing quantitative real-time PCR assays. Similar levels of ALP, bone morphogenetic protein 2 (BMP2) and bone sialoprotein (BSP) expression were observed in hMSCs exposed to either hypoxic or control conditions at all osteogenic culture times tested (Fig. 4). Osteopontin expression increased after exposure of hMSCs to hypoxic conditions at all osteogenic culture times tested (0 days: 2.6-fold; 14 days: 12-fold; 28 days: 8-fold) (Fig. 4).
The levels of expression of cbfa-1/Runx2 and osteocalcin were slightly down-regulated after 0 and 14 days of osteogenic culture by temporary exposure to hypoxic conditions (0.5-fold with cbfa-1/Runx2; 0.7-fold with osteocalcin), as assessed by quantitative real time PCR assays (Fig. 5). After 28 days of osteogenic culture, however, the levels of cbfa-1/Runx2 and osteocalcin expressed by hMSCs exposed to hypoxic conditions were similar to those of hMSCs exposed to control conditions. Type I collagen expression was permanently down-regulated after 48-hour exposure of hMSCs to hypoxic conditions (approximately 0.4-fold at all the osteogenic culture times tested), but this decrease was statistically significant only on days 0 and 28 of osteogenic culture (Fig. 5).

Effects of temporary hypoxia on the mRNA expression of angiogenic factors by hMSCs
The effects of temporary hypoxia on angiogenic factor expression by hMSCs were investigated. mRNA expression of angiogenic factors was assessed by performing RT-PCR assays after exposing hMSCs to either hypoxic or control conditions for 48 hours. Expression levels of key angiogenic factors (namely vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), and transforming growth factors β1, β2 and β3 (TGFβ1, TGFβ2, and TGFβ3)) and those of VEGF receptor 1 and receptor 2 were studied. No expression of PDGF-BB, VEGF receptor 1 or VEGF receptor 2 was detected under any of the conditions tested with hMSCs. However, the RT-PCR conditions used were suitable for the detection of PDGF-BB, VEGF receptor 1 and VEGF receptor 2, as these factors were detected with endothelial cells (EC) (data not shown). Similar levels of TGFβ1 and TGFβ2 expression were detected after exposing hMSCs to either hypoxic or control conditions for 48 hours (Fig. 6). The levels of TGFβ3 expression decreased after exposure to hypoxic conditions for 48 hours (TGFβ3/RPL13a ratio: 0.21 ± 0.05), in comparison with TGFβ3 expression obtained under control conditions (0.89 ± 0.8) (Fig. 6). Conversely, expression levels of bFGF and VEGF increased when hMSCs were exposed to hypoxic conditions for 48 hours (bFGF/RPL13a ratio: 0.71 ± 0.13; VEGF/RPL13a ratio: 1.51 ± 0.05), in comparison with the results obtained under control conditions (0.14 ± 0.01 and 0.25 ± 0.24, respectively) (Fig. 6).

Effects of temporary hypoxia on the protein secretion levels of three major regulators of angiogenesis by hMSCs
Since the secretion of angiogenic factors is required to induce angiogenesis, the protein secretion levels of three major regulators of angiogenesis (namely VEGF, TGFβ1, and bFGF, which were previously detected at the mRNA level) were assessed by performing ELISA assays after exposing hMSCs to either hypoxic or control conditions for 48 hours. To measure the TGFβ1 content of the cell culture supernatant media, acid activation of the samples was required. Without this activation, no TGFβ1 secretion was detectable (data not shown). TGFβ1 secretion by hMSCs exposed to hypoxic conditions (270 ± 70 pg/ml) was down-regulated in comparison with TGFβ1 secretion obtained under control conditions (570 ± 270 pg/ml), but this decrease did not reach statistical significance (Fig. 7A). bFGF secretion decreased, but not significantly, in response to exposure of hMSCs to hypoxic conditions (0.4 ± 0.3 pg/ml) in comparison with control conditions (1.2 ± 0.5 pg/ml) (Fig. 7B). Even under control conditions, however, hMSCs were found to secrete only small quantities of bFGF.
Contrary to what occurred with TGFβ1 and bFGF, VEGF secretion by hMSCs exposed to hypoxic conditions (1640 ± 260 pg/ml) increased 2-fold in comparison with the results obtained under control conditions (880 ± 100 pg/ml) (Fig. 7C). Neither TGFβ1, bFGF nor VEGF was detected in control medium alone (data not shown).

Effects of temporary hypoxia on the protein secretion of various growth factors and cytokines by hMSCs
To further investigate the effects of temporary and moderate hypoxia on hMSCs, the secretion levels of various growth factors and cytokines involved in angiogenic processes were monitored using angiogenesis antibody arrays after exposing hMSCs to either hypoxic or control conditions for 48 hours. Any changes in the growth factor and cytokine secretion levels were checked by performing conventional ELISA assays. Similar levels of secretion of interleukin-6 (IL-6), monocyte chemoattractant protein-1 (MCP-1), and tissue inhibitors of metalloproteinases 1 and 2 (TIMP-1 and TIMP-2) were observed in hMSCs, whether they were exposed to hypoxic or control conditions. Interleukin-8 (IL-8) secretion was up-regulated in two out of the three donors tested by exposing hMSCs to hypoxic conditions. These results were confirmed by the results of ELISA assays, which showed that IL-8 secretion by hMSCs exposed to hypoxic conditions increased (780 ± 390 pg/ml) in comparison with what occurred under control conditions (440 ± 230 pg/ml). This up-regulation was not statistically significant, however, due to the great variability existing between donors. The other growth factors and cytokines tested using angiogenesis antibody arrays were not detected in hMSCs exposed to control or hypoxic conditions (data not shown). Neither cytokines nor growth factors were detected by angiogenesis antibody arrays incubated in control medium alone (data not shown).

Discussion
The first step in the present study consisted in evaluating the effects of reduced oxygen tensions on hMSC survival. Our results showed that 120-hour exposure to hypoxia resulted in increased cell death rates, whereas 48- or 72-hour exposure did not; these cell death rates may have been underestimated, however, as the method used in the present study did not take floating dead cells into account. The mechanisms underlying hMSC death upon oxygen deprivation are unclear at present. A previous study conducted on rat MSCs, however, offers some clues, as it reported the induction of caspase-dependent apoptosis under brief (24 hours) oxygen and serum deprivation (nuclear shrinkage, chromatin condensation, decrease in cell size, and loss of membrane integrity) [START_REF] Zhu | Hypoxia and serum deprivation-induced apoptosis in mesenchymal stem cells[END_REF]. hMSC viability does not seem to be affected by short-term (<72-hour) hypoxia, which is in agreement with previously published data [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Utting | Hypoxia inhibits the growth, differentiation and bone-forming capacity of rat osteoblasts[END_REF]. Grayson et al. reported that long-term culture of hMSCs under hypoxic conditions (2% O2) resulted in decreased cell proliferation but not in increased apoptosis after 9, 16 or 24 days of cell culture [START_REF] Grayson | Effects of hypoxia on human mesenchymal stem cell expansion and plasticity in 3D constructs[END_REF].
These findings, combined with our own, suggest that hypoxia leads only to moderate cell death and that the surviving hMSCs are still able to proliferate. The ultimate bone-forming ability of engineered constructs relies, however, on the survival of "functional" hMSCs. The second step in the present study was therefore to assess the effects of temporary hypoxia on hMSC osteogenic potential by drawing up transcriptional profiles of osteoblast membrane and extracellular matrix molecules (ALP, osteocalcin, osteopontin and type I collagen), of a growth factor stimulating osteoblast differentiation (BMP2) and of a transcription factor regulating bone formation (cbfa-1/Runx2). Our results show that a slight down-regulation of cbfa-1/Runx2 expression occurs after temporary exposure to hypoxia, persisting for 14 days after the end of the hypoxic episode. The cbfa-1/Runx2 transcription factor plays an essential role in controlling osteoblastic differentiation (for a review: [START_REF] Ducy | Cbfa1: a molecular switch in osteoblast biology[END_REF][START_REF] Komori | Regulation of skeletal development by the Runx family of transcription factors[END_REF]) and its inhibition is associated with a large decrease in the rate of bone formation [START_REF] Ducy | A Cbfa1-dependent genetic pathway controls bone formation beyond embryonic development[END_REF]. Similar long-lasting inhibition of osteocalcin, a late osteogenic differentiation marker, confirmed the inhibition of osteoblastic maturation of hMSCs resulting from temporary exposure to hypoxia. Likewise, type I collagen expression was durably and strongly inhibited by temporary exposure to hypoxia. Type I collagen is the main component of the bone matrix and plays a central role in the mineralization process. The long-term inhibition of cbfa-1/Runx2, osteocalcin and type I collagen expression strongly suggests that temporary exposure to hypoxia may inhibit the osteoblastic differentiation of hMSCs. Studies conducted on other cell types (human [START_REF] Matsuda | Proliferation and Differentiation of Human Osteoblastic Cells Associated with Differential Activation of MAP Kinases in Response to Epidermal Growth Factor, Hypoxia, and Mechanical Stress in Vitro[END_REF][START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF] and rat (48) osteoblasts) report that their osteogenic differentiation is impaired by temporary exposure to hypoxia (decreased ALP activity and decreased type I collagen, osteocalcin and cbfa-1/Runx2 expression). Conversely, Salim et al. reported that exposure of hMSCs to hypoxic (2% O2) conditions did not affect their terminal differentiation [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF]. The discrepancies observed between this study and our results may be explained by the different times of exposure to hypoxic conditions (24 hours and 48 hours, respectively), suggesting that hMSCs are able to face hypoxia for a short period of time (< 48 hours) without losing their osteogenic potential.
Surprisingly, neither the expression of BSP, which is regulated by cbfa-1/Runx2 at both the mRNA [START_REF] Ducy | A Cbfa1-dependent genetic pathway controls bone formation beyond embryonic development[END_REF] and protein levels [START_REF] Hoshi | Morphological characterization of skeletal cells in Cbfa1-deficient mice[END_REF], nor that of ALP, the enzymatic activity of which has previously been reported to be down-regulated under hypoxic conditions [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF][START_REF] Utting | Hypoxia inhibits the growth, differentiation and bone-forming capacity of rat osteoblasts[END_REF], was found here to be affected by temporary exposure to hypoxia. In the case of BSP expression, the down-regulation of cbfa-1/Runx2 observed in the present study may be too weak to significantly inhibit BSP expression. Moreover, Park et al. [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF] have reported that the inhibitory effect of hypoxia on the osteoblastic differentiation of a human osteosarcoma cell line is time-dependent: the longer the hypoxic exposure time, the greater the down-regulation of osteoblastic marker expression. These results suggest that exposure times longer than that used in the present study (48 hours) may nonetheless induce a down-regulation of the mRNA expression of BSP or ALP. Osteopontin expression by hMSCs was, on the contrary, permanently increased by temporary exposure to hypoxia. Up-regulation of osteopontin induced by hypoxia has previously been observed in many other cell types, including mouse osteocytes [START_REF] Gross | Upregulation of osteopontin by osteocytes deprived of mechanical loading or oxygen[END_REF], rat aortic vascular smooth muscle cells [START_REF] Sodhi | Hypoxia stimulates osteopontin expression and proliferation of cultured vascular smooth muscle cells: potentiation by high glucose[END_REF], and human renal proximal tubular epithelial cells [START_REF] Hampel | Osteopontin traffic in hypoxic renal epithelial cells[END_REF]. In bone, osteopontin mediates the attachment of several cell types, including osteoblasts, endothelial cells and osteoclasts (for a review: [START_REF] Denhardt | Role of osteopontin in cellular signaling and toxicant injury[END_REF]). This molecule plays an important role in bone remodelling and osteoclast recruitment processes, as its absence (in knock-out mice) led to impaired bone loss after ovariectomy [START_REF] Yoshitake | Osteopontin-deficient mice are resistant to ovariectomy-induced bone resorption[END_REF] and decreased resorption of subcutaneously implanted bone discs [START_REF] Asou | Osteopontin facilitates angiogenesis, accumulation of osteoclasts, and resorption in ectopic bone[END_REF]. As far as the effects of its up-regulation are concerned, however, the results of previous studies are confusing, as positive effects on rat osteoblast maturation [START_REF] Kojima | In vitro and in vivo effects of the overexpression of osteopontin on osteoblast differentiation using a recombinant adenoviral vector[END_REF] as well as negative effects on the osteoblastic differentiation of the MC3T3 cell line (24) have been reported. But the most striking property of osteopontin may be its ability to promote macrophage infiltration (for a review: (9)).
Increased osteopontin expression by transplanted hMSCs may therefore culminate in attracting macrophages to the bone defect site and exacerbating the inflammatory process. The exact effects of increased osteopontin expression on bone formation by hMSCs, i.e., whether it stimulates bone formation processes or attracts osteoclasts and macrophages to the bone defect site, still remain to be determined. Angiogenesis, a process crucial for the oxygen supply to cells, is modulated by several proangiogenic factors (for a review: (7, 47)), the expression of which is stimulated by HIF-1 (hypoxia-inducible factor 1), a transcription factor activated by hypoxia (for a review see: [START_REF] Pugh | Regulation of angiogenesis by hypoxia: role of the HIF system[END_REF][START_REF] Wenger | Cellular adaptation to hypoxia: O2-sensing protein hydroxylases, hypoxia-inducible transcription factors, and O2regulated gene expression[END_REF]). The third step in the present study was therefore to assess the effects of temporary exposure to hypoxia on angiogenic factor expression by hMSCs. Our results showed that a 2-fold up-regulation of VEGF expression by hMSCs occurs under hypoxic conditions at both the mRNA and protein levels. These findings are in agreement with previous reports that hypoxia increases VEGF expression in the MC3T3 cell line [START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF]. The expression of the other growth factors and cytokines studied here, although regulated at the mRNA level, was not affected at the protein level by temporary exposure to hypoxia. bFGF expression, indeed, was up-regulated by exposure to hypoxia at the mRNA but not at the protein level. The discrepancies between mRNA and protein may be explained by a shorter half-life of bFGF, a lower translation efficiency or the absence of post-translational modification under hypoxia. Moreover, several studies comparing genomic and proteomic analyses report moderate or no correlation between RNA and protein expression [START_REF] Chen | Discordant Protein and mRNA Expression in Lung Adenocarcinomas[END_REF][START_REF] Huber | Comparison of Proteomic and Genomic Analyses of the Human Breast Cancer Cell Line T47D and the Antiestrogen-resistant Derivative T47D_r*[END_REF]. Even so, MSCs are able to durably enhance (for up to 28 days) tissue reperfusion when transplanted into ischemic myocardium [START_REF] Fazel | Cell transplantation preserves cardiac function after infarction by infarct stabilization: augmentation by stem cell factor[END_REF][START_REF] Shyu | Mesenchymal stem cells are superior to angiogenic growth factor genes for improving myocardial performance in the mouse model of acute myocardial infarction[END_REF]. Stimulation of VEGF alone does not suffice, however, to trigger the formation of functional vascular networks, as attempts to accelerate vascularization by over-expressing VEGF (using a genetic system) resulted in the formation of immature, leaky blood vessels in mice [START_REF] Ash | Lens-specific VEGF-A expression induces angioblast migration and proliferation and stimulates angiogenic remodeling[END_REF][START_REF] Dor | Conditional switching of VEGF provides new insights into adult neovascularization and pro-angiogenic therapy[END_REF][START_REF] Ozawa | Microenvironmental VEGF concentration, not total dose, determines a threshold between normal and aberrant angiogenesis[END_REF].
These findings suggest either that the secretion levels of multiple angiogenic factors by MSCs, even if they are not up-regulated by hypoxia, suffice to promote vascular invasion of ischemic tissues; that MSCs secrete other growth factors and cytokines involved in angiogenesis, the expression levels of which have not been studied here; or that MSCs may indirectly promote angiogenesis in vivo by stimulating the secretion of angiogenic factors by other cell types. The present study shows that exposure of primary hMSCs to temporary hypoxia results in persistent down-regulation of cbfa-1/Runx2, osteocalcin and type I collagen levels, but in up-regulation of osteopontin expression, which may therefore limit the in vivo bone-forming potential of hMSCs. This study, however, only addressed the effects of a transient 48-hour exposure to hypoxia with osteogenic differentiation conducted under hyperoxic conditions (21% O2). When transplanted in vivo, MSCs undergo temporary oxygen deprivation but will never return to hyperoxic conditions, as the maximum oxygen tensions reported either in blood [START_REF] Heppenstall | Tissue gas tensions and oxygen consumption in healing bone defects[END_REF] or in diaphyseal bone (4) do not exceed 12.5% O2. One may then expect more deleterious effects on hMSC osteoblastic differentiation when cells are transplanted in vivo than when they are exposed to 48-hour hypoxia in vitro. It may therefore be of great interest to determine which in vitro hMSC culture conditions are most appropriate for preserving their osteogenic potential after in vivo implantation.

Table 1. Primer sequences for target and housekeeping genes used in RT-PCR assays. * The accession number is the GenBank™ accession number. GAPDH: glyceraldehyde-3-phosphate dehydrogenase; bFGF: basic fibroblast growth factor; PDGF-BB: platelet derived growth factor-BB; VEGF-R1: VEGF receptor 1 (Flt-1); VEGF-R2: VEGF receptor 2 (KDR); Type I coll.: type I collagen; cbfa-1/Runx2: core binding factor alpha 1 subunit/Runx2; BMP2: bone morphogenetic protein 2; BSP: bone sialoprotein; ALP: alkaline phosphatase; RPL13a: ribosomal protein L13a.

Figure 1. Multipotency of hMSCs. A. Induction of osteogenic differentiation. After 10 and 20 days of cell culture in osteogenic medium, osteogenic differentiation was assessed by determining the ALP activity and by performing RT-PCR analysis of osterix and osteocalcin expression. (n=1 donor). H2O was used as the negative control for RT-PCR. B. Induction of chondrogenic differentiation. After 20 and 30 days of cell culture in chondrogenic medium, chondrogenic differentiation was assessed by performing immuno-histological analysis of human type II collagen expression. Sections were counter-stained using haematoxylin. Incubation with secondary antibody alone was used as the negative control. Scale bar = 10 µm. (n=1 donor). C. Induction of adipogenic differentiation. After 10 and 20 days of cell culture in adipogenic medium, adipogenic differentiation of hMSCs was assessed by Oil Red O staining, and the levels of aP2 and PPARγ expression were determined by performing RT-PCR analysis. H2O was used as the negative control for RT-PCR. Scale bar = 50 µm. (n=1 donor).
Figure 2. pO2 levels with time in the hypoxic system. Cell culture medium was placed in a sealed jar containing an oxygen chelator. Twice a day and during 5 days, pO2 levels were measured with a pO2 oxygen sensor without opening the hypoxic system. Values are means ± SD; in triplicate.

Figure 3. hMSC death rate under hypoxic conditions. hMSCs were exposed to hypoxic conditions for 48, 72 and 120 hours. Cell death rates were assessed by Live/Dead staining followed by image analysis. Values are means ± SD; n=3 donors.

Figure 4. Effects of temporary hypoxia on the osteogenic potential of hMSCs. hMSCs were exposed to either control (21% O2) or hypoxic (1% O2) conditions for 48 hours. After exposure, the media were replaced by osteogenic medium and hMSCs were cultured in control conditions for 0, 14 and 28 days. At the end of these time periods, osteoblastic differentiation was evaluated by performing RT-PCR analysis on osteoblastic markers. RPL13a was used as the endogenous reference gene. Results presented here were obtained on one donor representative of the three donors studied.

Figure 5. Effects of temporary hypoxia on the cbfa-1/Runx2, osteocalcin and type I collagen expression by hMSCs. hMSCs were exposed to either control (21% O2; white bars) or hypoxic (1% O2; grey bars) conditions for 48 hours. After exposure, the media were replaced by osteogenic medium and hMSCs were cultured in control conditions for 0, 14 and 28 days. At the end of these time periods, mRNA expression levels of cbfa-1/Runx2, osteocalcin and type I collagen were determined by performing Real-Time PCR. RPL13a was used as the endogenous reference gene. Values are means ± SD; n=3 donors; the assays performed on each donor were carried out in triplicate.

Figure 6. Effects of temporary hypoxia on the mRNA expression of angiogenic factors by hMSCs. hMSCs were exposed to either control (21% O2; white bars) or hypoxic (1% O2; grey bars) conditions for 48 hours. Expression levels of TGFβ1, TGFβ2, TGFβ3, bFGF, and VEGF were normalized using the respective expression levels of RPL13a. Values are means ± SD; n=3 donors.

Figure 7. Effects of temporary hypoxia on the protein secretion of three major regulators of angiogenesis by hMSCs. hMSCs were exposed to either control (21% O2; white bars) or hypoxic (1% O2; grey bars) conditions for 48 hours. The secretion levels of TGFβ1 (A), bFGF (B) and VEGF (C) were then determined using ELISA assays. Values are means ± SD; n=3 donors.

Table 2. Primer sequences for target and housekeeping genes used in real time PCR assays. * The accession number is the GenBank™ accession number. cbfa-1/Runx2: core binding factor alpha 1 subunit/Runx2; Type I coll.: type I collagen; RPL13a: ribosomal protein L13a.

Acknowledgments
We thank Dr. Michele Guerre-Millo for providing the sealed jar for hypoxic cell culture conditions, Dr. Sylviane Dennler and Dr. Alain Mauviel for their expert assistance with the RT-PCR assays, and Dr. Sophie Le Ricousse-Roussanne for providing endothelial cells. We would also like to express special thanks to Professor Christophe Glorion and Dr. Jean-Sebastien Sylvestre for their help.

Disclosure Statement
The authors declare no competing financial interests.
46,883
[ "740372", "15701", "906590" ]
[ "169365", "1330", "455934", "7127", "169365", "169365", "169365" ]
01758661
en
[ "spi", "sde" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758661/file/I2M_RCR_2017_GRIMAUD.pdf
Guilhem Grimaud email: [email protected] Nicolas Perry Bertrand Laratte

Aluminium cables recycling process: Environmental impacts identification and reduction

The life cycle impacts of generic European primary and secondary aluminium are well defined. However, specific end-of-life scenarios for aluminium products are not available in the literature. In this study, the environmental assessment of a cable recycling pathway is examined using the Life Cycle Assessment (LCA) methodology. The data come from a recycling plant (MTB Recycling) in France. The MTB Recycling process relies only on mechanical separation and optical sorting steps applied to shredded cables to obtain high-purity aluminium (above 99.6%). The life cycle assessment results confirm the huge environmental benefits of recycled aluminium in comparison with primary aluminium. In addition, our study demonstrates the gains of product-centric recycling pathways for cables. Mechanical separation is a relevant alternative to metal smelting recycling. This work was done firstly to document the specific environmental impacts of the MTB Recycling processes in comparison with traditional aluminium recycling by smelting, and secondly to provide an environmental overview of the process steps in order to reduce the environmental impact of this recycling pathway. The environmental hotspots identified from the LCA of the MTB recycling pathway help designers to carry on reducing the environmental impact.

Introduction

General context
The European demand for aluminium has been growing over the past few decades at a rate of 2.4% per annum [START_REF] Bertram | Aluminium Recycling in Europe[END_REF]. The aluminium mineable reserves are large, but finite; an average value for the ultimately recoverable reserve is about 20-25 billion tons of aluminium. Nowadays, aluminium production is about 50 million tons per year [START_REF] Sverdrup | Aluminium for the future: modelling the global production, market supply, demand, price and long term development of the global reserves[END_REF]. The increase in aluminium demand in Europe is mainly supported by the rise of recycling, whose growth over the same period was about 5% per annum [START_REF] Bertram | Aluminium Recycling in Europe[END_REF][START_REF] Blomberg | The economics of secondary aluminium supply: an econometric analysis based on European data[END_REF]. The abundance and versatility of aluminium in various applications have made it one of the top solutions for lightweight metal strategies in various industries such as automotive [START_REF] Liu | Addressing sustainability in the aluminum industry: a critical review of life cycle assessments[END_REF]. In the cable industry, substituting aluminium for copper can considerably reduce the linear weight without degrading the electrical properties too much [START_REF] Bruzek | Eco-Friendly Innovation in Electricity Transmission and Distribution Networks[END_REF]. To obtain optimal electrical conductivity, the aluminium used for cables has a purity above 99.7% [START_REF] Goodwin | Metals[END_REF]. Because secondary aluminium does not meet the quality requirements of aluminium cable manufacturers, only primary aluminium is used in the aluminium cable supply chain. Nevertheless, improvements in recycling, through new sorting technologies, could help reach these quality targets. However, in most cases aluminium parts are mixed together at the end-of-life step without considering their provenance and use.
As a result, the seven series of aluminium are mixed together in waste treatment plants. Not all aluminium series have the same purity, and alloying elements pollute the aluminium. When aluminium series are mixed together, the cost-effective refining solution uses furnaces. Once the metal is molten, separation is done using differences in density and buoyancy (decantation methods, centrifugation, filtration, flotation, etc.) [START_REF] Rombach | Future potential and limits of aluminium recycling[END_REF]. Despite technology optimisations, some alloying elements are lost in the process [START_REF] Paraskevas | Sustainable metal management and recycling loops: life cycle assessment for aluminium recycling strategies[END_REF] and a fraction of the metal is not recycled [START_REF] Ohno | Toward the efficient recycling of alloying elements from end of life vehicle steel scrap[END_REF]. This leads to a drop in metal quality which is akin to down-cycling [START_REF] Allwood | Squaring the circular economy: the role of recycling within a hierarchy of material management strategies[END_REF]. By mixing all the aluminium waste streams, it becomes very difficult to maintain a high level of purity for the recycled aluminium. Streams of material available for recycling become increasingly impure as they move further along the materials processing chain, and therefore refining the stream for future high-quality use becomes more difficult. In particular, recycling materials from mixed-material products discarded in mixed waste streams is most difficult [START_REF] Allwood | Material efficiency: a white paper[END_REF]. To make a step toward the circular economy, it is essential to achieve an effective recycling industry [START_REF] Sauvé | L'économie circulaire: Une transition incontournable[END_REF]. Upstream, the solution lies in a better separation of aluminium products to steer each flow to a specific recycling chain. This strategy should enable products to be guided through the best recycling pathway and maintain the quality of the alloys. It also makes it possible for manufacturing companies to take back their own products and secure their material resources [START_REF] Singh | Resource recovery from post-consumer waste: important lessons for the upcoming circular economy[END_REF]. Increasing the quality of recycled materials should allow recycling companies to integrate closed-loop End-of-Life (EoL) strategies.

Morphology of aluminium cables
The cables are composed of numerous materials. As illustrated in Fig. 1, the cables are composed of an aluminium core (a), covered with a thick polymer layer (b). Additional metallic materials (c) are coaxially integrated into the matrix of the cables. These cables are manufactured by extruding together all the materials that compose them. Table 1 shows the mass proportion of materials contained in cables. Mass proportions are extracted from MTB monitoring data for cables recycled at the plant between 2011 and 2014. Aluminium in cables represents between 35 and 55% of the total weight. Other metals are mainly steel, lead, copper and zinc. The variety of plastics contained in the sheath is even greater than for metals: silicone rubber, polyethylene (PE), cross-linked PE (xPE), polypropylene, polychloroprene, vulcanised rubber, ethylene vinyl acetate, ethylene propylene rubber, flexible polyvinyl chloride (PVC), etc. (Union Technique de l'Électricité (UTE), 1990).
Although aluminium cables represent about 8% of aluminium products in Western Europe (European Aluminium Association (EAA), 2003), the inherent purity of the aluminium used for cables justifies differentiated recycling channels to optimise processing steps and improve cost efficiency. At the end of life, the challenge concerns the separation of the materials from each other. The most economical way to separate the different materials relies on smelting purification (European Aluminium Association (EAA), 2006).

Presentation of the MTB recycling process for aluminium cables
An alternative process for EoL cables uses only mechanical steps instead of thermal and wet separation, as developed for several years by MTB Recycling, a recycling plant located in the south-east of France. The specific processes were developed by MTB engineers and the system is sold worldwide as a cable recycling solution. It reaches a standard aluminium purity of up to 99.6% for qualities A and B (Table 2). This performance is obtained using only mechanical separation and optical sorting processes on shredded cables. Aluminium quality D production mainly comes from flexible aluminium; our study does not consider this production. Each batch of aluminium (25 t) produced by MTB is analysed by laboratory spectrometry. Table 2 presents the averaged analysis results for the chemical elements present in the aluminium batches. Between 2012 and 2014, more than 400 lots were analysed. During this period only 40 batches were below the average. The aluminium obtained from recycled cables is especially appreciated by smelters. Its high purity makes it easy to produce a wide variety of aluminium alloys. Recycled aluminium can then be used in many aluminium products and not only in applications requiring highly alloyed aluminium.

Issues of the study
The initial motivation for our study was to rank the environmental performance of the MTB recycling pathway in relation to other aluminium recycling solutions. In addition, we wanted to identify the main processes contributing to the global environmental impact. What are the environmental gains of moving beyond aluminium recycling by smelting? Firstly, this article presents the environmental assessment results that enabled the comparison of the three aluminium production scenarios. On the one hand, the study demonstrates huge environmental benefits for recycled aluminium in comparison with primary aluminium; on the other hand, the results show the harmful environmental influence of heat refining by comparison with the mechanical sorting processes used at the MTB plant. The study demonstrates the interest of recycling waste streams separately from each other. Although the starting point of the study was to assess and document the environmental impact of a specific recycling pathway, the results made it possible to identify several environmental hotspots of the MTB Recycling process, which led to the development of effective implementations to reduce the environmental impacts of MTB recycled aluminium. This article presents how the Life Cycle Assessment methodology allowed the engineering team to improve the environmental efficiency of the MTB Recycling processes.

Methodological considerations

Environmental assessment of aluminium recycling
To evaluate the environmental performance of the MTB cable recycling pathway, we chose to use the Life Cycle Assessment (LCA) methodology [START_REF] Bertram | Aluminium Recycling in Europe[END_REF].
However, the available systems modelling always relates to the standard melting solution for recycled aluminium. That is why this study focuses on the environmental assessment of cable recycling with the specific MTB processes, which have never been documented using LCA. The environmental impact assessment is done using the ILCD Handbook recommendations (JRC -Institute for Environment and Sustainability, 2012a). Two systems are compared to the MTB cable recycling pathway (scenario 3):
• Scenario 1: European primary aluminium
• Scenario 2: secondary aluminium from European smelters
The primary aluminium production (scenario 1) is used as a reference for guidance on the quality of production. Comparison with scenario 1 should help to translate the environmental benefits of recycling. Above all, our analysis is intended to compare possible recycling pathways for the aluminium wastes. With this in mind, scenario 2 (secondary aluminium) is used as a baseline to evaluate the MTB alternative recycling pathway (scenario 3).

Sources of data for the life cycle inventory
The evaluation is designed by modelling the input and output flows that describe the different systems of aluminium recycling with the software SimaPro 8.04 [START_REF] Goedkoop | Introduction to LCA with SimaPro[END_REF][START_REF] Herrmann | Does it matter which Life Cycle Assessment (LCA) tool you choose? A comparative assessment of SimaPro and GaBi[END_REF]. All the flows are based on processes from the Ecoinvent 3.1 library [START_REF] Wernet | Introduction to the ecoinvent version 3.1 database. Ecoinvent User Meeting[END_REF]. The systems are developed according to the local context of Western Europe. To allow comparison, all the inventory elements are compiled based on the Ecoinvent database boundaries and data quality is checked [START_REF] Weidema | Ecoinvent Database Version 3?the Practical Implications of the Choice of System Model[END_REF][START_REF] Weidema | Overview and methodology[END_REF]. Once the modelling was done, the characterisation was conducted according to the International Reference Life Cycle Data System (ILCD) Handbook recommendations (JRC -Institute for Environment and Sustainability, 2012a). This study compares two different modelling approaches: scenarios 1 and 2 use available foreground data from the Ecoinvent library without any modification, whereas scenario 3 uses Ecoinvent data to model the MTB Recycling pathway; its inventory dataset was built using the recommendations from the European Joint Research Centre (JRC -Institute for Environment and Sustainability, 2010). For scenario 1 (European primary aluminium) and scenario 2 (secondary aluminium from European smelters), data has been collected by the European Aluminium Association (EAA) and aggregated in Ecoinvent 3.1 (Ruiz Moreno et al., 2014, 2013). The MTB scenario was modelled using specific data from the MTB Recycling plant. The data collection method does not allow the use of the results for other cable recycling pathways. The results are only representative of the cable recycling solutions developed by MTB. Nevertheless, the three models rely on the same system boundaries.

Life cycle impact assessment methodology
Table 3 presents the selected indicator models for the life cycle impact assessment method. In Table 3, the two models in italics are the models that do not follow the recommended ILCD 2011 impact assessment methodology (JRC -Institute for Environment and Sustainability, 2012b), which was used throughout the study.
For the human toxicity indicators, the USEtox (recommended and interim) v1.04 (2010) [START_REF] Huijbregts | Global Life Cycle Inventory Data for the Primary Aluminium Industry -2010 Data[END_REF] model was implemented to improve our characterisation method with the latest calculation factors, as recommended by UNEP and SETAC [START_REF] Rosenbaum | USEtox-the UNEP-SETAC toxicity model: recommended characterisation factors for human toxicity and freshwater ecotoxicity in life cycle impact assessment[END_REF]. First results on water resource depletion with the default calculation factors from Ecoscarcity [START_REF] Frischknecht | The ecological scarcity method -ecofactors 2006. A method for impact assessment in LCA[END_REF] showed anomalies. These anomalies are all related to the Ecoinvent transportation modelling, which involves the electricity mix of Saudi Arabia. For the water resource depletion indicator, the Pfister water scarcity v1.01 (2009) [START_REF] Pfister | Assessing the environental impact of freshwater consumption in life cycle assessment[END_REF] calculation factor was therefore implemented in our characterisation method. It does not completely remove the anomalies in the characterisation, but it significantly reduces the positive impact of transport on the water scarcity indicator. A sensitivity analysis on the characterisation method was conducted using two other characterisation methods: ReCiPe Midpoint v1.1 and CML IA Baseline v3.01. This sensitivity analysis did not yield conflicting results: the calculations show no divergence in the hierarchy of the scenarios on any of the indicators.

Life cycle assessment study scope
This study is based on a life cycle approach, in accordance with the standards of the International Organisation for Standardisation (ISO 14040/44) (International Standard Organization, 2006a,b). Fig. 2 shows the representation of a standard product life cycle, including the life cycle stage and the End-of-Life stage. As shown in Fig. 2, the product life cycle stage of aluminium is not included in our study scope.

Functional unit proposal
As part of this study, the functional unit used is as follows: producing one ton of aluminium intended for end-user applications, with a purity higher than 97%, using current industrial technologies (annual inbound processing higher than 10,000 t) located in Europe. The matching quality of the compared products means they can fulfil the same function, since high-purity aluminium can be used to produce many alloys without refining. We selected three scenarios that meet all the conditions of the functional unit:
• Scenario 1 or primary: primary aluminium, resulting from mining.
• Scenario 2 or secondary: secondary aluminium from recycling by smelters.
• Scenario 3 or MTB: MTB aluminium, from recycling using the MTB solution.

Presentation of the system boundaries
Fig. 3 presents the main steps considered in each scenario of the comparison. The study focuses on the transformation steps of aluminium. That is why the chosen system boundary is a cradle-to-exit-gate model [START_REF] Grisel | L'analyse du cycle de vie d'un produit ou d'un service: Applications et mises en pratique[END_REF][START_REF] Jolliet | Analyse du cycle de vie: Comprendre et réaliser un écobilan[END_REF]. For scenarios 1 and 2, the final product is aluminium ingots, while for scenario 3 the final product is aluminium shot. In any case, the three scenarios meet the functional unit. In both forms of packaging, aluminium can be used to produce semi-finished products.
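To make the characterisation step described above concrete, the sketch below shows how a midpoint score is obtained by multiplying every elementary flow of a scenario's inventory by the corresponding characterisation factor and summing. The flow names and factor values are purely illustrative assumptions, not Ecoinvent or ILCD data; only the structure of the calculation reflects the method used in the study.

```python
# Minimal midpoint characterisation sketch (illustrative values only, not study data).
# score_k = sum_i CF[k][i] * flow[i] for every elementary flow i of one scenario.

inventory = {            # elementary flows per functional unit (1 t of aluminium), in kg
    "carbon dioxide, fossil": 1.2e3,
    "methane, fossil": 4.0,
    "sulfur dioxide": 6.0,
}

factors = {              # characterisation factors per impact category (assumed values)
    "climate change [kg CO2-eq]": {"carbon dioxide, fossil": 1.0, "methane, fossil": 28.0},
    "acidification [mol H+-eq]":  {"sulfur dioxide": 1.31},
}

def characterise(inventory, factors):
    """Return one midpoint score per impact category."""
    return {
        category: sum(cf.get(flow, 0.0) * amount for flow, amount in inventory.items())
        for category, cf in factors.items()
    }

for category, score in characterise(inventory, factors).items():
    print(f"{category}: {score:.1f}")
```

Swapping a characterisation model, as done above for water scarcity (Pfister) and toxicity (USEtox), only replaces the factor table; the inventory is untouched, which is why such substitutions are straightforward to test in a sensitivity analysis.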
Scenario development
The baseline scenarios (scenarios 1 and 2) refer to the Western European average consumption of aluminium. Scenarios 1 and 2 are based on Ecoinvent unit process modelling. The Ecoinvent database uses the EAA Life Cycle Inventory (LCI) [START_REF] Althaus | Life Cycle Inventories of Metals -Final Report Ecoinvent Data v2.1[END_REF]. For Ecoinvent 3.1 (Ruiz Moreno et al., 2014, 2013), the aluminium processes are built with data collected by the EAA in 2013 (European Aluminium Association (EAA), 2013; International Aluminium Institute (IAI), 2014). The Ecoinvent modelling uses data from the average technology available on the market for Western Europe [START_REF] Weidema | Overview and methodology[END_REF].

Scenario 1: primary aluminium production
Fig. 4 presents the different steps required (and included in the modelling) for the primary aluminium dataset. The figure adds more details about the intermediate steps required to obtain ingots of primary aluminium. The scenario for primary aluminium comes from Ecoinvent data. The data used for the study is aluminium production, primary, ingot. This data meets the purity requirements established in the functional unit. At this stage of the production process, the aluminium contains only 1.08% silicon and the overall purity is 98.9%. The modelling of primary aluminium is based on the average of primary aluminium production for the European market. The technology considered corresponds to the up-to-date technologies used in Europe. The electricity mix used by the primary aluminium industry is a specific electricity mix. Modelling this mix relies on the compilation of specific data for all European primary aluminium producers. This mix is made up of over 80% hydroelectric power, 8.7% nuclear electricity, and the remaining 11.3% from fossil fuels. For the unit process data used, the downstream transport to the market is not considered, but all the upstream logistics for the transformation steps are included in the boundaries. As the processing operations shown in Fig. 4 are conducted in multiple locations, the total distance travelled is 11,413 km (11,056 km by sea, 336 km by road and 21 km by train).

Scenario 2: conventional aluminium recycling
Scenario 2 provides the modelling of the traditional aluminium recycling solution. This scenario is based on shredding steps and a melting purification step carried out by refiners. Like scenario 1, scenario 2 is based on average values for European smelters. The data was compiled by the EAA and provided in the Ecoinvent database. The collection of waste is not included in the second scenario, but the transport from the massification point to the waste treatment plant is included in the modelling. Aluminium wastes travel 322 km (20 km on water, 109 km by train and 193 km by road). The electricity mix used in the modelling is equivalent to the electricity mix provided by the European Network of Transmission System Operators for Electricity (ENTSO-E). It is mainly fossil fuel (48.3%), nuclear power (28.1%) and renewable energy (23.6%) (ENTSO-E, 2015). Fig. 5 presents aluminium recycling as modelled in the Ecoinvent dataset. The modelling is divided into five steps: four mechanical separation steps and one thermal step. The shredding step reduces the size of the material to around 15-30 mm. The mechanical separations carried out in scenario 2 are coarse separations.
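The transport distances quoted above for scenarios 1 and 2 enter the model as tonne-kilometres per functional unit; a minimal sketch of that conversion is given below. The per-tkm emission factors are illustrative assumptions, not the Ecoinvent transport datasets actually used in the study.

```python
# Transport contribution sketch for 1 t of aluminium (illustrative factors only).
FU_MASS_T = 1.0  # functional unit: one tonne of aluminium

# Distances per scenario and mode, in km (from the scenario descriptions above).
distances_km = {
    "scenario 1 (primary)":   {"sea": 11056, "road": 336, "rail": 21},
    "scenario 2 (secondary)": {"water": 20, "rail": 109, "road": 193},
}

# Assumed emission factors in kg CO2-eq per tonne-kilometre (not study values).
ef_kg_per_tkm = {"sea": 0.010, "water": 0.030, "rail": 0.030, "road": 0.110}

for scenario, legs in distances_km.items():
    tkm = {mode: FU_MASS_T * km for mode, km in legs.items()}
    total = sum(ef_kg_per_tkm[mode] * value for mode, value in tkm.items())
    print(f"{scenario}: {total:.0f} kg CO2-eq attributable to transport")
```

Even with such rough factors, the sketch makes the gap between the 11,413 km logistics of scenario 1 and the 322 km of scenario 2 visible.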
The recycling plants have equipment to handle a wide range of waste without guarantees about its quality. They are designed only to prepare the material for melting, not to purify the aluminium. The objective is therefore to reduce the amount of plastic and ferrous elements, but not to fully eliminate such pollution from the waste stream. In Ecoinvent, two data collections are available: one for production scraps (new scrap) and one for post-consumer scrap (old scrap). The processes used for recycling new and old scrap are not the same; new scrap needs fewer operations than old scrap. The inbound logistics are also different, because some of the wastes are recycled directly at the production plants. For the study, the ratio between old and new scrap is based on the European aluminium mix (International Aluminium Institute (IAI), 2014): in 2013, old scrap represented 46.3% of the aluminium recycled in Europe and new scrap 53.7%. After the recycling process, two outlets are possible: wrought or cast aluminium. For this study, wrought aluminium was chosen because it has the purity required by the functional unit (97%). The dataset chosen for the study is Aluminium, wrought alloy {RER} | Secondary, production mix (Ruiz Moreno, 2014). The Ecoinvent modelling does not show the co-products separated during the recycling process. Six by-products are included in the environmental impact calculation, but no benefit of by-product recycling is integrated into the study, to remain consistent with the Ecoinvent modelling. For the MTB scenario, the distribution between post-consumer cables (54%) and new scraps (46%) is inverted relative to scenario 2. However, the breakdown between old and new scrap has no influence on the recycling steps used at the MTB plant. All the transport steps are made by road; the transport distances considered are 540 km for old scrap and 510 km for new scrap from various cable manufacturers. As shown in Table 2, the intrinsic quality of the recycled aluminium reaches at least 99.6% aluminium purity. MTB Recycling has an environmentally friendly strategy at the top management level. One of the company's commitments was to source exclusively renewable energy for the recycling plant. It therefore contracted with an energy provider that guarantees an electricity mix from renewable energy sources, called EDF Equilibria. Electricity comes almost exclusively from hydroelectric power (96.92% from alpine reservoirs and 2.4% from run of the river); the remaining electricity comes from waste-to-energy plants (0.51%) and from cogeneration plants (0.17%) [START_REF] Powernext | Lots of Certified Energy Supplies to MTB Plant -Period[END_REF]. To present both the advantages of mechanical refining and the specific results at the Trept MTB Recycling plant, we have divided scenario 3 into two. Scenario 3a corresponds to the modelling using the same electricity mix as scenario 2 (the ENTSO-E mix); in scenario 3a, the recycling processes are rigorously compared within the same scope. Scenario 3b corresponds to the modelling using the specific green electricity mix used by the MTB Recycling plant; in scenario 3b, the MTB recycled aluminium is compared to the other aluminium productions considering the MTB plant's specific context. During the MTB cable recycling steps, the various separation steps produce co-products, mainly plastics and other metals.
Except for plastics, which are considered as waste, the other co-products are not included in the study: their environmental impact is considered as zero. Although these by-products are recycled, the full impact of the separation steps is transferred to the production of recycled aluminium. A sensitivity analysis was conducted on the allocation method, and the results show that the boundaries used for scenario 3 maximise the impact of the aluminium produced by the MTB Recycling pathway [START_REF] Stamp | Limitations of applying life cycle assessment to complex co-product systems: the case of an integrated precious metals smelter-refinery[END_REF]. Fig. 7 presents the aluminium recycling steps considered in the modelling of scenario 3. The main difference between the scenario 2 and scenario 3 pathways is concentrated in the second half of the chart in Fig. 7. Aluminium cable recycling starts with shredding. At the MTB Recycling plant, the shredding is done to obtain homogeneous particles of a size between 5 and 7 mm. The size reduction is done in four steps: two heavy-duty shredding steps and two granulation steps. Between each shredding step, magnets are positioned to capture ferrous elements. After shredding, mechanical and optical separation steps are used to reach the best purity of aluminium. The recycled aluminium D is out of scope for this study, but the mixture of plastic and aluminium is considered in the LCA study as a waste.

Life cycle inventory summary
To facilitate the reading of the results, Table 4 gives the main information of the life cycle inventory of each scenario.

Comparison of the life cycle assessment results
Comparison of the 3 scenarios
In this section, we compare the three scenarios. Fig. 8 draws the comparison for the three scenarios; the values used for the characterisation are given in the figure. As expected, scenario 1 emerges as far more impactful on all indicators except freshwater eutrophication, where recycled aluminium (scenario 2) takes the lead. On the freshwater eutrophication impact category, scenario 2 (secondary aluminium) has the highest impact, even higher than primary aluminium (scenario 1), due to the addition of alloying metals during aluminium recycling. The alloying elements are required to supply the market with aluminium alloys that meet the market constraints. Copper is the main alloying element contributing to the impact on freshwater eutrophication: the copper production chain requires the disposal of sulphidic tailings [START_REF] Norgate | Assessing the environmental impact of metal production processes[END_REF], and this step represents 96.4% of the impact category. This result seems to be a modelling error in Ecoinvent 3.1; we therefore do not use the freshwater eutrophication results from the LCA to draw any conclusion. Average secondary aluminium reaches approximately 10% of the primary aluminium environmental impacts. These results match previous evaluations and meet the values given by the Bureau of International Recycling (BIR) for aluminium recycling benefits: in its report, BIR estimates that the energy gain from recycling aluminium is 94% compared to the production of primary aluminium (Bureau of International Recycling, 2010). It should be noted that the use of a high-carbon electricity mix (ENTSO-E) for recycling tends to reduce the gains once translated into environmental impact categories.
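The comparison in Fig. 8 is easiest to read when each scenario is expressed relative to scenario 1; the Python sketch below shows that normalisation with purely illustrative scores (only the roughly 10% ratio quoted above is taken from the text).

# Sketch of a Fig. 8-style reading: indicator scores expressed as a percentage
# of scenario 1 (primary aluminium = 100%). Absolute values are placeholders.

scores = {  # kg CO2-eq per t Al for one indicator (assumed values)
    "scenario 1 (primary)": 8000.0,
    "scenario 2 (smelter recycling)": 800.0,
    "scenario 3b (MTB, green electricity)": 250.0,
}

reference = scores["scenario 1 (primary)"]
for scenario, value in scores.items():
    print(f"{scenario}: {100 * value / reference:.1f}% of primary")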
As explained in Life Cycle Performance of Aluminium Applications [START_REF] Huber | Life cycle performance of aluminium applications[END_REF], only the European Aluminium Association has conducted an LCA study providing generic LCI data about aluminium production and transformation processes based on a robust data inventory. This work, although focusing primarily on European aluminium production, also provides results for the rest of the world, whose production can be imported into Europe. Moreover, the International Aluminium Institute (IAI) concentrates mainly on the production of primary aluminium and omits the scope of secondary aluminium, which is only addressed by the EAA (International Aluminium Institute (IAI), 2013). The new contribution of this study concerns the environmental comparison of the mechanical recycling of aluminium cables with smelting recycling and primary aluminium production. Across the whole set of indicators, MTB aluminium (scenario 3b) represents between 2.5% and 5% of the scenario 1 environmental impacts.

Recycling scenarios comparison
In this section, we compare the aluminium recycling scenarios. In the previous characterisation, the differences between scenarios 2 and 3 are not clearly visible on the graphical representation. Fig. 9 allows a more specific comparison of the two recycling pathways; the values used for the histogram representation in Fig. 9 are given in the figure. The environmental impacts of scenario 3a represent between 5% and 82% of the scenario 2 environmental impacts, except for the ionising radiation impact category. The results on the ionising radiation impact category for scenario 3a are related to the high electricity consumption during the shredding steps: with the ENTSO-E mix, which contains a large proportion of nuclear energy (28.1%), electricity consumption contributes 70% of the ionising radiation impact category and transport contributes 21%. The high consumption of electricity from nuclear power also contributes largely to the ozone depletion impact category. Using only mechanical separation steps can halve the environmental impact. When the aluminium produced using the specific electricity mix (scenario 3b) is compared, its environmental impact does not exceed that of scenario 2; it represents between 2% and 46% of the recycling by melting across the set of impact categories. Thanks to the MTB Recycling pathway (scenario 3b), the environmental impact of recycled aluminium is divided by four across the set of indicators. The results of Fig. 9 allow us to establish an environmental hierarchy between the different recycling solutions for aluminium cables. Whatever the electricity mix used by the recycling plant, the MTB mechanical recycling process is the most environmentally friendly pathway. It also demonstrates that recycling, when driven without loss of quality, is a relevant alternative to mining. These results also show the environmental relevance of the product-centric recycling approach for cable recycling. The LCA study revealed that the closed-loop recycling option (considering aluminium cables) has a lower environmental impact than the other recycling scenarios using mixed streams of aluminium wastes.
This performance has already been demonstrated for aluminium cans [START_REF] Lacarrière | Emergy assessment of the benefits of closed-loop recycling accounting for material losses[END_REF][START_REF] Niero | Circular economy: to be or not to be in a closed product loop? A life cycle assessment of aluminium cans with inclusion of alloying elements[END_REF].

Uncertainty analysis for recycling scenarios
An uncertainty analysis was conducted between the three scenarios, using the Monte Carlo approach with 10,000 iterations and a 95% confidence interval. With the specific electricity mix, the uncertainty between scenario 2 and scenario 3b does not exceed 5% on the whole set of indicators, except for the human toxicity (8%) and water resource depletion (45%) indicators. With an equivalent electricity mix, the results of the uncertainty analysis between scenarios 2 and 3a are presented in Fig. 10. The uncertainty exceeds 5% on three indicators: ozone depletion (11%), human toxicity, non-cancer effects (9%) and water resource depletion (45%). The results for these three indicators therefore require further investigation before any conclusion is drawn, especially for the water resource depletion indicator, which has a very high uncertainty. Nevertheless, the results of the uncertainty analysis demonstrate the robustness of our modelling and allow us to confirm the conclusions of the characterisation.

Sensitivity analysis
As seen previously, the electricity supply mix has a strong influence on the overall environmental impact of a recycling pathway. A sensitivity analysis was therefore performed on the influence of the electricity mix for scenario 3; the results are presented in Fig. 11. For this sensitivity analysis, two additional electricity mixes were used in the comparison for scenario 3. The electricity source distribution for each mix is presented in Table 5. For the German and French electricity mixes, the Ecoinvent 3.1 data used are listed below:
• French electricity: Electricity, medium voltage {FR}| production mix | Alloc Rec, U
• German electricity: Electricity, medium voltage {DE}| production mix | Alloc Rec, U
The comparison in Fig. 11 shows the results for the MTB cable recycling pathway using the different electricity mixes. The gains from renewable electricity (scenario 3b) are obvious across the set of indicators. Similarly, the differences between the two national mixes (scenario 3 with French and with German electricity) are quite pronounced. On climate change and freshwater eutrophication, these differences are largely due to the predominance of fossil fuels in the German electricity mix (62.1%), whereas fossil fuels account for less than 10% of the French mix. The French electricity mix consists mainly of nuclear energy, which explains its dominant contribution to ionising radiation and ozone depletion. Overall, the European ENTSO-E electricity mix is the most harmful on our set of indicators. Its environmental performance is close to that obtained with the German electricity mix, Germany being the leading electricity producer at the European level within the ENTSO-E network. Using the European ENTSO-E electricity mix in the scenario comparison is therefore the worst case for modelling aluminium cable recycling at the MTB Recycling plant. It is important to note that, whatever the electricity mix used, scenario 3 remains the most relevant from an environmental point of view with respect to the other scenarios.
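As an illustration of the Monte Carlo procedure described above (10,000 iterations, 95% confidence interval), the following Python sketch propagates lognormal uncertainty on two scenario scores and reports the interval of their ratio and how often the ranking flips. The medians and spreads are assumed values, not the study's inventory data.

# Sketch of a Monte Carlo comparison of two scenario scores for one indicator.
import numpy as np

rng = np.random.default_rng(0)
n_runs = 10_000

# Assumed median scores and geometric standard deviations (placeholders).
scenario_2 = rng.lognormal(mean=np.log(800.0), sigma=np.log(1.15), size=n_runs)
scenario_3a = rng.lognormal(mean=np.log(450.0), sigma=np.log(1.20), size=n_runs)

ratio = scenario_3a / scenario_2
low, high = np.percentile(ratio, [2.5, 97.5])
flip = np.mean(scenario_3a > scenario_2)

print(f"95% interval of the 3a/2 ratio: [{low:.2f}, {high:.2f}]")
print(f"runs where scenario 3a exceeds scenario 2: {100 * flip:.1f}%")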
Scenario 3b environmental hotspots identification from LCA
The LCA results allow us to establish a hierarchy between the environmental recycling solutions for aluminium cables. Whatever the electricity mix used by the recycling plant, the MTB mechanical recycling process is the most environmentally friendly. In this section, the characterisation of the MTB recycling pathway is described. Fig. 12 shows the results of the characterisation of scenario 3b. This characterisation uses data from the MTB recycling pathway supplied with green electricity and without any optimisation; the values used for the graphical representation are given in the figure. Across the set of indicators, the MTB recycling steps represent between 11.4% and 79.7% of the total impact, the remaining share being related to upstream logistics, i.e. the transport from the massification points to the recycling plant. The average over the 11 indicators is 36.1% and the median is 33.0%. The results therefore show a very strong contribution of the upstream waste-collection transport to the overall impact of scenario 3b. The shredding step is the most contributing process in the overall impact of scenario 3b across the set of impact categories. Although this step is highly energy-intensive, the use of hydroelectric power strongly limits its contribution: electricity consumption contributes on average 10% of the environmental impact of the shredding steps. The water resource depletion impact category is singular in the sense that the production of hydroelectricity has a very strong influence on the final impact of this category, owing to the water resource depletion factor used for the hydroelectric production processes. Since electricity consumption is not the first contributor to the environmental impacts, the predominance of the shredding step must be explained by a deeper analysis of the sub-processes. The shredding consumables used for the grinding equipment are predominant: the specific alloys used for the blades and the screens imply the use of numerous alloying elements that weigh on the environmental impact, especially on the freshwater eutrophication indicator (as explained for scenario 2 in Section 5.1). The environmental impact of the second part of the MTB Recycling pathway, the mechanical sorting stage, is significantly lower than that of the shredding steps. The consumables of this stage are fewer, and the consumption of electricity is also lower in comparison with the shredding steps; electricity consumption is nevertheless the main contributor at this stage of the recycling pathway. Air separation tables are among the highest electricity-consuming processes of the mechanical sorting stage. This stage also produces plastic wastes, which are currently buried in landfill: all types of plastic are mixed and no industrial process is available to separate them effectively. In addition, vigilance is required regarding a selection of polymer resins that are banned in new plastic products. The impact of this waste is not negligible, between 5 and 10% of the final impact across the set of impact categories. For the scenario 3a environmental evaluation, namely when the ENTSO-E electricity mix is used for the characterisation, the recycling steps of the MTB scenario represent on average half of the total impact across the set of indicators.
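The hotspot reading of Fig. 12 boils down to a contribution analysis; the Python sketch below computes each stage's share of the total for one indicator, with stage values that are illustrative placeholders rather than the study's results.

# Sketch of a contribution ("hotspot") analysis for scenario 3b, one indicator.
stage_scores = {  # kg CO2-eq per t Al (assumed values)
    "upstream transport": 95.0,
    "shredding (incl. consumables)": 110.0,
    "mechanical sorting": 30.0,
    "plastic waste landfilling": 15.0,
}

total = sum(stage_scores.values())
for stage, value in sorted(stage_scores.items(), key=lambda kv: -kv[1]):
    print(f"{stage}: {100 * value / total:.1f}%")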
Of course, the shift in energy mix modifies the distribution and hierarchy of each stage in the environmental impacts. The upstream logistics transport becomes less important on all indicators. For the shredding and mechanical sorting stages, the contribution distribution is not distorted, but electricity consumption becomes the main source of the environmental impact: ENTSO-E electricity consumption represents 80% of the overall environmental impacts of the recycling processes.

Discussion
Recycling process optimisation using LCA results
Although the LCA tool is primarily an assessment tool, it is also intended to support an eco-design approach. Using LCA results to improve environmental performance gives good results for industrial processes [START_REF] Pommier | LCA (Life Cycle Assessment) of EVPengineering veneer product: plywood glued using a vacuum moulding technology from green veneers[END_REF]. In this section, we focus on the optimisation options to reduce the environmental impacts of scenario 3b.
Fig. 12. Characterisation of MTB aluminium shot with purity of 99.6% using green electricity mix.
The impact of transportation is primarily due to the distance travelled by lorry to the recycling plant. Upstream logistics transport is inevitable, and it is difficult to plan a distance reduction because the deposits are very diffuse over the collection territory. However, cables are voluminous waste, and lorry loading is limited by the volume of waste and not by its mass. To improve the upstream logistics, the volume of waste cables could therefore be reduced to facilitate its transport; this logistic optimisation is underway in France. However, the influence we have on logistics flows is limited, so we have focused the eco-design efforts on the processes carried out within the MTB plant. For shredding consumables, research has been done in collaboration with subcontractors to identify more durable steel alloys for shredding blades. The new alloys are being tested to demonstrate their gain in terms of longevity: the tests carried out with the new blades demonstrate an increase of 30-60% in lifespan, while the environmental impact of these new alloys is similar. Modifying the steel used for consumables thus provides a low-cost solution for reducing the environmental impacts. Work on the energy efficiency of the shredder is also necessary to reduce the electricity consumption of the shredding steps. Work on energy recovery has not yet allowed the implementation of innovative solutions; nevertheless, energy recovery solutions and new electric motors are being studied. For the mechanical sorting stage, a thorough reflection was conducted on the electricity consumption of the equipment, and more specifically on the air separation tables. The MTB engineering team made improvements in the design of the new air separation table models: the airflow within the equipment was reviewed and power consumption could be reduced by using smaller electric motors. The treatment of plastic waste from the cable sheaths does not appear as a major contributor to the overall environmental impacts in our LCA study; this step represents about 5-10% of the scenario 3b overall impacts. This stage of scenario 3b is divided into two parts: the transport of the waste by lorry to the storage site (25 km) and the landfill process. However, as a manufacturer of recycling solutions, it is the responsibility of MTB to provide a technological response to solve this problem.
Not all plastic polymers from the cable sheaths are recycled: the plastic resin mixture and the presence of aluminium dust greatly complicate the recovery of the mixture. According to the study results, to reduce the overall environmental impact of scenario 3b, MTB should cut down the environmental impacts of plastic waste management. To do so, MTB has initiated a reflection on sorting and recycling the plastic resin mixture. A first prototype was developed in late 2015; the synoptic of the plastic processing method is shown in Fig. 13. The separation is still based on simple mechanical steps that achieve a uniform separation. The results of this pilot recycling unit are encouraging: the unit reduces landfill by 80%. Other developments are underway to valorise the remaining part as solid recovered fuel; for this, the separation must be perfect to meet the regulatory requirements.

New recycling pathway design using LCA results
To further reduce the environmental impact of transport, the MTB engineering team had to review the overall recycling pathway and not just the industrial processes. The challenge was to design a transportable recycling solution capable of achieving the same level of purity as the existing plant but with a lower throughput. So instead of transporting the waste to the recycling plant, it is the plant that moves closer to the deposits. The use of international standard container sizes ensures maximum transportability by all modes of transport (road, rail, maritime). In addition, the containers offer modularity with upstream and downstream processes that can easily be implemented before or after the CABLEBOX system. The recycling solution is not autonomous: it requires an external power source, and the energy mix used for the local supply of the system depends on the location. There are no direct local emissions, only indirect emissions due to energy consumption. The CABLEBOX system includes all the separation steps presented in Fig. 7. A transportable recycling solution can effectively reduce the environmental and financial impact of upstream logistics. The CABLEBOX solution is especially relevant when waste production is seasonal and/or geographically concentrated, and it makes a step towards the circular economy by offering an industrial solution for closed-loop recycling.

Conclusions
As already seen in this paper, different pathways are available to recycle the same products. The life cycle assessment results demonstrate that recycling, when driven without loss of quality, is a relevant alternative to mining. Recycling pathways can be seen as the assembly of elementary technologies, and designers choose the layout to meet the specifications. The indicators that guide the designers' choices are exclusively economic indicators [START_REF] Allwood | Material efficiency: a white paper[END_REF]; environmental considerations are not taken into account in the layout choice. Some customers and MTB express the need for a better understanding of the environmental impacts of recycling pathways. Moreover, optimising recycling pathway systems is long and demands powerful assessment tools such as Material Flow Analysis (MFA) and Life Cycle Assessment (LCA) [START_REF] Grimaud | Reducing environmental impacts of aluminium recycling process using life cycle assessment. 12th Bienn[END_REF][START_REF] Peças | Life cycle Engineering-taxonomy and state-of-the-art.
23rd[END_REF][START_REF] Pommier | LCA (Life Cycle Assessment) of EVPengineering veneer product: plywood glued using a vacuum moulding technology from green veneers[END_REF]. The first limitation concerns the acquisition of results, which are only obtained once the industrial solution is implemented. As the financial investment has already been made by the manufacturer, they are reluctant to improve efficiency [START_REF] Hauschild | Better-but is it good enough? on the need to consider both ecoefficiency and eco-effectiveness to gauge industrial sustainability[END_REF](Herrmann et al., 2015). The second limitation is that the approach used is empirical and is not based on guidelines. While tools and methods are available for product eco-design [START_REF] Donnelly | Eco-design implemented through a product-based environmental management system[END_REF][START_REF] Kulak | Eco-efficiency improvement by using integrative design and life cycle assessment. The case study of alternative bread supply chains in France[END_REF][START_REF] Leroy | Ecodesign: tools and methods[END_REF], methodologies for process eco-design are rare. Product eco-design methodologies are largely based on guidelines provided by standards [START_REF] Jørgensen | Integrated management systemsthree different levels of integration[END_REF][START_REF] Kengpol | The decision support framework for developing ecodesign at conceptual phase based upon ISO/TR 14062[END_REF]. For processes, no such standard is available. Therefore, it seems necessary to develop an effective methodology to evaluate and guide process design choices to ensure economic, environmental and social efficiency [START_REF] Allwood | Squaring the circular economy: the role of recycling within a hierarchy of material management strategies[END_REF]. Offering the design team an assessment tool will optimise the eco-efficiency of recycling pathways. Using the Environmental Technology Verification (ETV) certification guidelines, we have started building a decision support methodology. The emergence of the ETV programme appears as a relevant medium to build a process-oriented methodology. This methodology will allow designers to assess and guide their choices to ensure economic, environmental and social efficiency.

Fig. 2. Representation of a standard product life cycle showing the study scope boundaries. Adapted from Zhang, 2014.
Fig. 3. Main steps of the production processes for the three scenarios.

Scenario 3: MTB cables recycling pathway
An intensive inventory analysis was developed during an internal survey conducted in collaboration with the EVEA consultancy firm at the MTB Recycling plant in fall 2014. Foreground data are based on measurements and on stakeholder interviews. Background data come from Ecoinvent 3.1 or relevant literature. Fig. 6 presents the detailed system boundaries used for the life cycle modelling of the aluminium cable recycling pathway at the MTB plant. The boundaries used for the MTB scenarios are based on the boundaries of the Ecoinvent modelling.

Fig. 4. System boundaries of the primary aluminium production from bauxite mining. Adapted from Capral Aluminium, 2012.
Fig. 5. System boundaries of the smelting recycling scenario for end-of-life aluminium cables. Adapted from [START_REF] Bertram | Aluminium Recycling in Europe[END_REF].
Fig. 7. System boundaries of the MTB end-of-life recycling pathway for aluminium cables.
Fig. 8. Environmental characterisation comparison of the 3 scenarios using the specific electricity mix.
Fig. 10. Uncertainty analysis between recycling scenarios 2 and 3a (European ENTSO-E electricity mix).
Fig. 11. Sensitivity analysis on the influence of the electricity mix supply for scenario 3.

The engineering team launched in 2015 a new transportable cable recycling solution called CABLEBOX, presented in Fig. 14. The solution takes place in two 40-foot containers, one 20-foot container and one 10-foot container. The flow rate reached with the CBR 2000 version is 2 t/h; compared to the MTB centralised plant, the flow is divided by two. A first CABLEBOX production unit has been in operation since December 2016 in the United States and another since January 2017 in France (MTB Recycling, 2016).

Fig. 13. Presentation of the processes added to the MTB pathway to separate the plastic mixture.

Nomenclature (list of acronyms)
BIR: Bureau of International Recycling
EAA: European Aluminium Association
ENTSO-E: European Network of Transmission System Operators for Electricity
EoL: End-of-Life
ETV: Environmental Technology Verification
IAI: International Aluminium Institute
ILCD: International Life Cycle Data
JRC: Joint Research Centre
LCA: Life Cycle Assessment
LCI: Life Cycle Inventory
PE: Polyethylene

Today, the environmental LCA of European generic primary and secondary aluminium production is well defined through the work of the European Aluminium Association (EAA) (European Aluminium Association (EAA), 2008). Numerous studies were conducted concerning the sustainability of aluminium recycled by smelters in comparison with primary aluminium from mining. Outcomes about global and local environmental impacts show a decrease of up to 90% by using recycled aluminium (European Aluminium Association (EAA), 2010; …).

Fig. 1. Section of a cable with multiple aluminium beams.

Table 1. Composition of recycled cables at the MTB plant (average for the period 2011-14).
Material | Proportion
Rigid aluminium (a) | 48.5%
Plastics and rubber (b) | 40.5%
Non-ferrous metals (c) | 4.5%
Ferrous metals (steel and stainless steel) | 4.0%
Flexible aluminium | 2.5%

Table 2. Chemical composition of recycled aluminium produced by the MTB plant (average for the period 2012-14).
Chemical elements | Al | Fe | Si | Cu | Pb | Mg
Aluminium quality A and B | 99.67 | 0.145 | 0.090 | 0.022 | 0.003 | 0.026
Aluminium quality C | 99.50 | 0.154 | 0.064 | 0.205 | 0.019 | 0.010
Aluminium quality D | 97.25 | 0.524 | 0.791 | 0.524 | 0.014 | 0.427

Table 3. List of indicators selected for the life cycle impact assessment (JRC - Institute for Environment and Sustainability, 2011).
Indicator | Model
Climate change | Baseline model of 100 years of the IPCC
Ozone depletion | Steady-state ODPs 1999 as in WMO assessment
Human toxicity, non-cancer effects | USEtox model v1.04 (Rosenbaum et al., 2008)
Particulate matter | RiskPoll model
Ionising radiation HH | Human health effects model as developed by Dreicer
Photochemical ozone formation | LOTOS-EUROS
Acidification | Accumulated Exceedance
Freshwater eutrophication | EUTREND model
Freshwater ecotoxicity | USEtox model
Water resource depletion | Pfister water scarcity v1.01 (Frischknecht et al., 2009)
Mineral, fossil & renewable resource depletion | CML 2002

Table 4. Summary of the main Life Cycle Inventory information.
Scenario | 1 | 2 | 3a | 3b
Name | Primary aluminium | Secondary aluminium | MTB aluminium ENTSO-E | MTB aluminium Green electricity
Process | Mining | Smelting | MTB Recycling pathway | MTB Recycling pathway
Al purity | 98.9% | 97% | 99.6% | 99.6%
Old scraps | - | 46.3% | 54% | 54%
Electricity mix | EAA electricity mix | ENTSO-E | ENTSO-E | EDF Equilibria
- Nuclear power | 8.7% | 28.1% | 28.1% | -
- Fossil fuel | 11.3% | 48.3% | 48.3% | -
- Renewable | 80% | 23.6% | 23.6% | 100%
Transport | 11,413 km | 322 km | ≈526 km | ≈526 km
- Road | 336 km | 193 km | 526 km | 526 km
- Train | 21 km | 109 km | - | -
- Sea | 11,056 km | 20 km | - | -

Table 5. Electricity source distribution for the electricity mixes used in the sensitivity analysis.
Electricity mix | 3a: ENTSO-E | 3b: Green Electricity | French Electricity | German Electricity
Source of data | Ecoinvent 3.1 | Powernext, 2014 | Ecoinvent 3.1 | Ecoinvent 3.1
Nuclear | 28.1% | - | 77.2% | 16.8%
Fossil fuel | 48.3% | - | 8.9% | 62.1%
- Coal | 12.7% | - | 4.2% | 19.7%
- Lignite | 8.0% | - | 0% | 26.8%
- Natural gas | 16.5% | - | 3.2% | 14.3%
- Oil | 11.1% | - | 1.5% | 1.3%
Renewable energy | 21.9% | 100% | 13.4% | 21.0%
- Hydropower | 11.9% | 99.3% | 11.9% | 4.9%
- Wind & solar and other | 10.0% | 0.7% | 1.5% | 16.1%
Other | 1.7% | - | 0.5% | 0.1%

Acknowledgements
This work was performed with the help of Marie Vuaillat from the EVEA consultancy firm and with financial support from the French Agency for Environment and Energy Efficiency (ADEME). We also want to thank MTB Recycling and the French National Association for Technical Research (ANRT) for the funding of the PhD study (CIFRE Convention N° 2015/0226) of the first author.
Human genetic variants and age are the strongest predictors of humoral immune responses to common pathogens and vaccines

Milieu Intérieur Consortium: Petar Scepanovic, Cécile Alanio, Christian Hammer, Flavia Hodel, Jacob Bergstedt, Etienne Patin, Christian W. Thorball, Nimisha Chaturvedi, Bruno Charbit, Laurent Abel, Lluis Quintana-Murci, Darragh Duffy, Matthew L. Albert, Jacques Fellay, Andres Alcover, Hugues Aschard, Kalla Astrom, Philippe Bousso, Pierre Bruhns, Ana Cumano, Caroline Demangel, Ludovic Deriano, James Di Santo, Françoise Dromer, Gérard Eberl, Jost Enninga, Magnus Fontes, Antonio Freitas, Odile Gelpi, Ivo Gomperts-Boneca, Serge Hercberg, Olivier Lantz, Claude Leclerc, Hugo Mouquet, Sandra Pellegrini, Stanislas Pol, Olivier Schwartz, Benno Schwikowski, Spencer Shorte, Vassili Soumelis, Marie-Noëlle Ungeheuer

Introduction
Humans are regularly exposed to infectious agents, including common viruses such as cytomegalovirus (CMV), Epstein-Barr virus (EBV) or herpes simplex virus-1 (HSV-1), that have the ability to persist as latent infections throughout life - with possible reactivation events depending on extrinsic and intrinsic factors [START_REF] Traylen | Virus reactivation: a panoramic view in human infections[END_REF]. Humans also receive multiple vaccinations, which in many cases are expected to achieve lifelong immunity in the form of neutralizing antibodies. In response to each of these stimulations, the immune system mounts a humoral response, triggering the production of specific antibodies that play an essential role in limiting infection and providing long-term protection. Although the intensity of the humoral response to a given stimulation has been shown to be highly variable [START_REF] Grundbacher | Heritability estimates and genetic and environmental correlations for the human immunoglobulins G, M, and A[END_REF][START_REF] Tsang | Global analyses of human immune variation reveal baseline predictors of postvaccination responses[END_REF][START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF], the genetic and non-genetic determinants of this variability are still largely unknown. The identification of such factors may lead to improved vaccination strategies by optimizing vaccine-induced immunoglobulin G (IgG) protection, or to new understanding of autoimmune diseases, where immunoglobulin levels can correlate with disease severity [START_REF] Almohmeed | Systematic review and metaanalysis of the sero-epidemiological association between Epstein Barr virus and multiple sclerosis[END_REF].
Several genetic variants have been identified that account for inter-individual differences in susceptibility to pathogens [START_REF] Timmann | Genome-wide association study indicates two novel resistance loci for severe malaria[END_REF][START_REF] Mclaren | Association study of common genetic variants and HIV-1 acquisition in 6,300 infected cases and 7,200 controls[END_REF][START_REF] Casanova | The genetic theory of infectious diseases: a brief history and selected illustrations[END_REF], and in infectious [START_REF] Mclaren | Polymorphisms of large effect explain the majority of the host genetic contribution to variation of HIV-1 virus load[END_REF] or therapeutic [START_REF] Ge | Genetic variation in IL28B predicts hepatitis C treatment-induced viral clearance[END_REF] phenotypes. By contrast, relatively few studies have investigated the variability of humoral responses in healthy humans [START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF][START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF][START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF]. In particular, Hammer C., et al. examined the contribution of genetics to variability in human antibody responses to common viral antigens, and fine-mapped variants at the HLA class II locus that associated with IgG responses. To replicate and extend these findings, we measured IgG responses to 15 antigens from common infectious agents or vaccines, as well as total IgG, IgM, IgE and IgA levels, in 1,000 well-characterized healthy donors. We used an integrative approach to study the impact of age, sex, non-genetic and genetic factors on humoral responses in healthy humans.

Methods
Study participants
The Milieu Intérieur cohort consists of 1,000 healthy individuals that were recruited by BioTrial (Rennes, France). The cohort is stratified by sex (500 men, 500 women) and age (200 individuals from each decade of life, between 20 and 70 years of age). Donors were selected based on stringent inclusion and exclusion criteria, previously described [START_REF] Thomas | Milieu Intérieur Consortium. The Milieu Intérieur study -an integrative approach for study of human immunological variance[END_REF]. Briefly, recruited individuals had no evidence of any severe/chronic/recurrent medical conditions. The main exclusion criteria were: seropositivity for human immunodeficiency virus (HIV) or hepatitis C virus (HCV); ongoing infection with the hepatitis B virus (HBV), as evidenced by detectable HBs antigen levels; travel to (sub-)tropical countries within the previous 6 months; recent vaccine administration; and alcohol abuse. To avoid the influence of hormonal fluctuations in women during the peri-menopausal phase, only pre- or post-menopausal women were included. To minimize the importance of population substructure on genomic analyses, the study was restricted to self-reported Metropolitan French origin for three generations (i.e., with parents and grandparents born in continental France). Whole blood samples were collected from the 1,000 fasting healthy donors in lithium heparin tubes, from September 2012 to August 2013.

Serologies
Total IgG, IgM, IgE, and IgA levels were measured using a clinical-grade turbidimetric test on an AU 400 Olympus analyzer at BioTrial (Rennes, France).
Antigen-specific serological tests were performed using clinical-grade assays measuring IgG levels, according to the manufacturer's instructions. A list and description of the assays is provided in Table S1. Briefly, anti-HBs and anti-HBc IgGs were measured on the Architect automate (CMIA assay, Abbott). Anti-CMV IgGs were measured by CMIA using the CMV IgG kit from Beckman Coulter on the Unicel Dxl 800 Access automate (Beckman Coulter). Anti-measles, anti-mumps and anti-rubella IgGs were measured using the BioPlex 2200 MMRV IgG kit on the BioPlex 2200 analyzer (Bio-Rad). Anti-Toxoplasma gondii and anti-CMV IgGs were measured using the BioPlex 2200 ToRC IgG kit on the BioPlex 2200 analyzer (Bio-Rad). Anti-HSV1 and anti-HSV2 IgGs were measured using the BioPlex 2200 HSV-1 & HSV-2 IgG kit on the BioPlex 2200 analyzer (Bio-Rad). IgGs against Helicobacter pylori were measured by EIA using the PLATELIA H. Pylori IgG kit (BioRad) on the VIDAS automate (Biomérieux). Anti-influenza A IgGs were measured by ELISA using the NovaLisa IgG kit from NovaTec (Biomérieux). In all cases, the criteria for serostatus definition (positive, negative or indeterminate) were established by the manufacturer and are indicated in Table S2. Donors with an unclear result were retested, and assigned a negative result if borderline levels were confirmed with repeat testing.

Non-genetic variables
A large number of demographical and clinical variables are available in the Milieu Intérieur cohort as a description of the environment of the healthy donors [START_REF] Thomas | Milieu Intérieur Consortium. The Milieu Intérieur study -an integrative approach for study of human immunological variance[END_REF]. These include infection and vaccination history, childhood diseases, health-related habits, and socio-demographical variables. Of these, 53 were chosen for subsequent analysis of their impact on serostatus. This selection is based on the one done in [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], with a few variables added, such as measures of lipids and CRP.

Testing of non-genetic variables
Using serostatus variables as the response, and non-genetic variables as treatment variables, we fitted a logistic regression model for each response and treatment variable pair. A total of 14 * 53 = 742 models were therefore fitted. Age and sex were included as controls for all models, except when that variable was the treatment variable. We tested the impact of the clinical and demographical variables using a likelihood ratio test. All 742 tests were considered a multiple testing family with the false discovery rate (FDR) as error rate.

Age and sex testing
To examine the impact of age and sex, we performed logistic and linear regression analyses for serostatus and IgG levels, respectively. All continuous traits (i.e. quantitative measurements of antibody levels) were log10-transformed in donors assigned as positive using a clinical cutoff. We used false discovery rate (FDR) correction for the number of serologies tested (associations with P < 0.05 were considered significant). To calculate odds ratios in the age analyses, we separated the cohort into equal numbers of young (<45 years old) and old (>=45 years old) individuals, and utilized the epitools R package (v0.5-10).

DNA genotyping
Blood was collected in 5 mL sodium EDTA tubes and was kept at room temperature (18-25 °C) until processing.
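As an illustration of the testing strategy described above (one logistic regression per serostatus/variable pair, with age and sex as controls, a likelihood-ratio test, and FDR correction across all tests), here is a minimal Python sketch on simulated data. Variable names, effect sizes and the libraries used are placeholders; the published analysis was not necessarily run with these tools.

# Sketch: per-variable serostatus test with age/sex controls, LR test, FDR.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.uniform(20, 70, n),
    "sex": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),          # example treatment variable (hypothetical)
})
df["cmv_pos"] = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.04 * df["age"]))))

def lr_test(outcome, treatment, controls, data):
    """P-value of a likelihood-ratio test for `treatment` on a binary outcome."""
    y = data[outcome]
    x_reduced = sm.add_constant(data[controls])
    x_full = sm.add_constant(data[controls + [treatment]])
    ll_reduced = sm.Logit(y, x_reduced).fit(disp=0).llf
    ll_full = sm.Logit(y, x_full).fit(disp=0).llf
    return chi2.sf(2 * (ll_full - ll_reduced), df=1)

# In the real analysis this loop would cover every serostatus/variable pair.
pvals = [lr_test("cmv_pos", "smoking", ["age", "sex"], df)]
rejected, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(p_adjusted)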
DNA was extracted from human whole blood and genotyped at 719,665 single nucleotide polymorphisms (SNPs) using the HumanOmniExpress-24 BeadChip (Illumina). The SNP call rate was higher than 97% in all donors. To increase coverage of rare and potentially functional variation, 966 of the 1,000 donors were also genotyped at 245,766 exonic variants using the HumanExome-12 BeadChip. The HumanExome variant call rate was lower than 97% in 11 donors, who were thus removed from this dataset. We filtered out from both datasets genetic variants that: (i) were unmapped on dbSNP138, (ii) were duplicated, (iii) had a low genotype clustering quality (GenTrain score < 0.35), (iv) had a call rate < 99%, (v) were monomorphic, (vi) were on sex chromosomes, or (vii) diverged significantly from Hardy-Weinberg equilibrium (HWE P < 10^-7). These quality-control filters yielded a total of 661,332 and 87,960 variants for the HumanOmniExpress and HumanExome BeadChips, respectively. The average concordance rate for the 16,753 SNPs shared between the two genotyping platforms was 99.9925%, and individual concordance rates ranged from 99.8% to 100%.

Genetic relatedness and structure
As detailed elsewhere [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], relatedness was detected using KING [START_REF] Manichaikul | Robust relationship inference in genome-wide association studies[END_REF]. Six pairs of related participants (parent-child, first and second degree siblings) were detected, and one randomly selected individual from each pair was removed from the genetic analyses. The genetic structure of the study population was estimated using principal component analysis (PCA), implemented in EIGENSTRAT (v6.1.3) [START_REF] Patterson | Population structure and eigenanalysis[END_REF].

Genotype imputation
We used the Positional Burrows-Wheeler Transform for genotype imputation, starting with the 661,332 quality-controlled SNPs genotyped on the HumanOmniExpress array. Phasing was performed using EAGLE2 (v2.0.5) [START_REF] Loh | Reference-based phasing using the Haplotype Reference Consortium panel[END_REF]. As reference panel, we used the haplotypes from the Haplotype Reference Consortium (release 1.1) [START_REF] Mccarthy | A reference panel of 64,976 haplotypes for genotype imputation[END_REF]. After removing SNPs that had an imputation info score < 0.8, we obtained 22,235,661 variants. We then merged the imputed dataset with the 87,960 variants directly genotyped on the HumanExome BeadChip array and removed variants that were monomorphic or diverged significantly from Hardy-Weinberg equilibrium (P < 10^-7). We obtained a total of 12,058,650 genetic variants to be used in association analyses. We used SNP2HLA (v1.03) [START_REF] Jia | Imputing amino acid polymorphisms in human leukocyte antigens[END_REF] to impute 104 4-digit HLA alleles and 738 amino acid residues (at 315 variable amino acid positions of the HLA class I and II proteins) with a minor allele frequency (MAF) of >1%. We used KIR*IMP [START_REF] Vukcevic | Imputation of KIR Types from SNP Variation Data[END_REF] to impute KIR alleles, after haplotype inference on chromosome 19 with SHAPEIT2 (v2.r790) [START_REF] O'connell | A general approach for haplotype phasing across the full spectrum of relatedness[END_REF]. A total of 19 KIR types were imputed: 17 loci plus two extended haplotype classifications (A vs. B and KIR haplotype).
A MAF threshold of 1% was applied, leaving 16 KIR alleles for association analysis.

Genetic association analyses
For single variant association analyses, we only considered SNPs with a MAF of >5% (N=5,699,237). We used PLINK (v1.9) [START_REF] Chang | Second-generation PLINK: rising to the challenge of larger and richer datasets[END_REF] to perform logistic regression for binary phenotypes (serostatus: antibody positive versus negative) and linear regression for continuous traits (log10-transformed quantitative measurements of antibody levels in donors assigned as positive using a clinical cutoff). The first two principal components of a PCA based on genetic data, age and sex were used as covariates in all tests. In order to correct for baseline differences in IgG production between individuals, total IgG levels were included as covariates when examining associations with antigen-specific antibody levels and with total IgM, IgE and IgA levels. Additional covariates, selected from a total of 53 additional variables by using elastic net [START_REF] Zhou | Efficient multivariate linear mixed model algorithms for genome-wide association studies[END_REF] and stability selection [START_REF] Meinshausen | Stability selection[END_REF] as detailed elsewhere [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], were included in some analyses (Table S3). For all antigen-specific genome-wide association studies, we used a genome-wide significance threshold (Pthreshold < 3.3 x 10^-9) corrected for the number of antigens tested (N=15). For genome-wide association tests with total Ig levels, we set the threshold at Pthreshold < 1.3 x 10^-8, correcting for the four immunoglobulin classes tested. For specific HLA analyses, we used PLINK (v1.07) [START_REF] Purcell | PLINK: a tool set for whole-genome association and population-based linkage analyses[END_REF] to perform conditional haplotype-based association tests and multivariate omnibus tests at multi-allelic amino acid positions.

Variant annotation and gene burden testing
We used SnpEff (v4.3g) [START_REF] Cingolani | A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w[END_REF] to annotate all 12,058,650 variants. A total of 84,748 variants were annotated as having a (potentially) moderate (e.g. missense variant, inframe deletion, etc.) or high impact (e.g. stop gained, frameshift variant, etc.) and were included in the analysis. We used bedtools v2.26.0 [START_REF] Quinlan | BEDTools: a flexible suite of utilities for comparing genomic features[END_REF] to intersect variant genomic locations with gene boundaries, thus obtaining sets of variants per gene. By performing kernel-regression-based association tests with SKAT_CommonRare (testing the combined effect of common and rare variants) and SKATBinary, implemented in SKAT v1.2.1 [START_REF] Ionita-Laza | Sequence kernel association tests for the combined effect of rare and common variants[END_REF], we tested 16,628 gene sets for association with continuous and binary phenotypes, respectively. By SKAT default parameters, variants with MAF < 1/√(2N) are considered rare, whereas variants with MAF ≥ 1/√(2N) are considered common, where N is the sample size.
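For clarity, the single-variant model run genome-wide with PLINK corresponds, for a quantitative trait, to the ordinary least-squares regression sketched below in Python on simulated data (genotype dosage plus age, sex, the first two principal components and total IgG as covariates). The data and effect sizes are placeholders; PLINK performs the equivalent fit for every SNP, far more efficiently.

# Sketch of the per-SNP linear model: log10 IgG ~ dosage + age + sex + PC1 + PC2 + total IgG.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
dosage = rng.binomial(2, 0.3, n).astype(float)   # 0/1/2 copies of the tested allele
covariates = np.column_stack([
    rng.uniform(20, 70, n),     # age
    rng.integers(0, 2, n),      # sex
    rng.normal(size=n),         # PC1
    rng.normal(size=n),         # PC2
    rng.normal(10, 1, n),       # total IgG (g/l), assumed scale
])
log_igg = 0.05 * dosage + 0.002 * covariates[:, 0] + rng.normal(scale=0.3, size=n)

X = sm.add_constant(np.column_stack([dosage, covariates]))
fit = sm.OLS(log_igg, X).fit()
print("beta =", round(fit.params[1], 3), "p =", f"{fit.pvalues[1]:.2e}")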
We used Bonferroni correction for multiple testing, accounting for the number of gene sets and phenotypes tested (Pthreshold < 2 x 10^-7 for antigen-specific tests and Pthreshold < 7.5 x 10^-7 for tests with total Ig levels).

Results
Characterization of humoral immune responses in the 1,000 study participants
To characterize the variability in humoral immune responses between healthy individuals, we measured total IgG, IgM, IgA and IgE levels in the plasma of the 1,000 donors of the Milieu Interieur (MI) cohort. After log10 transformation, total IgG, IgM, IgA and IgE levels showed normal distributions, with a median ± sd of 1.02 ± 0.08 g/l, 0.01 ± 0.2 g/l, 0.31 ± 0.18 g/l and 1.51 ± 0.62 UI/ml, respectively (Figure S1A). We then evaluated specific IgG responses to multiple antigens from the following infections and vaccines: (i) 7 common persistent pathogens, including five viruses: CMV, EBV (EA, EBNA, and VCA antigens), herpes simplex virus 1 & 2 (HSV-1 & 2) and varicella zoster virus (VZV), one bacterium: Helicobacter pylori (H. pylori), and one parasite: Toxoplasma gondii (T. gondii); (ii) one recurrent virus: influenza A virus (IAV); and (iii) four viruses for which most donors received vaccination: measles, mumps, rubella, and HBV (HBs and HBc antigens) (Figure 1). The distributions of log10-transformed antigen-specific IgG levels in the 1,000 donors for the 15 serologies are shown in Figure S1B. Donors were classified as seropositive or seronegative using the thresholds recommended by the manufacturer (Table S2). The vast majority of the 1,000 healthy donors were chronically infected with EBV (seropositivity rates of 96% for EBV VCA, 91% for EBV EBNA and 9% for EBV EA) and VZV (93%). Many also showed high-titer antibodies specific for IAV (77%), HSV-1 (65%), and T. gondii (56%). By contrast, fewer individuals were seropositive for CMV (35%), HSV-2 (21%), and H. pylori (18%) (Figure 1, Figure S2A and Table S2). The majority of healthy donors carried antibodies against 5 or more persistent/recurrent infections out of the 8 infectious agents tested (Figure S2B). 51% of MI donors were positive for anti-HBs IgG, the large majority of them as a result of vaccination, as only 15 study participants (3% of the anti-HBs positive group) were positive for anti-HBc IgG, indicative of previous HBV infection (spontaneously cured, as all donors were negative for HBs antigen, a criterion for inclusion in the study). For rubella, measles, and mumps, seropositivity rates were 94%, 91%, and 89%, respectively. For the majority of the donors, this likely reflects vaccination with a trivalent vaccine, which was integrated in 1984 as part of national recommendations in France, but for some, in particular the >40-year-old individuals of the cohort, it may reflect acquired immunity due to natural infection.

Associations of age, sex, and non-genetic variables with serostatus
Subjects included in the Milieu Interieur cohort were surveyed for a large number of variables related to infection and vaccination history, childhood diseases, health-related habits, and socio-demographical variables (http://www.milieuinterieur.fr/en/researchactivities/cohort/crf-data). Of these, 53 were chosen for subsequent analysis of their impact on serostatus. This selection is based on the one done in [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], with a few variables added, such as measures of lipids and CRP.
Applying a mixed model analysis that controls for potential confounders and batch effects, we found the expected associations of HBs seropositivity with previous administration of the HBV vaccine, as well as of influenza seropositivity with previous administration of the flu vaccine (Figure S3A and Table S4). We also found associations of HBs seropositivity with previous administration of typhoid and hepatitis A vaccines, which likely reflects co-immunization, as well as with income, employment, and owning a house, which likely reflects confounding epidemiological factors. We observed a significant impact of age on the probability of being seropositive for antigens from persistent or recurrent infectious agents and/or vaccines. For 14 out of the 15 examined serologies, older people (>45 years old) were more likely to have detectable specific IgG, with an odds ratio (OR; mean ± SD) of 5.4 ± 8.5 (Figure 2A, Figure S3B and Table S5). We identified four different profiles of age-dependent evolution of seropositivity rates (Figure 2B and Figure S4). Profile 1 is typical of childhood-acquired infection, i.e. microbes that most donors had encountered by age 20 (EBV, VZV, and influenza). We observed in this case either (i) a limited increase in seropositivity rate after age 20 for EBV; (ii) stability for VZV; or (iii) a small decrease in seropositivity rate with age for IAV (Figure S4A-E). Profile 2 concerns prevalent infectious agents that are acquired throughout life, with steadily increasing prevalence (observed for CMV, HSV-1, and T. gondii). We observed in this case either (i) a linear increase in seropositivity rates over the 5 decades of age for CMV (seropositivity rate: 24% in 20-29 year-olds; 44% in 60-69 year-olds; slope=0.02) and T. gondii (seropositivity rate: 21% in 20-29 year-olds; 88% in 60-69 year-olds; slope=0.08); or (ii) a non-linear increase in seropositivity rates for HSV-1, with a steeper slope before age 40 (seropositivity rate: 36% in 20-29 year-olds; 85% in 60-69 year-olds; slope=0.05) (Figure S4F-H). Profile 3 covers microbial agents with limited seroprevalence - in our cohort, HSV-2, HBV (anti-HBs and anti-HBc positive individuals, indicating prior infection rather than vaccination), and H. pylori. We observed a modest increase in seropositivity rates throughout life, likely reflecting continuous low-grade exposure (Figure S4I-K). Profile 4 is negatively correlated with increasing age and is unique to the HBV anti-HBs serology (Figure S4L). This reflects the introduction of the HBV vaccine in 1982 and the higher vaccination coverage of younger populations. Profiles for measles, mumps and rubella are provided in Figure S4M-O. We also observed a significant association between sex and serostatus for 7 of the 15 antigens, with a mean OR of 1.5 ± 0.5 (Figure 2C, Figure S3C and Table S5). For six serological phenotypes, women had a higher rate of positivity, IAV being the notable exception. These associations were confirmed when considering "Sharing house with partner" and "Sharing house with children" as covariates.

Impact of age and sex on total and antigen-specific antibody levels
We further examined the impact of age and sex on the levels of total IgG, IgM, IgA and IgE detected in the serum of the donors, as well as on the levels of antigen-specific IgGs in seropositive individuals. We observed a low impact of age and sex on total immunoglobulin levels (Figure 3A and Table S5), and of sex on specific IgG levels (mumps and VZV; Figure S5A and C).
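The age profiles described above can be summarised very simply: seropositivity rate per decade of age and the slope of seroprevalence against age. The Python sketch below shows that summary on simulated, CMV-like data; the simulated parameters are not the cohort's values.

# Sketch: seropositivity rate by age decade and a linear slope of rate vs. age.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
age = rng.uniform(20, 70, 1000)
seropos = rng.binomial(1, np.clip(0.2 + 0.008 * (age - 20), 0, 1))  # CMV-like profile (assumed)

decades = pd.cut(age, bins=[20, 30, 40, 50, 60, 70], right=False)
rate_by_decade = pd.Series(seropos).groupby(decades).mean()
slope, intercept = np.polyfit(age, seropos, deg=1)

print(rate_by_decade.round(2))
print("slope per year of age:", round(slope, 3))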
In contrast, age had a strong impact on specific IgG levels in seropositive individuals, affecting 10 out of the 15 examined serologies (Figure 3B, Figure S5B and Table S5). Correlations between age and IgG were mostly positive, i.e. older donors had more specific IgG than younger donors, as for example in the case of rubella (Figure 3C, left panel). The notable exception was T. gondii, where we observed lower amounts of specific IgG in older individuals (b=-0.013 (-0.019, -0.007), P=3.7 x 10^-6, Figure 3C, right panel).

Genome-wide association study of serostatus
To test if human genetic factors influence the rate of seroconversion upon exposure, we performed genome-wide association studies. Specifically, we searched for associations between 5.7 million common polymorphisms (MAF > 5%) and the 15 serostatus phenotypes in the 1,000 healthy donors. Based on our results regarding age and sex, we included both as covariates in all models. After correcting for the number of antigens tested, the threshold for genome-wide significance was Pthreshold = 3.3 x 10^-9, and we did not observe any significant association at this threshold. In particular, we did not replicate the previously reported associations with H. pylori serostatus on chromosomes 1 (rs368433, P = 0.67, OR = 0.93) and 4 (rs10004195, P = 0.83, OR = 0.97) [START_REF] Mayerle | Identification of genetic loci associated with Helicobacter pylori serologic status[END_REF]. We then focused on the HLA region and confirmed the previously published association of influenza A serostatus with specific amino acid variants of HLA class II molecules [START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF]. The strongest association in the MI cohort was found with residues at position 31 of the HLA-DRβ1 subunit (omnibus P = 0.009, Table S6). Residues found at that position, isoleucine (P = 0.2, OR (95% CI) = 0.8 (0.56, 1.13)) and phenylalanine (P = 0.2, OR (95% CI) = 0.81 (0.56, 1.13)), are consistent in direction and in almost perfect linkage disequilibrium (LD) with the glutamic acid residue at position 96 in HLA-DRβ1 that was identified in the previous study (Table S7). As such, our result independently validates the previous observation.

Genome-wide association study of total and antigen-specific antibody levels
To test whether human genetic factors also influence the intensity of the antigen-specific immune response, we performed genome-wide association studies of total IgG, IgM, IgA and IgE levels, as well as antigen-specific IgG levels. Using a significance threshold of Pthreshold < 1.3 x 10^-8, we found no SNPs associated with total IgG, IgM, IgE or IgA levels.
However, we observed nominal significance and the same direction of effect for 3 out of 11 loci previously published for total IgA [START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF][START_REF] Swaminathan | Variants in ELL2 influencing immunoglobulin levels associate with multiple myeloma[END_REF][START_REF] Viktorin | IgA measurements in over 12 000 Swedish twins reveal sex differential heritability and regulatory locus near CD30[END_REF][START_REF] Frankowiack | The higher frequency of IgA deficiency among Swedish twins is not explained by HLA haplotypes[END_REF][START_REF] Yang | Genome-wide association study identifies TNFSF13 as a susceptibility gene for IgA in a South Chinese population in smokers[END_REF], 1 out of 6 loci for total IgG [START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF][START_REF] Swaminathan | Variants in ELL2 influencing immunoglobulin levels associate with multiple myeloma[END_REF][START_REF] Liao | Genome-wide association study identifies common variants at TNFRSF13B associated with IgG level in a healthy Chinese male population[END_REF] and 4 out of 11 loci for total IgM [START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF][START_REF] Yang | Genome-wide scan identifies variant in TNFSF13 associated with serum IgM in a healthy Chinese male population[END_REF] (Table S8). Finally, we also report a suggestive association (genome-wide significant, P < 5.0 x 10^-8, but not significant when correcting for the number of immunoglobulin classes tested in the study) of SNP rs11186609 on chromosome 10 with total IgA levels (P = 2.0 x 10^-8, beta = -0.07 for the C allele). The closest gene to this signal is SH2D4B. We next explored associations between human genetic variants and antigen-specific IgG levels in seropositive donors (Pthreshold < 3.3 x 10^-9). We detected significant associations for anti-EBV (EBNA antigen) and anti-rubella IgGs. Associated variants were in both cases located in the HLA region on chromosome 6. For EBV, the top SNP was rs74951723 (P = 3 x 10^-14, beta = 0.29 for the A allele) (Figure 4A). For rubella, the top SNP was rs115118356 (P = 7.7 x 10^-10, beta = -0.11 for the G allele) (Figure 4B). rs115118356 is in LD with rs2064479, which has previously been reported as associated with titers of anti-rubella IgGs (r^2 = 0.53 and D' = 0.76) [START_REF] Lambert | Polymorphisms in HLA-DPB1 are associated with differences in rubella virusspecific humoral immunity after vaccination[END_REF]. To fine-map the associations observed in the HLA region, we tested 4-digit HLA alleles and variable amino acid positions in HLA proteins. At the level of HLA alleles, HLA-DQB1*03:01 showed the lowest P-value for association with EBV EBNA (P = 1.3 x 10^-7), and HLA-DPB1*03:01 was the top signal for rubella (P = 3.8 x 10^-6). At the level of amino acid positions, position 58 of the HLA-DRβ1 protein was associated with anti-EBV (EBNA antigen) IgG levels (P = 2.5 x 10^-11).
This is consistent with results of previous studies linking genetic variation in HLA-DRβ1 with levels of anti-EBV EBNA-specific IgGs [START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF][START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF][START_REF] Pedergnana | Combined linkage and association studies show that HLA class II variants control levels of antibodies against Epstein-Barr virus antigens[END_REF] (Table S9). In addition, position 8 of the HLA-DPβ1 protein was associated with anti-rubella IgG levels (P = 1.1 x 10^-9, Table 1). Conditional analyses on these amino acid positions did not reveal any additional independent signals.

KIR associations
To test whether specific KIR genotypes, and their interaction with HLA molecules, are associated with humoral immune responses, we imputed KIR alleles from SNP genotypes using KIR*IMP. First, we searched for potential associations with serostatus or IgG levels for 16 KIR alleles that had a MAF > 1%. After correction for multiple testing, we did not find any significant association (Pthreshold < 2.6 x 10^-4). Second, we tested specific KIR-HLA combinations. We filtered out rare combinations by removing pairs that were observed fewer than 4 times in the cohort. After correction for multiple testing (Pthreshold < 5.4 x 10^-7), we observed significant associations between total IgA levels and the two following HLA-KIR combinations: HLA-B*14:02/KIR3DL1 and HLA-C*08:02/KIR2DS4 (P = 3.9 x 10^-9 and P = 4.9 x 10^-9, respectively, Table 2).

Burden testing for rare variants
Finally, to search for potential associations between the burden of low-frequency variants and the serological phenotypes, we conducted a rare variant association study. This analysis only included variants annotated as missense or putative loss-of-function (nonsense, essential splice-site and frame-shift, N = 84,748), which we collapsed by gene and tested together using the kernel-regression-based association test SKAT [START_REF] Ionita-Laza | Sequence kernel association tests for the combined effect of rare and common variants[END_REF]. We restricted our analysis to genes that contained at least 5 variants. Two genes were identified as significantly associated with total IgA levels using this approach: ACADL (P = 3.4 x 10^-11) and TMEM131 (P = 7.8 x 10^-11) (Table 3). By contrast, we did not observe any significant associations between rare variant burden and antigen-specific IgG levels or serostatus.

Discussion
We performed genome-wide association studies for a number of serological phenotypes in a well-characterized age- and sex-stratified cohort, and included a unique examination of genetic variation at HLA and KIR loci, as well as KIR-HLA associations. As such, our study provides a broad resource for exploring the variability in humoral immune responses across different isotypes and different antigens in humans. Using a fine-mapping approach, we replicated the previously reported associations of variation in the HLA-DRβ1 protein with influenza A serostatus and anti-EBV IgG titers [START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF][START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF], implicating amino acid residues in strong LD with the ones previously reported (Hammer et al.).
We also replicated an association between HLA class II variation and anti-rubella IgG titers [START_REF] Lambert | Polymorphisms in HLA-DPB1 are associated with differences in rubella virusspecific humoral immunity after vaccination[END_REF], and further fine-mapped it to position 8 of the HLA-DPβ1 protein. Interestingly, position 8 of HLA-DPβ1, as well as positions 58 and 31 of HLA-DRβ1, are all part of the extracellular domain of the respective proteins. Our findings confirm these proteins as critical elements for the presentation of processed peptides to CD4+ T cells, and as such may reveal important clues about the fine regulation of class II antigen presentation. We also identified specific HLA/KIR combinations, namely HLA-B*14:02/KIR3DL1 and HLA-C*08:02/KIR2DS4, which associate with higher levels of circulating IgA. Given the novelty of the KIR imputation method and the current impossibility of benchmarking its reliability in the MI cohort, further replication of these results will be needed. Yet these findings support the concept that variations in the sequence of HLA class II molecules, or specific KIR/HLA class I interactions, play a critical role in shaping humoral immune responses in humans. In particular, our findings confirm that small differences in the capacity of HLA class II molecules to bind specific viral peptides can have a measurable impact on downstream antibody production. As such, our study emphasizes the importance of considering HLA diversity in disease association studies where associations between IgG levels and autoimmune diseases are being explored. We identified nominal significance for some but not all of the previously reported associations with levels of total IgG, IgM and IgA, as well as a suggestive association of total IgA levels with an intergenic region on chromosome 10, the closest gene being SH2D4B. By collapsing the rare variants present in our dataset into gene sets and testing them for association with the immunoglobulin phenotypes, we identified two additional loci that contribute to natural variation in IgA levels. These associations mapped to the genes ACADL and TMEM131. ACADL encodes an enzyme with long-chain acyl-CoA dehydrogenase activity, and polymorphisms have been associated with pulmonary surfactant dysfunction [START_REF] Goetzman | Long-chain acyl-CoA dehydrogenase deficiency as a cause of pulmonary surfactant dysfunction[END_REF]. As the same gene is associated with levels of circulating IgA in our cohort, we speculate that ACADL could play a role in regulating the balance between mucosal and circulating IgA. Further studies will be needed to test this hypothesis, as well as the potential impact of our findings in other IgA-related diseases. We were not able to replicate previous associations of the TLR1 and FCGR2A loci with serostatus for H. pylori [START_REF] Mayerle | Identification of genetic loci associated with Helicobacter pylori serologic status[END_REF]. We believe this may be a result of notable differences in previous exposure among the different cohorts, as illustrated by the different levels of seropositivity: 17% in the Milieu Intérieur cohort versus 56% in the previous ones, which reduces the likelihood of replication due to decreased statistical power. In addition to genetic findings, our study re-examined the impact of age and sex, as well as non-genetic variables, on humoral immune responses. Although this question has been previously addressed, our well-stratified cohort brings interesting additional insights. One interesting finding is the high rate of seroconversion for CMV, HSV-1, and T.
gondii during adulthood. In our cohort, the likelihood of being seropositive for one of these infections is comparable at age 20 and 40. Given the high prevalence of these microbes in the environment, this raises questions about the factors that prevent some individuals from becoming seropositive upon late-life exposure. Second, both age and sex have a strong correlation with serostatus, i.e. older and female donors were more likely to be seropositive. Although increased seropositivity with age probably reflects continuous exposure, the sex effect is intriguing. Indeed, our study considered humoral responses to microbial agents that differ significantly in terms of physiopathology and that do not necessarily have a childhood reservoir. Also, our analyses show that associations persist after removal of potential confounding factors such as marital status and/or number of children. As such, we believe that our results may highlight a general impact of sex on humoral immune response variability, i.e. a tendency for women to be more likely to seroconvert after exposure, as compared to men of the same age. This result is in line with observations from vaccination studies, where women responded to lower vaccine doses [39]. Finally, we observed an age-related increase in antigen-specific IgG levels in seropositive individuals for most serologies, with the notable exception of toxoplasmosis. This may indicate that aging plays a general role in IgG production. An alternative explanation that requires further study is that this could be the consequence of reactivation or recurrent exposure. In sum, our study provides evidence that age, sex and host genetics contribute to natural variation in humoral responses in humans. The identified associations have the potential to help improve vaccination strategies, and/or dissect pathogenic mechanisms implicated in human diseases related to immunoglobulin production such as autoimmunity.

Fig. 3 Age and sex impact on total and antigen-specific antibody levels. (A) Relationships between Log10-transformed IgG (upper left), IgA (upper right), IgM (bottom left) and IgE (bottom right) levels and age.
Regression lines were fitted using linear regression, with Log10-transformed total antibody levels as response variable, and age and sex as treatment variables. Indicated adj. P were obtained using the mixed model, and corrected for multiple testing using the FDR method. (B) Effect sizes of significant associations (adj. P < 0.05) between age and Log10-transformed antigen-specific IgG levels in the 1,000 healthy individuals from the Milieu Intérieur cohort. Effect sizes were estimated in a linear mixed model, with Log10-transformed antigen-specific IgG levels as response variables, and age and sex as treatment variables. Dots represent the mean of the beta. Lines represent the 95% confidence intervals. (C) Relationships between Log10-transformed anti-rubella IgGs (left), and Log10-transformed anti-Toxoplasma gondii IgGs (right) and age. Regression lines were fitted using the linear regression described in (B). Indicated adj. P were obtained using the mixed model, and corrected for multiple testing using the FDR method.

Fig. 4 Association between host genetic variants and serological phenotypes. Manhattan plots of association results for (A) EBV anti-EBNA IgG, (B) Rubella IgG levels. The dashed horizontal line denotes genome-wide significance (P = 3.3 x 10^-9).

Fig. S1 Distribution of serological variables, and clinical threshold used for determination of serostatus. (A) Distribution and probability density curve of Log10-transformed IgG, IgM, IgA, IgE levels in the 1,000 study participants. (B) Distribution of Log10-transformed antigen-specific IgG levels. The vertical lines indicate the clinical threshold determined by the manufacturer, and used for determining the serostatus of the donors for each serology.

Fig. S2 Seroprevalence data in the 1,000 healthy donors. (A) Percentage of seropositive donors for each indicated serology in the MI study (for HBV serology, percentages of anti-HBs IgGs are indicated). (B) Distribution of the number of positive serologies in the 1,000 healthy donors regarding the 8 persistent or recurrent infections tested in our study (i.e. CMV, Influenza, HSV1, HSV2, TP, EBV_EBNA, VZV, HP).

Fig. S3 Impact of non-genetic factors, age, and sex on serostatus. (A) Adjusted P-values (FDR) of the large-sample chi-square likelihood ratio tests of effect of non-genetic variables on serostatus, obtained from mixed models. (B-C) Adjusted P-values (adj. P) of the tests of effect of age (<45 = reference, vs. >45 years old) (B) and sex (Men = reference, vs. Women) (C) on serostatus, obtained using a generalized linear mixed model, with serostatus as response variables, and age and sex as treatment variables. Odds ratios were color-coded. The vertical black line indicates the -log10 of the chosen threshold for statistical significance (-log10(0.05) = 1.30103).

Fig. S4 Evolution of serostatus with age and sex. (A-O) Odds of being seropositive for each of the 15 antigens considered in our study, as a function of age in men (blue) and women (red). Indicated P-values were obtained using a logistic regression with Wald test, with serostatus binary variables (seropositive, versus seronegative) as the response, and age and sex as covariates.

Fig. S5 Impact of age and sex on antigen-specific IgG levels. (A) Effect sizes of significant associations (adj.
P < 0.05) between sex and Log10-transformed antigen-specific IgG levels in the 1,000 healthy individuals from the Milieu Intérieur cohort. Effect sizes were estimated in the linear mixed model described in Figure 3B. Dots represent the mean of the beta. Lines represent the 95% confidence intervals. (B-C) Adjusted P-values (adj. P) of the tests of effect of age (B) and sex (C) on Log10-transformed antigen-specific IgG levels, obtained using a linear mixed model, with Log10-transformed antigen-specific IgG levels as response variables, and age and sex as treatment variables. Normalized effect sizes were color-coded. The vertical black line indicates the -log10 of the threshold for statistical significance (-log10(0.05) = 1.30103).

The clinical study was approved by the Comité de Protection des Personnes - Ouest 6 on June 13th, 2012, and by the French Agence Nationale de Sécurité du Médicament (ANSM) on June 22nd, 2012. The study is sponsored by Institut Pasteur (Pasteur ID-RCB Number: 2012-A00238-35), and was conducted as a single-center study without any investigational product. The protocol is registered under ClinicalTrials.gov (study# NCT01699893).

Table 1. Significant associations with EBV EBNA and Rubella antigens at the level of SNP, HLA allele and protein amino acid position (phenotypes: EBV EBNA IgG levels; Rubella IgG levels).

Table 2. Association testing between KIR-HLA interactions and serology phenotypes.
Phenotype | KIR | HLA | Estimate | Std. Error | P-value
IgA levels | KIR3DL1 | HLA-B*14:02 | 0.456 | 0.077 | 3.9 x 10^-9
IgA levels | KIR2DS4 | HLA-B*14:02 | 0.454 | 0.077 | 4.5 x 10^-9
IgA levels | KIR3DL1 | HLA-C*08:02 | 0.449 | 0.076 | 4.9 x 10^-9
IgA levels | KIR2DS4 | HLA-C*08:02 | 0.448 | 0.076 | 5.7 x 10^-9

Table 3. Significant associations of rare variants collapsed per gene set with IgA levels.
Phenotype | Chromosome | Gene | P-value | Q | No. of Rare Markers | No. of Common Markers
IgA levels | 2 | ACADL | 7.83 x 10^-11 | 17.89 | 5 | 2
IgA levels | 2 | TMEM131 | 3.42 x 10^-11 | 18.09 | 13 | 2

39. Giefing-Kröll C, Berger P, Lepperdinger G, Grubeck-Loebenstein B. How sex and age affect immune responses, susceptibility to infections, and response to vaccination. Aging Cell. 2015 Jun;14(3):309-21.

Figure legends
Fig. 1 Overview of the study. Serum samples from the 1,000 age- and sex-stratified healthy individuals of the Milieu Intérieur cohort were used for measuring total antibody levels (IgA, IgM, IgG and IgE), as well as for qualitative (serostatus) and quantitative (IgG levels) assessment of IgG responses against cytomegalovirus, Epstein-Barr virus (anti-EBNA, anti-VCA, anti-EA), herpes simplex virus 1 & 2, varicella zoster virus, Helicobacter pylori, Toxoplasma gondii, influenza A virus, measles, mumps, rubella, and hepatitis B virus (anti-HBs and anti-HBc), using clinical-grade serological assays.
Fig. 2 Age and sex impact on serostatus. (A) Odds ratios of significant associations (adj. P < 0.05) between age (<45 = reference, vs. >45 years old) and serostatus as determined based on clinical-grade serologies in the 1,000 healthy individuals from the Milieu Intérieur cohort. Odds ratios were estimated in a generalized linear mixed model, with serostatus as response variable, and age and sex as treatment variables. Dots represent the mean of the odds ratios. Lines represent the 95% confidence intervals.
(B) Odds of being seropositive towards EBV EBNA (Profile 1; upper left), Toxoplasma gondii (Profile 2; upper right), Helicobacter pylori (Profile 3; bottom left), and the HBs antigen of HBV (Profile 4; bottom right), as a function of age in men (blue) and women (red) in the 1,000 healthy donors. Indicated P-values were obtained using a logistic regression with Wald test, with serostatus binary variables (seropositive versus seronegative) as the response, and age and sex as treatments. (C) Odds ratios of significant associations (adj. P < 0.05) between sex (Men = reference, vs. Women) and serostatus. Odds ratios were estimated in a generalized linear mixed model, with serostatus as response variable, and age and sex as treatment variables. Dots represent the mean of the odds ratios. Lines represent the 95% confidence intervals.

Acknowledgements
This work benefited from support of the French government's Invest in the Future Program, managed by the Agence Nationale de la Recherche (ANR, reference 10-LABX-69-01). It was also supported by a grant from the Swiss National Science Foundation (31003A_175603, to JF). C.A. received a postdoctoral fellowship from Institut National de la Recherche Médicale.
48,086
[ "1030290", "1191687", "179015", "786314", "756191", "754080", "746816", "764031" ]
[ "302851", "241012", "462515", "463040", "241012", "302851", "302851", "241012", "302851", "241012", "31970", "463018", "302851", "241012", "302851", "241012", "463040", "530926", "31970", "463018", "462515", "463040", "323938", "302851", "241012" ]
01758764
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.sorbonne-universite.fr/hal-01758764/file/posture-author-version.pdf
Yvonne Jansen email: [email protected]
Kasper Hornbæk

How Relevant are Incidental Power Poses for HCI?

Keywords: Incidental postures, power pose, Bayesian analysis.
ACM Classification: H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous

Figure 1. User interfaces can take on a variety of forms affording different body postures. We studied two types of postures: constrictive (A, C) and expansive (B, D); in two settings: on a wall-sized display (A, B) and on a large touchscreen (C, D).

INTRODUCTION
In 2010, Carney et al. asserted that "a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful [which] has real-world, actionable implications" [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF], thereby coining the concept of power poses. However, incidental body postures may only be leveraged in HCI if they can be reliably elicited. In 2015, a large-scale replication project [START_REF]Estimating the reproducibility of psychological science[END_REF] re-opened the files on 100 published experiments and found that a considerable number of reported effects did not replicate, leading to the so-called "replication crisis" in Psychology. Neither the study by Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] nor the one by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] was among the replicated studies, but multiple high-powered, pre-registered studies have since failed to establish a link between power poses and various behavioral measures [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF][START_REF] Garrison | Embodying power: a preregistered replication and extension of the power pose effect[END_REF][START_REF] Victor N Keller | Meeting your inner super (wo) man: are power poses effective when taught?[END_REF][START_REF] Ronay | Embodied power, testosterone, and overconfidence as a causal pathway to risk-taking[END_REF][START_REF] Bailey | Could a woman be superman? Gender and the embodiment of power postures[END_REF][START_REF] Bombari | Real and imagined power poses: is the physical experience necessary after all?[END_REF][START_REF] Jackson | Does that pose become you? Testing the effect of body postures on self-concept[END_REF][START_REF] Latu | Power vs. persuasion: can open body postures embody openness to persuasion[END_REF][START_REF] Klaschinski | Benefits of power posing: effects on dominance and social sensitivity[END_REF]. While a Bayesian meta-analysis of six pre-registered studies [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF] provides credible evidence for a small effect of power poses on self-reported felt power (d ≈ 0.2), the practical relevance of this small effect remains unclear [START_REF] Jonas | Power poses-where do we stand?[END_REF]. It should be noted that all of the failed replications focused on explicitly elicited postures as studied by Carney et al.
[START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF], that is, participants were explicitly instructed to take on a certain posture and afterwards were tested on various measures. Most relevant to HCI are, however, the experiments by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] on incidental power poses, which so far appear to have not been replicated or refuted. Thus it remains unclear whether these effects replicate in an HCI context, and we offer the following contributions with this article:
• We operationalize power poses as incidental body postures which can be brought about by interface and interaction design.
• In a first experiment, we measure effects on self-reported felt power. Our results on their own are inconclusive, as our data are consistent with a wide range of possible effect sizes, including zero.
• In a second experiment, we measure behavioral effects on risk-taking behavior while playing a computer game. Results indicate that the manipulation of incidental body posture does not predict willingness to take risks.

BACKGROUND
In this section we clarify the terminology we use in this article, motivate our work through two scenarios, and summarize work on body posture in HCI and previous work in Psychology, including the recent controversies around power poses.

Postures versus Gestures
Our use of the terms posture and gesture is consistent with the definitions of the American Heritage Dictionary:
posture: position of a person's body or body parts
gesture: a motion of the limbs or body made to express or help express thought or to emphasize speech.
Accordingly, a gesture could be described as a dynamic succession of different hand, arm, or body postures. This article is mainly concerned with body postures as we are interested in features of postures "averaged" over the course of interaction, for example, the overall expansiveness of someone's posture during the use of a system.

Motivation
Within a classic desktop environment, that is, a desktop computer equipped with an external display, keyboard, and mouse, a user interface designer has little influence on a user's posture besides requiring or avoiding frequent changes between keyboard and mouse, or manipulating the mouse transfer function. As device form factors diversified, people now find themselves using computers in different environments such as small mobile phones or tablets while sitting, standing, walking, or lying down, or large touch-sensitive surfaces while sitting or standing. Device form factors combined with interface design can thus impose postures on the user during interaction. For example, an interface requiring two-handed interaction on a small-screen device (phone, tablet, or laptop) requires that users bring together both hands and their gaze, thereby leading to a constrictive incidental posture. On a large touchscreen interface, a UI designer can spread out elements, which would require more reaching and lead to more expansive incidental postures (see Fig. 1B and D), or use techniques to bring elements closer together (e.g., [START_REF] Bezerianos | The Vacuum: Facilitating the Manipulation of Distant Objects[END_REF]) which can make postures more constrictive (see Fig. 1A and C).
We now sketch two scenarios to illustrate how work on body posture from Psychology applies to HCI and why it is relevant for UI design guidelines to determine whether expansive and constrictive body postures during interface use can influence people's motivation, behavior, or emotions. Education Riskind and Gotay [START_REF] John | Physical posture: Could it have regulatory or feedback effects on motivation and emotion[END_REF] reported that expansive postures led to a higher persistence when solving problems while people in constrictive postures gave up more easily. Within the area of interfaces for education purposes, say, in schools, it would be important to know whether learning environments designed for small tablet devices incur detrimental effects due to incidental constrictive postures during their use. Should this be the case then design guidelines for such use cases would need to be established recommending larger form factors combined with interfaces leading to more expansive postures. Risky decision making Yap et al. reported that driving in expansive car seats leads to riskier driving in a driving simulation [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. Some professions require decision making under risk on a regular basis, such as air traffic controllers, power plant operators, or financial brokers. Should the interface designs used in such professions (e.g., see Fig. 2) have an influence on people's decisions, then it would be important to minimize such effects accordingly. However, currently we know neither whether such effects exist in these contexts nor how they would need to be counteracted, should they exist. Body Posture in HCI The role of the body in HCI has been receiving increased attention. Dourish [START_REF] Dourish | Where the action is: the foundations of embodied interaction[END_REF] as well as Klemmer and colleagues [START_REF] Scott R Klemmer | How Bodies Matter: Five Themes for Interaction Design[END_REF] emphasize in a holistic manner the importance of the body for interaction design and also consider social factors. However, its role as a feedback channel to emotion and cognition has remained largely unstudied within HCI. Body postures have received most attention in the context of affective game design. Bianchi-Berthouze and colleagues studied people's body movements during video-game play and how different gaming environments such as desktop or body-controlled games lead to different movement types [START_REF] Bianchi-Berthouze | On posture as a modality for expressing and recognizing emotions[END_REF][START_REF] Pasch | Immersion in Movement-Based Interaction[END_REF]. Savva and colleagues studied how players' emotions can be automatically recognized from their body movements and be used as indicators of aesthetic experience [START_REF] Savva | Continuous Recognition of Player's Affective Body Expression as Dynamic Quality of Aesthetic Experience[END_REF], and Bianchi-Berthouze proposed a taxonomy of types of body movements to facilitate the design of engaging game experiences [START_REF] Bianchi-Berthouze | Understanding the Role of Body Movement in Player Engagement[END_REF]. While this body of work also builds on posture work from Psychology, their interest is in understanding the link between players' body movement and affective experience, not on testing downstream effects of postures on behavior in an HCI context. Isbister et al. 
[START_REF] Isbister | Scoop!: using movement to reduce math anxiety and affect confidence[END_REF] presented scoop!, a game using expansive body postures with the intention of overcoming math anxiety in students. The focus of this work is on motivating and describing the system; it does not include empirical data. Snibbe and Raffle [START_REF] Scott | Social immersive media: pursuing best practices for multi-user interactive camera/projector exhibits[END_REF] report on their use of body posture and gestures to imbue visitors of science museums with intended emotions. Only little empirical work on concrete effects directly related to the work in Psychology has been published so far. De Rooij and Jones [START_REF] De | E)Motion and Creativity: Hacking the Function of Motor Expressions in Emotion Regulation to Augment Creativity[END_REF] studied gesture pairs based on these ideas. Their work builds on the hypothesis that movements are related to approach and avoidance behaviors, and therefore inherently linked to emotion. They test the hypothesis through an application for creative tasks such as idea generation. In one variant of their application, users extend their arm to record ideas (avoidance gesture); in another variant, they move their arm towards their body (approach gesture). Results show that avoidance gestures lead to lower creativity and more negative emotion than approach gestures. Two studies within Psychology made use of interactive devices to manipulate incidental postures: (1) Hurtienne and colleagues report in an abstract that sitting hunched or standing upright during the use of a touchscreen leads to different behaviors in a dictator game [START_REF] Hurtienne | Zur Ergonomie prosozialen Verhaltens: Kontextabhängige Einflüsse von Körperhaltungen auf die Ergebnisse in einem Diktatorspiel [On the ergonomics of prosocial behaviour: context-dependent influences of body posture on the results in a dictator game[END_REF]. If participants were primed with "power concepts" they behaved in a more self-interested manner in an upright posture; if they were primed with "moral concepts" the effect was reversed. (2) Bos and Cuddy published a tech report [START_REF] Maarten | iPosture: The size of electronic consumer devices affects our behavior[END_REF] on a study linking display size to willingness to wait. They asked participants to complete a series of tasks, and then let them wait in a room with the device they were using for the tasks (iPod, iPad, MacBook Pro, or iMac). The smaller the device, the longer participants waited and the less likely they were to go looking for the experimenter. As no details are given about participants' actions during the waiting time (such as playing around with the device or not) nor about the postures participants took on while using the devices, it is unclear whether this correlation can indeed be linked solely to the different display sizes. Further studies are required to determine causal effects due to differences in postures.

Effects of Body Posture on Thought and Behavior
In Psychology, body posture has been linked to a wide range of behavioral and affective effects [START_REF] Cacioppo | Rudimentary determinants of attitudes.
II: Arm flexion and extension have differential effects on attitudes[END_REF][START_REF] Yang | Embodied memory judgments: a case of motor fluency[END_REF][START_REF] Raedy M Ping | Reach For What You Like: The Body's Role in Shaping Preferences[END_REF][START_REF] Jasmin | The QWERTY effect: how typing shapes the meanings of words[END_REF][START_REF] Tom | The Role of Overt Head Movement in the Formation of Affect[END_REF]. We focus here only on those closely related to the expansive versus constrictive dyad of power poses. In 1982, Riskind and Gotay [START_REF] John | Physical posture: Could it have regulatory or feedback effects on motivation and emotion[END_REF] presented four experiments on the relation between physical posture and motivation to persist on tasks. They asked participants to take on either slumped or expansive, upright postures. The former group gave up much faster on a standardized test for learned helplessness than the latter group whereas both groups gave similar self-reports. More recently, the popular self-help advice to take on a "power pose" before delivering a speech has been linked by multiple studies to increases in confidence, risk tolerance, and even testosterone levels [START_REF] John | Physical posture: Could it have regulatory or feedback effects on motivation and emotion[END_REF][START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF][START_REF] Amy | Preparatory power posing affects nonverbal presence and job interview performance[END_REF]. Further, Yap and colleagues reported that expansiveness of postures can also affect people's honesty [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. In contrast to Carney et al., the latter explicitly studied incidental postures, that is, postures imposed by the environment such as a small versus a large workspace or a narrow versus a spacious car seat. Their research suggests that expansive or constrictive postures which are only incidentally imposed by environments (thus allowing more variation between people's postures), can affect people's honesty: people interacting in workspaces that impose expansive postures are supposedly "more likely to steal money, cheat on a test, and commit traffic violations in a driving simulation" [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. Recent Controversies In 2015, Ranehill et al. [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF] published a high-powered replication attempt of Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] contradicting the original paper. Carney and colleagues responded with an analysis of the differences between their study and the failed replication [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF]. They aimed to identify potential moderators by comparing results from 33 studies, to provide alternative explanations for the failed replication. 
They indicate three variables which they believe most likely determine whether an experiment will detect the predicted effect: (i) whether participants were told a cover story (Carney) or the true purpose of the study (Ranehill), (ii) how long participants had to hold the postures, i.e., comfort, (Carney 2 x 1 min, Ranehill 2 x 3 min) and (iii) whether the study was placed in a social context, i.e., "either a social interaction with another person [...] during the posture manipulation or participants were engaging in a real or imagined social task" [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF] (Carney yes, Ranehill no). In 2016, Carney published a statement on her website where she makes "a number of methodological comments" regarding her 2010 article and expresses her updated belief that these effects are not real [START_REF] Carney | My position on "Power Poses[END_REF]. She went on to co-edit a special issue of a psychology journal containing seven pre-registered replication attempts of power pose experiments testing the above discussed moderators to provide "a 'final word' on the topic" [START_REF] Cesario | CRSP special issue on power poses: what was the point and what did we learn[END_REF]. All studies included the self-reported sense of power and one of the following behavioral measures: risk-taking (gambling), performance in mock job interviews, openness to persuasive messages, or self-concept content and size (number and quality of self descriptors). While none of the studies included in the special issue found evidence for behavioral effects, a Bayesian meta-analysis combining the individual results on felt power found a reliable small effect (d ≈ 0.2) [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF]. Outside of Psychology the methodology of power pose studies was criticized by statisticians such as Andrew Gelman and Kaiser Fung who argued that most of the published findings on power poses stem from low-powered studies and were likely due to statistical noise [START_REF] Gelman | The Power of the "Power Pose[END_REF]. Other statisticians analyzed the evidence base collected by Carney et al. [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF] using a method called p-curve analysis [START_REF] Simonsohn | p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results[END_REF] whose purpose is to analyze the strength of evidence for an effect while correcting for publication bias. Their analyses "conclusively reject the null hypothesis that the sample of existing studies examines a detectable effect" [START_REF] Joseph | Power Posing: P-Curving the Evidence[END_REF][START_REF] Simmons | Power Posing: Reassessing The Evidence Behind The Most Popular TED Talk[END_REF]. OBJECTIVES OF THIS ARTICLE At this point it seems credible that at least some of the initially reported effects of power poses are nonexistent. 
Claims related to hormone changes have been definitively refuted [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF][START_REF] Ronay | Embodied power, testosterone, and overconfidence as a causal pathway to risk-taking[END_REF], and none of the recent replications was able to detect a reliable effect on the tested behavioral measures [START_REF] Jonas | Power poses-where do we stand?[END_REF]. Nonetheless, a small effect on felt power seems credible [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF]. It is, however, still unclear whether "this effect is a methodological artifact or meaningful" [START_REF] Cesario | CRSP special issue on power poses: what was the point and what did we learn[END_REF]: demand characteristics are an alternative explanation for the effect, that is, participants' responses could be due to the context of the experiment during which they are explicitly instructed to take on certain postures, which may suggest to participants that these postures must be a meaningful experimental manipulation. Such demand characteristics have previously been shown to be explanatory for an earlier finding claiming that people wearing a heavy backpack perceive hills as steeper (see Bhalla and Proffitt [START_REF] Bhalla | Visual-motor recalibration in geographical slant perception[END_REF] for the original study and Durgin et al. [START_REF] Frank H Durgin | The social psychology of perception experiments: hills, backpacks, glucose, and the problem of generalizability[END_REF] for an extended study showing that the effect can be attributed to demand characteristics). As all recent replications focused on explicitly elicited postures, i.e., participants were explicitly instructed by experimenters to take on a certain posture, demand characteristics are indeed a plausible alternative explanation. This is, however, much less plausible for studies concerned with incidental postures. For the latter, participants are simply instructed to perform a task within an environment, as for a typical HCI experiment, without being aware that different types of environments are part of the experiment, thereby reducing demand characteristics.

Rationale
The experiments on incidental postures reported by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] have to our knowledge so far not been replicated or refuted. Thus it is currently unclear whether the behavioral effects reported in these experiments can be reproduced and whether they are relevant for HCI. We argue that the potential impact of such effects for HCI justifies more studies to determine whether the effect exists and, if so, under which conditions the effect can be reproduced. We investigate the potential effects of incidental power poses in two HCI scenarios: first when interacting with a touch-operated wall-sized display, then when interacting with a large tabletop display. We consider both self-reported sense of power (experiment 1) and risk-taking behavior (experiment 2) as potential outcomes, similar to the studies reported by Yap et al. Again, we only consider incidental postures, that is, postures that are the result of a combination of device form factor and interface layout.
As we are only interested in studying whether these two factors alone can produce a reliable effect, we do not control for possible variations in posture which are beyond these two factors, such as whether people sit straight up or cross their legs, since controlling for these would make demand characteristics more likely. Instead, our experiment designs only manipulate factors which are in the control of a UI designer. In particular, we identify the following differences from previous work in Psychology:
• The existing body of work on power poses comes from the Psychology community where postures were carefully controlled by experimenters. We only use device form factors and interface design to impose postures on participants.
• We do not separate a posture manipulation phase and a test phase (in experiment 2) but integrate the two, which is more relevant in an HCI context.
• Similar to the existing literature, we measure felt power and risk-taking behavior. In contrast to previous studies, which measured risk-taking behavior only through binary choices (one to three opportunities to take a gamble), we use a continuous measure of risk-taking.
• For exploratory analysis, we additionally collect a task-relevant potential covariate that has been ignored in previous work: people's baseline tendency to act impulsively (i.e., to take risks).

EXPERIMENT 1: WALL DISPLAY
In a first experiment, we tested for an effect of incidental posture while interacting with a touch-operated wall display. We asked 44 participants who had signed up for an unrelated pointing experiment whether they were interested in first participating in a short, unrelated "pilot study" which would only last about 3 min. All 44 participants agreed and gave informed consent. The experimenter, who was blind to the experimental hypothesis, instructed participants to stand in front of a 3 m x 1.2 m wall display, and started the experimental application. The experiment was between-subjects and participants were randomly assigned to receive either instructions for a constrictive interface or an expansive interface. Instructions were shown on the display, and participants were encouraged to confirm with the experimenter if something was unclear to them. To make the interface independent of variances in height and arm span, participants were asked to adapt it to their reach. Participants in the expansive condition were instructed to "move these two circles such that they are located at the height of your head, then move them apart as far as possible so that you can still comfortably reach both of them" (as in Figure 1B). In the constrictive condition, participants were asked to "move these two circles such that you can comfortably tap them with your index fingers while keeping your elbows close to your body" (as in Figure 1A). Once participants had adjusted the position of the circles, they were instructed that the experiment would start once they tapped one of the targets, and that they should continue to alternately tap the two targets for 90 sec. In comparison, Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] used two poses for one minute each, Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] (study 1) one pose for one minute, and Ranehill et al.
[START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF] two poses for three minutes each. After participants finished the tapping task, the experimenter handed them a questionnaire inquiring about their level of physical discomfort (3 items); then, on a second page, participants were asked how powerful and in charge they felt at that moment (2 items), similar to Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] but on a 7-point scale instead of their 4-point scale. An a priori power analysis using the G*Power tool [START_REF] Faul | G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences[END_REF] indicated that we would require 48 participants if we hypothesized a large effect size of d = 0.73 and aimed for statistical power of 0.8. Since the experiment only took 3 min to complete, and we relied on participants coming in for an unrelated experiment, we stopped after 44 participants, resulting in an a priori power of 0.76.

Results
Figures 3-5 summarize the results of experiment 1. The respective left charts show histograms of the responses (7-point Likert scale), the charts in the middle show estimates of means with 95% bootstrap confidence intervals, and the right charts show the respective differences between the means, also with 95% bootstrap confidence intervals. Felt power. As Figure 3 indicates, our data result in large confidence intervals and their difference clearly includes zero. The estimated effect size is d = 0.38 [-0.23, 1.05] 95% CI. There might be a small effect, but to confirm an effect with d = 0.38 and statistical power of 0.76, we would need to run 156 participants. A higher-powered experiment aiming for 0.95 statistical power would even require 302 participants. Sense of feeling in charge. For the feeling in charge item we find no overall difference between the two postures. We should note here that this item caused confusion among participants as many asked the experimenter to explain what the question meant. The experimenter was instructed to advise participants to simply answer what they intuitively felt, which might have led to random responses. We nonetheless report our results in Figure 4. Discomfort. The discomfort measure is derived from three items, inquiring about participants' impressions of how difficult, fatiguing, and painful they found the task. Ratings across the three items were generally similar, thus we computed one derived measure, discomfort, from an equal-weighted linear combination of the three items. Here, we find a very large effect between the postures, with expansive being rated as leading to much higher levels of discomfort (Fig. 5), with d = 1.53 [0.84, 2.30] 95% CI. Bayesian Meta-Analysis on the Power Item. Gronau et al. [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF] made the data and R scripts for their Bayesian meta-analysis of six pre-registered studies measuring felt power available (see osf.io/fxg32). This allowed us to rerun their analysis including our data. Figure 6 shows the results of the analysis for the original meta-analysis and for the extension including our results.
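For readers unfamiliar with the interval estimates reported above, the following sketch (R) illustrates one way such a bootstrap confidence interval for the difference in mean ratings could be computed. The ratings below are placeholders (22 per group, 7-point scale), and the exact bootstrap variant used for Figures 3-5 is not stated in this section, so treat the percentile approach shown here as an assumption.

```r
# Illustrative sketch only: percentile bootstrap CI for the difference in means.
set.seed(42)
expansive    <- sample(1:7, 22, replace = TRUE)   # hypothetical ratings, expansive group
constrictive <- sample(1:7, 22, replace = TRUE)   # hypothetical ratings, constrictive group

boot_diff <- replicate(10000, {
  mean(sample(expansive, replace = TRUE)) - mean(sample(constrictive, replace = TRUE))
})

mean(expansive) - mean(constrictive)      # observed difference in means
quantile(boot_diff, c(0.025, 0.975))      # 95% percentile bootstrap CI
```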
The range of plausible effect sizes given our data is wider than for the previous, higher-powered studies using explicit power poses. Our results are consistent with a small, positive effect of expansive postures on felt power as identified by the meta-analysis (d ≈ 0.23 [0.10, 0.38] 95% highest density interval).

Discussion
While inconclusive on their own, our results on felt power are consistent with a small effect size d ≈ 0.2 for expansive versus constrictive postures when using touch interaction on a wall-sized display. More importantly though, we observed a much larger effect, d ≈ 1.5, for discomfort, as participants in the expansive condition were asked to hold their arms stretched out for 90 sec to complete the task. Given the small expected effect size, we find the large increase in discomfort more important and do not recommend attempting to affect users' sense of power through the use of expansive postures on touch-operated wall-sized displays. These considerations played into the design of a second experiment. We identified as most important factors to control: maintaining equal levels of comfort for both postures, and using an objectively quantifiable and continuous behavioral measure instead of self-evaluation.

EXPERIMENT 2: INCLINED TABLETOP
Our second experiment is inspired by experiment 2 from Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. There, participants' incidental posture was imposed by either arranging their tools around a large (0.6 m²) or a small (0.15 m²) workspace. The study by Yap et al. investigated the effect of incidental postures imposed by the different workspaces on people's dishonesty, whereas we applied the paradigm to risk-taking behavior, which is a common behavioral measure in multiple studies on explicit power poses [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF][START_REF] Cesario | Bodies in Context: Power Poses As a Computation of Action Possibility[END_REF][START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF]. These previous studies all gave binary choices to participants, asking them whether they were willing to take a single gamble [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF][START_REF] Cesario | Bodies in Context: Power Poses As a Computation of Action Possibility[END_REF] or to make several risky choices in both the gain and loss domains [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF], using examples taken from Tversky and Kahneman [START_REF] Tversky | The framing of decisions and the psychology of choice[END_REF]. There, participants' binary response was the measure for risk-taking. We opted for a more continuous measure of risk-taking, as it results in a higher resolution of responses, and used the balloon analogue risk task (BART), a behavioral measure of risk-taking propensity [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. We again study one main factor: incidental posture with two levels, expansive and constrictive, implemented as two variations of the same graphical user interface (see Figure 7).
To keep comfort constant across conditions, we used a slightly inclined 60" tabletop display instead of a wall display so that participants in both conditions could rest their arms while performing the task (see Figure 1C and D).

BART: The Balloon Analogue Risk Task
The BART is a standard test in Psychology to measure people's risk-taking behavior in the form of a game [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. The basic task is to pump up 30 virtual balloons using on-screen buttons. In our implementation, two buttons were placed as indicated in Figure 7 and players were asked to place their hands near them. With each pump, the balloon grows a bit and the player gains a point. Points are commonly linked to monetary rewards, and the more players pump up the balloons, the higher their payout. The maximum size of a balloon is reached after 128 pumps. The risk is introduced through a random point of explosion for each balloon, with the average and median explosion point at 64 pumps. A balloon needs to be cashed in before it explodes to actually gain points for that balloon. Participants are only told that a balloon can explode (Fig. 7-D) at any point between the minimum size, i.e., after 1 pump, and the maximum, when it touches the line drawn underneath the pump (see Figure 7-B), and that they need to cash in a balloon before it explodes to gain points.

The Measures
The measure of the BART is the average number of pumps people make on balloons which did not explode, called the adjusted number of pumps. It is used in Psychology as a measure of people's tendency to take risks: with each pump, players have to weigh the risk of the balloon exploding against the possible gain of points [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF][START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF][START_REF] Melissa | The assessment of risky decision making: A factor analysis of performance on the Iowa Gambling Task, Balloon Analogue Risk Task, and Columbia Card Task[END_REF][START_REF] Fecteau | Activation of prefrontal cortex by transcranial direct current stimulation reduces appetite for risk during ambiguous decision making[END_REF]. The theoretically optimal behavior would be to perform 64 pumps on all balloons. It would maximize payout and also lead to 50% exploding balloons. Yet, previous studies found that participants stop on average much earlier [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF].

Adjusted number of pumps
According to a meta-analysis of 22 studies using this measure [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF], the average adjusted number of pumps is 35.6 (SE 0.28). However, the meta-analysis showed that means varied considerably between studies, from 24.60 to 44.10 (with a weighted SD = 5.93). Thus, only analyzing the BART's main measure would probably not be sensitive enough to identify a difference between the studied postures. We account for this by also computing a normalized measure (percent change) and by capturing people's general tendency to take risks as a covariate.
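As a concrete illustration of how this measure is derived from the game data, the following minimal sketch (R) computes the adjusted number of pumps per participant from trial-level data. The data frame and its column names are hypothetical placeholders, not the study's actual logging format.

```r
# Illustrative sketch only: placeholder trial-level BART data (4 participants x 30 balloons).
trials <- data.frame(
  participant = rep(1:4, each = 30),
  balloon     = rep(1:30, times = 4),
  pumps       = sample(1:128, 120, replace = TRUE),
  exploded    = sample(c(TRUE, FALSE), 120, replace = TRUE)
)

# Adjusted number of pumps: mean pumps over balloons that did NOT explode, per participant.
adjusted <- aggregate(pumps ~ participant, data = subset(trials, !exploded), FUN = mean)
names(adjusted)[2] <- "adjusted_pumps"
adjusted
```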
Percent change of pumps The game can be conceptually divided into 3 phases: during the first 10 balloons, players have no prior knowledge of when balloons will explode. This phase has been associated with decision making under uncertainty [START_REF] Melissa | The assessment of risky decision making: A factor analysis of performance on the Iowa Gambling Task, Balloon Analogue Risk Task, and Columbia Card Task[END_REF]. In the second phase, players mostly consolidate the impressions gained in the first phase, whereas the last phase indicates decision making under risk: players have developed intuitions and aim to maximize their payout. While the BART is widely used [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF], little data is available for the individual phases. Most studies only report the main measure, which is averaged over all phases. Still, we know from the original study that the average increase of pumps between the first and the last phase is about 33% [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. Since we hypothesize that a possible effect of incidental posture should occur over the course of the experiment, we expect that it should not be present while pumping up the first balloons. By comparing data from this first phase with data from the last phase, we derive a normalized measure for how people's behavior changed over the course of the experiment (∼10 min). We define this measure, percent change, as follows: % change = (X̄(adj. pumps in phase 3) − X̄(adj. pumps in phase 1)) / X̄(adj. pumps in phase 1), where X̄ denotes the mean over the balloons of a phase. (Recall that the number of pumps required to achieve the maximum size and, most importantly, the number of pumps needed to optimize the payout are unknown to the participant.) Covariate: impulsiveness We additionally tested participants on the BIS-11 Barratt Impulsiveness Scale [START_REF] J H Patton | Factor structure of the Barratt impulsiveness scale[END_REF][START_REF] Matthew S Stanford | Fifty years of the Barratt Impulsiveness Scale: An update and review[END_REF] to capture their general tendencies to react impulsively. The scale is a 30-item questionnaire inquiring about various behaviors such as planning tasks, making decisions quickly, or buying things on impulse. We included it as a covariate because Lejuez et al. reported a correlation with the BART measure (r = 0.28 [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]). Covariate: comfort In light of our findings from experiment 1, we also included an extended questionnaire relating to both physical and mental comfort as well as fatigue (items 1-12 from the ISO 9241-9 device assessment questionnaire [START_REF] Douglas | Testing Pointing Device Performance and User Assessment with the ISO 9241, Part 9 Standard[END_REF]). Participants We recruited a total of 80 participants (42 women, 38 men, mean age 26) in two batches. Similar to experiment 1, we initially recruited 40 participants. A Bayes factor analysis [START_REF] Dienes | Using Bayes to get the most out of non-significant results[END_REF] at that point indicated that our data was not sensitive enough to draw any conclusions, and we decided to increase the total number of participants to 80. As is common in this type of experiment and as suggested by Carney et al.
[START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF], we used a cover story to keep our research question hidden from participants. The study was advertised as a usability study for a touchscreen game. Participants were unaware of the different interface layouts since posture was manipulated between subjects, making it more difficult for them to guess the real purpose of the study. Procedure Similar to experiment 1, participants were alternately assigned in order of arrival to either the constrictive or the expansive condition. After signing an informed consent form for the "usability study", participants were introduced to the tabletop setup and asked to go through the on-screen instructions of the game. They were informed that the amount of their compensation would depend on the number of points they achieved in the game. They then pumped up 30 balloons. Once finished, they filled in a questionnaire on their level of comfort during the game (12 items) and the BIS-11 impulsiveness scale [START_REF] J H Patton | Factor structure of the Barratt impulsiveness scale[END_REF] (30 items). Finally, participants filled in a separate form to receive a cinema voucher for their participation. The value of the voucher was between 13€ and 20€, depending on how many points they accumulated in the game, following the original BART protocol [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. The entire experiment lasted about 20 min. BAYESIAN ANALYSIS We analyze our data using Bayesian estimation following the analysis steps described by Kruschke [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF] for the robust analysis of metric data in nominal groups, with weakly informed skeptical priors which help to avoid inflated effect sizes [START_REF] Kay | Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI[END_REF]. We reuse R code supplied by Kruschke [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF] combined with the tidybayes package for R (by Matthew Kay, github.com/mjskay/tidybayes) to plot posterior distributions. Our analysis setup can be seen as a Bayesian analog to a standard ANOVA analysis, yet without the prerequisites of an ANOVA, normality and equal variances, and with the possibility of accepting the null hypothesis if the posterior credibility for parameter ranges falls into a pre-defined region of practical equivalence (ROPE) [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF]. For example, we could decide that we consider any difference between groups of less than ±5% as too small a difference to be of practical relevance. As we did not decide on a ROPE before data collection, we refrain from using this tool. Most importantly, the outcome of the analysis is a set of distributions for credible ranges of parameter estimates, which is more informative than dichotomous hypothesis testing [START_REF] Kay | Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI[END_REF]. Model Figure 8 shows that for adjusted number of pumps, the distributions from both groups are rather similar and mostly symmetric. For percent change, the data is positively skewed.
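For concreteness, the following is a minimal sketch of how a robust model of this kind (specified formally in the next paragraphs) can be set up from R for JAGS. It follows Kruschke's examples rather than the authors' actual scripts; the toy data, the variable names, and the way the gamma shape/rate are derived from the mode and standard deviation are our assumptions.

```r
# Illustrative Kruschke-style robust group comparison in JAGS; not the authors' script.
library(rjags)

# Toy stand-ins for the real data: one value per participant, posture as group.
y     <- c(rnorm(40, 42, 12), rnorm(40, 43, 12))
group <- factor(rep(c("expansive", "constrictive"), each = 40))

# Gamma shape/rate chosen so the prior's mode is SD(y)/2 and its SD is 2*SD(y).
mode_g <- sd(y) / 2; sd_g <- 2 * sd(y)
rate   <- (mode_g + sqrt(mode_g^2 + 4 * sd_g^2)) / (2 * sd_g^2)
shape  <- 1 + mode_g * rate

model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dt(a0 + a[x[i]], 1 / sigma_y[x[i]]^2, nu)   # robust t likelihood
  }
  for (j in 1:2) {
    a[j]       ~ dnorm(0, 1 / sigma_a^2)               # deflection from the intercept
    sigma_y[j] ~ dgamma(shape, rate)                   # per-group scale
  }
  a0      ~ dnorm(mean_y, 1 / (5 * sd_y)^2)            # intercept scaled to the data
  sigma_a ~ dgamma(shape, rate)
  nu      ~ dexp(1 / 30)                               # heavy-tailed prior on the df
}"

data_list <- list(y = y, x = as.integer(group), N = length(y),
                  mean_y = mean(y), sd_y = sd(y), shape = shape, rate = rate)
jm <- jags.model(textConnection(model_string), data = data_list, n.chains = 3)
update(jm, 10000)                                       # burn-in
samples <- coda.samples(jm, c("a0", "a", "sigma_y", "nu"), n.iter = 50000, thin = 10)
```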
y[i] ∼ T(ν, a0 + a[x[i]], σy[x[i]]) Priors We choose weakly informed skeptical priors. Since previous work on the BART reports large variances between studies [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF], we scale the prior for the intercept a0 based on our data and not on estimates from previous work. For the deflection parameters a[x[i]], we choose a null hypothesis of no difference between groups expressed through a normally distributed prior centered at 0 with individual standard deviations per group. For the scale parameters σy and σa we assume a gamma distribution with shape and rate parameters chosen such that its mode is SD(y)/2 and its standard deviation is 2·SD(y) [46, page 560f]. The regularizing prior for the degrees of freedom ν is a heavy-tailed exponential. The priors are: a0 ∼ N(X̄(y), (5·SD(y))²); a[x[i]] ∼ N(0, σa); σy ∼ G(β, γ); σa ∼ G(β, γ); ν ∼ Exp(1/30). Fitting the model We fit the model using Markov chain Monte Carlo (MCMC) sampling in JAGS [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF]. We ran three chains with a 10,000-step burn-in and a thinning of 10 for a final chain length of 50,000. Convergence of chains was assessed through visual inspection of diagnostic plots such as trace plots, density plots, and autocorrelation plots, as well as by checking that all parameters passed the Gelman-Rubin diagnostic [START_REF] Gelman | Inference from Iterative Simulation Using Multiple Sequences[END_REF]. The results presented in the next section are computed from the respective first chains. RESULTS The outcome of our analysis is a set of posterior distributions for the parameters in our model. These distributions indicate credible values for the parameters. One way of representing these is to plot the density of these distributions together with a 95% highest density interval (HDI) as so-called eye plots [START_REF] Kay | Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI[END_REF] (as done in Figures 9 & 11). Any value within an HDI is more credible than all values outside an HDI. The width of an HDI is an indicator of the certainty of our beliefs: narrow intervals indicate high certainty in estimates whereas wide ones indicate uncertainty. Finally, not all values within an HDI are equally credible, which is indicated through the density plot around the HDI: values in areas with higher density have a higher credibility than values in less dense areas. We now present our results by first analyzing the posterior parameter estimates for our Bayesian model for both the standard BART measure and our percent change measure (summarized in Figure 9) and then analyzing contrasts pertaining to our research question as to whether incidental posture had an influence on people's behavior. Posterior Parameter Estimates Posterior distributions for our parameter estimates are summarized in Figure 9. The intercept, a0, indicates the estimate for the overall mean across both groups, whereas the group-wise estimates, a0 + a[x[i]], show distributions for estimates of the means split by expansive-constrictive posture. The difference plots in the middle indicate whether a group differs from the overall mean, and the third plot to the right indicates the difference between the two groups. Standard BART Measure The results for the standard BART measure are shown in Figure 9-left.
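As a pointer for readers less familiar with HDIs: given posterior draws such as those produced by the sketch above, a 95% HDI is simply the shortest interval containing 95% of the draws. A small, self-contained helper (ours, not the authors' plotting code, which relies on tidybayes):

```r
# Shortest interval containing `mass` of the posterior draws (a basic HDI).
hdi <- function(draws, mass = 0.95) {
  sorted <- sort(draws)
  n      <- length(sorted)
  k      <- ceiling(mass * n)                 # number of draws inside the interval
  widths <- sorted[k:n] - sorted[1:(n - k + 1)]
  i      <- which.min(widths)                 # left edge of the narrowest window
  c(lower = sorted[i], upper = sorted[i + k - 1])
}

# Example: credible difference between the two posture deflections,
# assuming a[1] and a[2] were monitored as in the earlier sketch.
draws  <- as.matrix(samples)                  # stack the chains into one matrix
diff_a <- draws[, "a[2]"] - draws[, "a[1]"]
hdi(diff_a)                                   # 95% HDI of the group difference
median(diff_a)                                # point estimate
```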
For the adjusted number of pumps we find a shared intercept a0 of 42.6 with a [39.1, 46.0] 95% highest density interval (HDI). This value is within the upper range of previous studies using the BART, which varied between 24.60 and 44.10 [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF]. The estimates for the group-wise means for the two body postures are both close to the overall mean, which is confirmed by the HDIs for the credible differences to the intercept as well as the difference between postures: point estimates are all within the range of [-1,1] from the intercept with HDIs smaller than [-5,5]. Percent Change Measure The results for the percent change measure are illustrated in Figure 9-right. For the percent change measure we find an overall intercept a0 of 24.7% ([15.0, 34.7] 95% HDI), which is below the average increase of 33% found by Lejuez et al. [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. Similar to the standard BART measure, we find very small differences for the two posture groups, which are within [-0.5, 0.5] for the point estimates with 95% HDIs smaller than [-9,9]. Not only is the credible range for the estimates considerably larger than for the BART measure, but also the posterior distribution for the difference between the two postures is rather uncertain, with a wide HDI spanning [-17.3, 15.8]. Effects and Interactions with Covariates We captured comfort, impulsiveness [START_REF] J H Patton | Factor structure of the Barratt impulsiveness scale[END_REF], and gender as covariates. Both comfort and gender showed only negligible variance both across postures and within groups. We therefore only report the analysis for impulsiveness in more detail. Impulsiveness To test for a possible influence of the impulsiveness covariate, we split participants into either "high risk-takers" (BIS-11 index >= 64) or "low risk-takers" (BIS-11 index < 64, where 64 is the median value within our sample population). This split leads to different profiles between the resulting four groups, as Figure 10 indicates. Body posture accounts for some of the uncertainty, but similarly for both conditions. For high impulsiveness indices, positive values are slightly more credible than negative values, and vice versa. It seems most credible that the interaction parameters crossing body posture and impulsiveness account for most of the observed differences. To analyze the data for this measure taking the covariate into account, we extend our previous one-factor model with a second factor including an interaction term as follows: y[i] ∼ T(µ[i], σy[x1[i], x2[i]], ν) with µ[i] = a0 + a1[x1[i]] + a2[x2[i]] + a1a2[x1[i], x2[i]]. Priors were chosen skeptically as detailed before. Results. The results are summarized visually in Figure 11. We find again almost completely overlapping credible intervals for the posture factor, centered within [-0.5,0.5] with HDIs smaller than [-10,10]. The impulsiveness factor also played a rather negligible role. Surprisingly, we find an interaction between posture and impulsiveness: it appears that body posture affected low risk-takers as predicted by Yap et al., whereas it seems to have reversed the effect for high risk-takers. However, this part of the analysis was exploratory and a confirmatory study would be needed to verify this finding.
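In the JAGS sketch given earlier, the two-factor extension amounts to changing only the linear predictor and the scale indexing; the corresponding likelihood fragment (our illustration, with posture as x1, the impulsiveness split as x2, and the interaction as a 2×2 array of deflections) would read:

```r
# Fragment of the extended JAGS model string (priors and monitoring as before).
extended_likelihood <- "
  for (i in 1:N) {
    mu[i] <- a0 + a1[x1[i]] + a2[x2[i]] + a1a2[x1[i], x2[i]]
    y[i]  ~ dt(mu[i], 1 / sigma_y[x1[i], x2[i]]^2, nu)
  }
"
```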
Additionally, the two experimental groups were slightly unbalanced, that is, the BIS scores in the expansive group had a slightly lower mean than in the constrictive group (µ exp = 63.2, µ cons = 66.0, [-7.1, 1.5] 95% CI on difference). DISCUSSION We first summarize our findings and then discuss them in light of our research question and approach. Summary of our findings We ran two experiments designed to identify possible effects of incidental power poses on the sense of power (experiment 1) and on risk-taking behavior (experiment 2). While multiple replication attempts on explicitly elicited power poses had failed to show reliable effects for behavioral effects and only a small effect on felt power, it remained unclear whether the effects for incidental power poses, reported by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] would replicate and whether incidental power poses are important to consider when designing user interfaces. Experiment 1 The first experiment found a considerably larger effect for discomfort (d ≈ 1.5 [0.8, 2.3]) than for felt power (d ≈ 0.4 [-0.2, 1.1]). On its own the first experiment thus failed to find the effect expected based on Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF], and the optimism for incidental power poses generated from that study is not supported by our findings. Our results are however consistent with a much smaller effect of d ≈ 0.2 as was recently suggested by a meta-analysis [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF]. Thus, we can at best conclude that a small effect might exist. In practice, the effect remains difficult to study as the small effect size requires large participant pools to reliably detect the effect. Such large participant pools are rather uncommon in HCI [START_REF] Caine | Local Standards for Sample Size at CHI[END_REF] with the exception of crowdsourced online experiments where the reduced experimental control might negatively effect the signal to noise ratio of an already small effect. Besides such practical considerations, the very large effect on (dis)comfort severely limits the range of acceptable expansive interfaces. Experiment 2 The second experiment found that incidental body posture did not predict participants' behavior. As with experiment 1, this is consistent with the findings of the recent replications which elicited postures explicitly; none of those were able to detect an effect on behavior either. Again, a large effect as reported by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] is highly unlikely in light of our results. We thus conclude that incidental power poses are unlikely to produce measurable differences in risk-taking behavior when tested across a diverse population. An exploratory analysis of interaction effects on the normalized measure suggests that an effect of body posture as predicted by Yap et al. could be observed within the group of participants showing low BIS-11 scores, while the effect was reversed for participants with high BIS-11 scores. Should this interaction replicate, then it would explain why overall no effect for the expansiveness of postures can be found. 
However, a confirmatory study verifying such an interaction is needed before one can draw definitive conclusions and possibly amend design guidelines. Relevance of Power Poses for HCI Overall we found an apparent null or at best negligible effect of body postures on behavior. For a user interface targeted at diverse populations, it thus seems futile to attempt to influence people's behavior through incidental postures. As a general take-away, we recommend avoiding both overly expansive as well as constrictive postures and to rather focus on factors such as general comfort or efficiency as appropriate to the purpose of an intended user interface. In some previous work it was argued that a social interaction would be necessary to observe a power pose effect [START_REF] Cesario | Bodies in Context: Power Poses As a Computation of Action Possibility[END_REF][START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF]. While our experiments did not investigate this claim, recent work by Cesario and Johnson [START_REF] Cesario | Power Poseur: Bodily Expansiveness Does Not Matter in Dyadic Interactions[END_REF] provides evidence against this claim. It thus seems equally unlikely that power poses would be of concern for social user interfaces. However, our research only concerned power poses and tested downstream effects, that is, whether posture manipulations led to changes in behavior. We cannot draw any conclusions about the other direction: for example, posture seems to be indicative of a user's engagement or affective state [START_REF] Savva | Continuous Recognition of Player's Affective Body Expression as Dynamic Quality of Aesthetic Experience[END_REF]. Need for Replication Concerning the interaction observed in our second experiment, we want to again caution that this finding needs to be replicated to confirm such an interaction. The analysis that brought forward this finding was exploratory, and our experiment included only 80 participants -more than usual inperson experiments in HCI [START_REF] Caine | Local Standards for Sample Size at CHI[END_REF] but less than the failed replications of explicitly elicited power poses. We suggest that replications could focus on specific, promising or important application areas where effects in different directions might have an either desirable or detrimental impact on people's lives, and participants should be screened for relevant personality traits, such as impulsiveness or the "the big-five" [START_REF] Lewis R Goldberg | An alternative" description of personality": the big-five factor structure[END_REF], to examine interaction effects with these covariates. Replication is still not very common within HCI [START_REF] Kasper Hornbaek | Is Once Enough?: On the Extent and Content of Replications in Human-Computer Interaction[END_REF] despite various efforts to encourage more replications such as the repliCHI panel and workshops between 2011 and 2014 (see www.replichi.com for details and reports) as well as the "repliCHI badge" given to some CHI articles published at CHI'13/14. Original results are generally higher valued than confirmations or refutations of existing knowledge. A possible approach to encourage more replications could be through special issues of HCI journals. 
For example, the (Psychology) journal that published the special issue on power poses took a progressive approach to encourage good research practices, such as preregistered studies [START_REF] Cockburn | HARK No More: On the Preregistration of CHI Experiments[END_REF] or replications, by moving the review process before the collection of data, thereby removing possible biases introduced by a study's outcomes [START_REF] Kai | How can preregistration contribute to research in our field?[END_REF]: only the introduction, background, study designs, and planned analyses are sent in for review, possibly revised upon reviewer feedback, and only once approved, the study is actually executed and already guaranteed to be published, irrespective of its findings. We believe such an approach could be equally applied in HCI to work towards a conclusive evidence base for research questions the community deems interesting and important. Reflections on our Approach Power poses are an example of a construct from Psychology that has received extensive scientific and public coverage; both soon after publication and once the results of the studies were challenged. Transferring this construct to HCI raised several challenges: (i) practical relevance: identifying which areas of HCI could be impacted by this construct, (ii) ecological validity: operationalizing the construct for HCI such that the resulting manipulations and tasks resemble "realistic" user interfaces which could be encountered outside the lab, and (iii) respecting the boundary conditions within which the construct can be evoked. Concerning (i), the literature on incidental power poses provides a rich set of behaviors such as cheating and risk-taking. We gave examples in the background section for areas relevant to HCI -education and risky decision-making -in which an effect of power poses would be pivotal to understand. Concerning (ii) and (iii), the challenges were less easy to address. Carney et al. argued in their summary of past research on explicitly elicited postures [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF] that replications might fail if the postures are not replicated closely enough. The experiments by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] did not carefully control the postures but only modified the environment. So it was unclear whether we would need to consider a wide set of gestures and poses and how to find out which of those instantiated the construct well. We addressed these challenges by considering the relevance for HCI as the most important experiment design criterion: since an interface designer has very little influence on users' posture beyond the positioning of interface elements, we decided to consider power poses as irrelevant for HCI if they require very specific positioning of users. CONCLUSION We investigated whether incidental postures, in particular constrictive and expansive postures, influence how users behave in human-computer interaction. The literature raised the expectation that such postures might set about cognitive and physiological reactions, most famously from findings by Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] as well as Yap et al. 
[START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. While the findings from Carney et al. on explicitly elicited power poses did not hold up to replications, the experiments by Yap et al. had so far not been replicated. We reported findings from two experiments which conceptually replicated experiments on incidental power poses in an HCI context. We observed an at best small effect for felt power and an at best negligible effect for a behavioral measure for risk-taking. Most surprisingly, an exploratory analysis suggested that an interaction with a personality trait, impulsiveness, might reverse the hypothesized effect for posture manipulations. However, replications controlling for this interaction are needed to determine whether it reliably replicates and thus poses a relevant design consideration for HCI. Overall we conclude that incidental power poses are unlikely to be relevant for the design of human-computer interfaces and that factors such as comfort play a much more important role. To support an open research culture and the possibility to replicate our work or to reanalyze our data, we share all experimental data and software as well as all analysis scripts at github.com/yvonne-jansen/posture. Figure 2. Left: an air traffic controller workstation. Photo courtesy: US Navy 100714-N-5574R-003 CC-BY 2.0. Right: a Bloomberg terminal featuring a double screen controlled by keyboard and mouse. Photo courtesy: Flickr user Travis Wise CC-BY 2.0. Figure 3. Self-reported sense of power. Error bars indicate 95% bootstrap confidence intervals. Figure 4. Self-reported sense of "feeling in charge". Error bars indicate 95% bootstrap confidence intervals. Figure 5. Level of discomfort while performing the task. Error bars indicate 95% bootstrapped confidence intervals. Figure 6. Extended Bayesian meta-analysis from Gronau et al. estimating effect sizes of felt power. Individual studies show fixed effect estimates, meta-analysis items indicate mixed-model estimates. The two bottom items include our data. Error bars indicate 95% highest density intervals. Figure 7. Screenshots of our implementation of the BART showing (A) the initial and (B) the maximum size of the balloon in the constrictive condition as well as (C) the initial size and (D) the explosion feedback in the expansive condition. The circles represent the buttons used to pump up the balloon. Figure 8. Density plots of the raw data for both measures. We model our data through a robust linear model using as likelihood a heteroskedastic scaled and shifted t distribution with degrees of freedom ν [46, page 573ff]. We assume our data to have a common intercept a0 from which groups may differ, captured by parameter a[x[i]] where x[i] indicates group membership. The model assumes independent scale parameters per group, σy[x[i]]. Figure 9. Eye plots of the posterior distributions of parameters with 95% HDI (highest density interval). Left: parameter estimates for the standard BART measure; right: parameter estimates for the percent change measure. Figure 10. Density plots of the raw data for both measures with data split by condition and impulsiveness covariate. (The estimate for the two factors combined is 25.2% [15.2, 35.7].) Figure 11.
Summary of our two-factor analysis for percent change indicating the highest density intervals for the different components of the extended linear model. As Figure 10 indicates, for the adjusted # of pumps measure the split yields rather similar profiles across groups; for the percent change measure, however, the split separates groups with seemingly different profiles. Footnotes: This experiment ran before data from failed replications were available; we chose d = 0.73 based on Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. [START_REF] Bezerianos | The Vacuum: Facilitating the Manipulation of Distant Objects[END_REF] We used the BCa method, which corrects the bootstrap distribution for bias (skew) and acceleration (non-constant variance) [START_REF] Thomas | Bootstrap Confidence Intervals[END_REF]. ACKNOWLEDGMENTS This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 648785). We thank Dan Avram and Peter Meyer for their support in running the experiments, Sebastian Boring, Gilles Bailly, Emmanouil Giannisakis, Antti Oulasvirta, and our reviewers for feedback on various drafts, and Pierre Dragicevic for detailed comments.
66,874
[ "2825" ]
[ "541937", "1003532", "460939" ]
01758581
en
[ "chim" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758581/file/2018-055.pdf
Glenna L Drisko Christophe Gatel Pier-Francesco Fazzini Alfonso Ibarra Stefanos Mourdikoudis Vincent Bley Katia Fajerwerg Pierre Fau Myrtil Kahn Air-stable Anisotropic Monocrystalline Nickel Nanowires Characterized using Electron Holography g Laboratoire plasma et conversion d'énergie, UMR 5213, Université de Toulouse, CNRS, Toulouse France. Abstract: Nickel is capable of discharging electric and magnetic shocks in aerospace materials thanks to its conductivity and magnetism. Nickel nanowires are especially desirable for such an application as electronic percolation can be achieved without significantly increasing the weight of the composite material. In this work, single-crystal nickel nanowires possessing a homogeneous magnetic field are produced via a metal-organic precursor decomposition synthesis in solution. The nickel wires are 20 nm in width and 1-2 μm in length. The high anisotropy is attained through a combination of preferential crystal growth in the <100> direction and surfactant templating using hexadecylamine and stearic acid. The organic template ligands protect the nickel from oxidation, even after months of exposure to ambient conditions. These materials were studied using electron holography to characterize their magnetic properties. These thin nanowires display homogeneous ferromagnetism with a magnetic saturation (517±80 emu cm -3 ), which is nearly equivalent to bulk nickel (557 emu cm -3 ). Nickel nanowires were incorporated into carbon composite test pieces and were shown to dramatically improve the electric discharge properties of the composite material. KEYWORDS. Electron holography, Electric discharge, Ligand stabilization, Magnetism, Nanowires, Nickel Lightning can and does strike the same place twice. In the case of airplanes, lightning hits each plane on average once per year and enters almost exclusively through the nose. Spacecraft are currently built of carbon fiber-reinforced composites, a material that is lightweight and has desirable mechanical properties, however which suffers from low electrical conductivity. Damage can be caused by low attenuation of electromagnetic radiation and electrostatic discharge (i.e. lightning strikes), 1 creating a security risk in the spacecraft and requiring expensive repairs. Typically, aluminum or copper are incorporated into the carbon fiber-reinforced composites in order to quickly dissipate the charge. However, copper suffers from oxidation and aluminum from galvanic corrosion. Nickel can effectively dissipate concentrated magnetic and electrical fields, it is resistant to extensive oxidation thanks to the natural formation of a passivating oxide, it has a reasonably low density and is comparatively inexpensive. Conductive Composites sells nickel nanostrands TM for aeronautics applications, which have been proven to effectively shield composites from electromagnetic interference and electrostatic discharge-induced damage even after 2 million cycles of fatigue loading. 1 Nickel nanostructures have been synthesized in a variety of shapes and sizes by employing several chemical protocols, [2][3][4][5] yielding nanomaterials with various physical properties. However, this current report is the first solution based synthesis of individual monocrystalline nanowires. Previously, monocrystalline nickel nanowires have been created via electrodeposition using porous templates, with the smallest nanowire diameter produced to date being 50 nm. 
6 A similar technique has been used to produce Au/Ni composite wires using a porous template with a 40 nm diameter. 7 Solution chemistry protocols have produced isotropic nanoparticles, 8 short nanorods, 4 a variety of other structures 2 and polycrystalline nanowires. 9,10 Monocrystallinity is important because conductivity is related to the number of grain boundaries, as grain boundaries are a barrier to electrical transport. 11 Moreover, a protective layer of nickel oxide forms typically upon exposure to air. Oxidized nickel can be either non-magnetic or antiferromagnetic, radically decreasing the magnetization values compared to those of pure fccnickel. 12 Long monocrystalline wires of metallic nickel are ideal materials for applications that require high electrical conductivity and magnetization saturation. We report the metal-organic synthesis of highly anisotropic nickel nanowires having no grain boundaries. The Ni nanowires are obtained through the reduction of a nickel stearate complex using hydrogen gas at 150 °C, in the presence of hexadecylamine and stearic acid (experimental details in SI). The nanowires grow along a particular crystallographic axis (i.e. c), forming a singlecrystalline nanowire for the first time using solution chemistry techniques. Using the appropriate relative concentrations of ligand and nickel precursor allowed us to increase the length of the nanowires and to transition away from nanorod-based sea urchin structures. We investigate the magnetic properties of these anisotropic structures using off-axis electron holography and discuss the correlation of such properties with the nanowire structure. The organic ligand layers capping the nickel nanowires protected them from oxidation. The nickel nanostructures appear either as sea urchin-like structures or as highly anisotropic nanowires, depending on the synthesis conditions (Figure 1, a movie showing the tomography can be found as SI). Anisotropic structures can result from templating or from a difference in the rate of crystal growth along a certain axis. A difference in crystallographic growth rate can occur to minimize the surface energy 11 or from capping certain facets with surfactants, ions or solvent. [14][15][16] The sea urchin-like nanostructures are collections of individual nanowires growing from a single nucleus. 17 The predominance of a wire versus urchin morphology can be explained using nucleation and growth kinetics, as has been seen in CoNi nano-objects. 18 High nucleation and growth rates led to CoNi nanowires, where slow nucleation and fast growth led to a sea urchin morphology. The same likely applies to the nickel nanostructures presented here. When the nickel precursor and ligand were highly diluted, nucleation was favored over growth and spherical particles were produced (Figure 2). By decreasing the quantity of solvent, growth was favored over nucleation, producing a dense sea urchin nanostructure (Figure 2b). Upon further concentrating the solution, a less highly branched nanostructure was observed (Figure 2c), which cannot be explained with nucleation and growth kinetics, but rather to surfactant organization and templating effects. The stearic acid ligand played a major role in the formation of anisotropic nanowires. In the absence of stearic acid, spherical particles were produced (Figure 1b). By increasing the concentration of stearic acid, the anisotropy of the nanoparticles increased (Fig. 1cd). 
In this later case, branched nanowires were still present, but unbranched nanowires were commonly found. The nanowires were about 20 nm in width and up to 2 μm in length. Thus, both crystal growth kinetics and surfactant templating seem responsible for the nickel nanostructure morphology. The nickel nanowires terminate with a square pyramid tip, with faces of (111) crystallographic orientation, the natural extension of the <001> crystallographic lattice. The sea urchin shape with capped tips has been previously observed in CoNi nanostructures. 17,18 The capped tips can be explained using growth kinetics. 18 Towards the end of the synthesis, the nickel precursor is nearly consumed and the consequential drop in its concentration slows the particle growth rate significantly. Similarly for nickel nanoparticle growth, at the end of the reaction the extremely dilute conditions allow simultaneous growth along other axes, thus generating an arrowhead. The nickel nanowires are monocrystalline, thus they grew continuously from a nucleus until the nickel precursor depleted (Figure 3a). By measuring the interplanar distances in the diffraction pattern, we determined that the nickel nanorods grew in the <001> direction, and thus the faces present (100) or (111) crystallographic planes. These planes minimize the surface energy. 17 No grain boundaries, crystallographic defects or nickel oxide are visible in the microscopy images (Figure 3a). Normally nickel forms a 2 nm passivating nickel oxide shell upon exposure to air. 20 If a 2 nm crystalline oxide shell were present, the diffraction pattern would show differences in the interplane distances between Ni and NiO (Figure 3b inset). The diffraction pattern, which is characteristic of a single crystal lattice, was obtained for the 1085 nm long segment of the nanowire shown in Figure 3b and is representative of the other nanowires analyzed. Thus, the representative TEM images and the associated diffraction pattern prove the nanowires are monocrystalline, and lack a NiO passivating layer. It seems that the organic ligands used are highly effective at protecting the surface from oxidation. Using TGA, it was found that the organic ligands composed 6.8 wt% of the sample. Taking the surface area of the Ni nanowires as 22.6 m 2 g -1 and assuming that the hexadecyl amine and the stearic acid form ion pairs, 22 the surface coverage is 6.9 ligand molecules/nm 2 . This is an extremely high charge, indicating that there is probably two or more layers of ligands protecting the nanowire. SQUID measurements show that the material presents ferromagnetic properties (Figure 3c). The saturation magnetization at room temperature is equal to 460 emu cm -3 , slightly lower than bulk nickel (557 emu cm -3 ). This lower value may be due to the coordination of the stearic acid to the nickel surface. 4 SQUID measurements of the magnetism at 2 K and ambient temperature also confirm the absence of oxidation. An oxide layer around nickel modifies its magnetic properties. 22 Nickel is ferromagnetic, where nickel oxide is antiferromagnetic. A thin shell of NiO around a Ni core generates a temperature dependent exchange bias, observed as a horizontal shift of the magnetic hysteresis loop. 20,23 As the hysteresis loop corresponding to the field-cooled measurement is not shifted along the H-axis, there is no detectable amount of NiO around the Ni nanowires. The nickel nanowires show no evidence of surface oxidation even months after preparation, stored under ambient conditions. 
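As a rough consistency check on the ligand coverage quoted above (our own back-of-the-envelope estimate, assuming a 1:1 hexadecylamine:stearic acid mixture with an average molar mass of about 263 g mol⁻¹ and taking the 22.6 m² g⁻¹ as referring to one gram of sample):

```latex
n_{\mathrm{ligand}} \approx \frac{0.068\,\mathrm{g}}{263\,\mathrm{g\,mol^{-1}}}\times 6.022\times 10^{23}\,\mathrm{mol^{-1}} \approx 1.6\times 10^{20}\ \text{molecules},
\qquad
\frac{1.6\times 10^{20}}{22.6\,\mathrm{m^{2}}} = \frac{1.6\times 10^{20}}{2.26\times 10^{19}\,\mathrm{nm^{2}}} \approx 6.9\ \mathrm{nm^{-2}},
```

in agreement with the reported value of 6.9 ligand molecules/nm² and hence with the conclusion that more than a single ligand layer protects the wires.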
Magnetic and electric maps can be obtained by electron holography, which measures the phase shift of the electron beam after interaction with the electromagnetic field of the sample. Electron holography thus provides the high spatial resolution known to electron microscopy and a quantitative analysis of the local magnetic configuration (Figure 4). The exact magnetic configuration can thus be correlated to the structural properties of a nanostructure, such as the crystal structure, grain boundaries, geometry, and defects. Electron holography measurements can be used to reconstruct the 3D geometry of the nano-object. In our case, these correspond to what was observed with electron tomography images (movie in supplementary information). Electron holography proved that the nickel nanowires are ferromagnetic with a magnetization laid along the nanowire axis due to shape anisotropy (Figure 4). 24,25 An off-axis electron holography experiment in the Lorentz mode was performed using a Hitachi HF 3300C microscope operating at 300 kV and achieving a 0.5 nm spatial resolution in a field-free magnetic environment (less than 10 -3 T). All the holograms were recorded in a 2 biprism configuration and the fringe spacing was set to 1.1 nm in this study. Phase and amplitude images were extracted from the holograms using homemade software with a spatial resolution of 4 nm. From the measured magnetic phase shift of 0.3 rad, we obtain a Ni magnetization of about 0.65±0.1 T, i.e. 517±80 emu cm -3 in agreement with values obtained from SQUID. The whole nanowire demonstrated a homogeneous magnetism, although some nanowires exhibited domain walls where the magnetism changed direction. The domain walls show 180° angular displacement and may have been nucleated by saturating the sample during observation. The domain walls were found to exist at the thinnest part of the nanowire, bearing in mind that the nanowire is monocrystalline, but slightly irregular in width. The domain walls were in the form of pure transverse walls, with no magnetization induction observed in the very center of the domain wall. At this center the magnetization is either parallel to the +Z or -Z direction, as the electron phase shift is only sensitive to the components perpendicular to the electron beam. Vortex states are absent, even in the nanowire arrowheads. The anisotropy of the nanowire is known to cause spin alignment in plane with the wire axis, creating a uniform magnetic state. 26 To study the electric dissipation of the nickel nanowires, they were dispersed in a polyamide epoxy resin at 0.5, 1 and 5 wt% relative to the quantity of resin, and then infiltrated into a carbon tissue (see supporting information for details). This composite was cured at 80 °C under vacuum, and then cut into test pieces using micromilling (Figure 5b, Figure S1). Potential decay measurements (Figure 5a, Figure S3) were performed on these test pieces to study how an applied surface charge is dissipated by the surface of the material. We can see in Figure 5a that the charge dissipation occurs much more quickly when nickel nanowires are incorporated into the resin relative to the non-doped carbon composite, which has a much higher concentration of electrical charge. The quantity of charge at the beginning of the measurement is already inferior for the nickel loaded samples, as the charge was largely dissipated during the charging phase. 
The infiltration of the nickel-charged resin into the tissue was not perfectly homogeneous, as can be seen in Figure 5c, which led to inhomogeneities in the dissipation measurements. However, the measured trend was constant: with 5 wt% nickel nanowire loading, the dissipation was much more efficient and complete within around 1 min. In conclusion, we report the first solution-based synthesis of monocrystalline nickel nanowires. The nickel nanowires are 20 nm in diameter and up to 2 μm in length, and are synthesized via the decomposition of metal-organic compounds under air-free and water-free conditions. These nanostructures nucleated and then grew progressively in the <100> direction, where the anisotropy results from a combination of crystal growth kinetics and surfactant templating. There are no grain boundaries within the nanostructure. However, the nanowires are not perfectly homogeneous in width and the thinner portions are susceptible to the formation of magnetic domain walls. Further experiments will show whether the magnetic domain wall was nucleated during observation or whether it was naturally present within the wire. The intensity of the magnetic response is constant and does not show any vortexes. We are currently studying the aging properties of these nickel nanowires in the aerospatial carbon composite test pieces, to study the dissipation behavior upon electric and magnetic shocks with time and under temperature and humidity variations. Supporting Information. A description of the experimental methods used for nickel nanowire growth and characterization using microscopy, magnetic measurements and electron holography experiments, the fabrication of test pieces and measurement of their electric dissipation (PDF). A movie showing the 3D tomography of a nickel nanowire (movie clip). The following files are available free of charge. Corresponding Author * Glenna Drisko, ICMCB, [email protected]; Myrtil Kahn, LCC, [email protected] Figure 1 . 1 Figure 1. (a) TEM tomographic image of a nickel nanowire with shadows projected in the xy, xz Figure 2 . 2 Figure 2. Nickel nanostructures prepared from nickel stearate dissolved in anisole at a Figure 3 . 3 Figure 3. (a) High resolution transmission electron microscopy image showing the continuity of Figure 4 . 4 Figure 4. Electron holography of a single, isolated nickel nanowire showing: (a) The mean inner Figure 5 . 5 Figure 5. (a) The measured resistance of the carbon composite with variable mass loading of Acknowledgment Stéphanie Seyrac, Jean-François Meunier and Lionel Rechignat provided technical support. Didier Falandry from CRITT mécanique et composites Toulouse prepared composite carbon samples. Funding Sources Financial support was provided by the RTRA Sciences et Technologies pour l'Aéronautique et l'Espace. GLD was supported while writing this manuscript by the LabEx AMADEus (ANR-10-LABX-42) in the framework of IdEx Bordeaux (ANR-10-IDEX-03-02); the Investissements d'Avenir program is run by the French Agence Nationale de la Recherche. A.I. thanks the Gobierno de Aragón (Grant E81) and Fondo Social Europeo.
17,125
[ "736046", "11804", "170999", "969909", "970039", "753888", "753944" ]
[ "525101", "461", "519177", "43574", "171197", "531586", "405645", "461", "461", "461" ]
00175884
en
[ "phys" ]
2024/03/05 22:32:10
2007
https://hal.science/hal-00175884/file/HIREL_2007.pdf
Pierre Hirel Sandrine Brochard Laurent Pizzagalli Pierre Beauchamp Effects of temperature and surface step on the incipient plasticity in strained aluminium studied by atomistic simulations Keywords: computer simulation, aluminium, surfaces & interfaces, dislocations, nucleation come Effects of temperature and surface step on the incipient plasticity in strained aluminium studied by atomistic simulations The study of mechanical properties takes a new and more critical aspect when applied to nanostructured materials. While plasticity in bulk systems is related to dislocations multiplying from pre-existing defects, such as Franck-Read sources [START_REF] Hirth | Theory of dislocations[END_REF], nanostructured materials are too small for such sources to operate, and their plasticity is more likely initiated by dislocations nucleation from surfaces and interfaces [START_REF] Albrecht | Surface ripples, crosshatch pattern, and dislocation formation : cooperating mechanisms in lattice mismatch relaxation[END_REF][START_REF] Xu | Homogeneous nucleation of dislocation loops under stress in perfect crystals[END_REF][START_REF] Brochard | Grilhé Dislocation nucleation from surface steps: atomistic simulation in aluminum[END_REF][START_REF] Godet | Theoretical study of dislocation nucleation from simple surface defects in semiconductors[END_REF]. In particular, nucleation from grain boundaries is of great interest for the understanding of elementary mechanisms occuring in work hardening of nano-grained materials [START_REF] Spearot | Nucleation of dislocations from [001] bicrystal interfaces in aluminum[END_REF][START_REF] Swygenhoven | Atomic mechanism for dislocation emission from nanosized grain boundaries[END_REF][START_REF] Yamakov | Length-scale effects in the nucleation of extended dislocations in nanocrystalline al by molecular dynamics simulation[END_REF][START_REF] Yamakov | Dislocation processes in the deformation of nanocrystalline aluminium by moleculardynamics simulation[END_REF]. The mechanisms involving the nucleation of dislocations from crack tips are also of great importance to account for brittle to ductile transition in semiconductors [START_REF] Cleri | Atomic-scale mechanism of cracktip plasticity : dislocation nucleation and crack-tip shielding[END_REF][START_REF] Zhou | Large-scale molecular dynamics simulations of three-dimensional ductile failure[END_REF][START_REF] Zhu | Atomistic study of dislocation loop emission from a crack tip[END_REF]. In epitaxially-grown thin films, misfit induces a strain and can lead to the formation of dislocations at interfaces [START_REF] Ernst | Interface dislocations forming during epitaxial growth of gesi on (111) si substrates at high temperatures[END_REF][START_REF] Wu | The first stage of stress relaxation in tensile strained in 1-x ga x as 1-y p y films[END_REF][START_REF] Trushin | Surface instability and dislocation nucleation in strained epitaxial layers[END_REF]. The presence of defects in a surface, such as steps, terraces or hillocks, can also initiate plasticity [START_REF] Xu | Analysis of dislocation nucleation from a crystal surface based on the peierls-nabarro dislocation model[END_REF]. 
In particular, experimental and theoretical investigations have established that stress concentration near surface steps facilitates the nucleation of dislocations from these sites [START_REF] Brochard | Grilhé Stress concentration near a surface step and shear localization[END_REF][START_REF] Zimmerman | Surface step effects on nanoindentation[END_REF]. Dislocations formation in such nanostructures changes their mechanical, electrical, and optical properties, and then may have a dramatic effect on the behaviour of electronic devices [START_REF] Carrasco | Characterizing and controlling surface defects[END_REF]. Hence, the understanding of the mechanisms initiating the formation of dislocations in these nanostructures is of high importance. Since these mechanisms occur at small spatial and temporal scales, which are difficult to reach experimentally, atomistic simulations are well suited for their study. Face-centered cubic metals are first-choice model materials, because of their ductile behaviour at low temperatures, involving a low thermal activation energies. In addition, the development of semi-empirical potentials for metals has made possible the modelling of large systems, and the accurate reproduction of defects energies and dislocation cores structures. Aluminium is used here as a model material. In this study we investigate the first stages of plasticity in aluminum f.c.c. slabs by molecular dynamics simulations. Evidence of the role of temperature in the elastic limit reduction and in the nucleation of dislocation half-loops from surface steps is obtained. Steps in real crystals are rarely straight, and it has been proposed that a notch or kinked-step would initiate the nucleation of a dislocation half-loop [START_REF] Pirouz | Partial dislocation formation in semiconductors: structure, properties and their role in strain relaxation[END_REF][START_REF] Edirisinghe | Relaxation mechanisms in single in x ga 1-x as epilayers grown on misoriented gaas( 1 11)b substrates[END_REF]. This is investigated here by comparing the plastic events obtained from either straight and non-straight steps. Our model consists of a f.c.c. monocrystal, with two {100} free surfaces (Fig. 1). Periodic boundary conditions are applied along the two other directions, X= [0 11] and Z= [011]. On one {100} surface, two opposite, monoatomic steps are built by removing atoms. They lie along Z, which is the intersection between a {111} plane and the surface. Such a geometry is therefore well suited to study glide events occuring in {111} planes. We investigate tensile stress orthogonal to steps. In this case, Schmid analysis reveals that Shockley partials with a Burgers vector orthogonal to the surface step are predicted to be activated in {111} planes in which glide reduces the steps height [START_REF] Brochard | Grilhé Dislocation nucleation from surface steps: atomistic simulation in aluminum[END_REF]. In some calculations, consecutive atoms have been removed in the step edge, forming a notch (Fig. 1), for investigating the effect of step irregularities on the plasticity. Various crystal dimensions have been considered, from 24 × 16 × 10 (3680 atoms), up to 60 × 40 × 60 (142800 atoms). The latter crystal size was shown to be large enough to have no influence on the results. 
Interactions between aluminum atoms are described by an embedded atom method (EAM) potential, fitted on experimental values of cohesive energy, elastic moduli, vacancy formation and intrinsic stacking fault energies [START_REF] Aslanides | Atomistic study of dislocation cores in aluminum and copper[END_REF]. It is well suited for our investigations since it correctly reproduces the dislocation core structures. Fig. 1. System used in simulations, with periodic boundary conditions along X and Z, and free {100} surfaces. The {111} glide planes passing through the step edges are drawn (dashed lines). Here, a notch is built on the right-side step. Without temperature, the system energy is minimized using a conjugate-gradient algorithm. The relaxation is stopped when all forces fall below 6.24 × 10⁻⁶ eV Å⁻¹. Then the crystal is elongated by 1% of its original length along the X direction, i.e. perpendicular to the step. The corresponding strain is applied along the Z direction, according to the isotropic Poisson's ratio of aluminum (0.35). Use of isotropic elasticity theory is justified here by the very low anisotropy coefficient of this material: A = 2C44/(C11 − C12) = 1.07 (EAM potential used here); 1.22 (experiments [START_REF] Zener | Elasticity and Anelasticity of Metals[END_REF][START_REF] Thomas | Third-order elastic constants of aluminum[END_REF]). After deformation, a new energy minimization is performed, and this process is repeated until a plastic event, such as the nucleation of a dislocation, is observed. The occurrence of such an event defines the elastic limit of the material at 0K. At finite temperature, molecular dynamics simulations are performed with the xMD code [START_REF] Rifkin | Xmd molecular dynamics program[END_REF], using the same EAM potential. Temperature is introduced by initially assigning an appropriate Maxwell-Boltzmann distribution of atomic velocities, and maintained by smooth rescaling at each dynamics step. The time step is 4 × 10⁻¹⁵ s, small enough to produce no energy drift during a 300K run. After 5000 steps, i.e. 20 ps, the crystal is deformed by 1%, similarly to what is done at 0K, and then the simulation is continued. If a nucleation event occurs, the simulation is restarted from a previously saved state using a smaller 0.1% deformation increment. To visualize formed defects, atoms are colored as a function of a centrosymmetry criterion [START_REF] Li | Atomeye : An efficient atomistic configuration viewer[END_REF]: atoms not in a perfect f.c.c. environment, i.e. atoms on surfaces, in dislocation cores and stacking faults, can then be easily distinguished. In case of dislocation formation, the core position and Burgers vector are determined by computing the relative displacements of atoms in the glide plane. These displacements are then normalized to the edge and screw components of a perfect dislocation. At 0K, the deformation is found to be purely elastic up to an elongation of 10%. Then a significant decrease of the total energy suggests an important atomic reorganisation. Crystal visualization reveals the presence of defects located in {111} planes passing through step edges, and a step height reduction by 2/3 (Fig. 2). Analysis of the atomic displacements in these planes shows that plasticity has occurred by the nucleation of dislocations, with Burgers vectors orthogonal to the steps and with a magnitude corresponding to a 90° partial. This is consistent with the 2/3 reduction of the step height.
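The 2/3 step-height reduction follows from simple geometry; a sketch of the argument, with one consistent (and here assumed, since explicit Burgers vector indices are not given above) choice of glide plane and Burgers vector:

```latex
h_{\mathrm{step}}=\tfrac{a}{2}\ \text{on }(100),\qquad
\mathbf{b}_{90^{\circ}}=\tfrac{a}{6}[2\,1\,\bar{1}]\ \text{in }(1\bar{1}1)\ \text{containing the step line }[011],\qquad
(\mathbf{b}_{90^{\circ}})_{[100]}=\tfrac{a}{3}
\;\Rightarrow\;
h_{\mathrm{after}}=\tfrac{a}{2}-\tfrac{a}{3}=\tfrac{a}{6}=\tfrac{1}{3}\,h_{\mathrm{step}},
```

i.e. glide of a single 90° Shockley partial lowers the monoatomic step to one third of its initial height, a reduction by 2/3.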
The dislocations are straight, the strain being homogeneous all along the steps, and intrinsic stacking faults are left behind. The formation of dislocations from a surface step has already been investigated from 0K simulations, using quasi-bidimensional aluminum crystals [START_REF] Brochard | Grilhé Dislocation nucleation from surface steps: atomistic simulation in aluminum[END_REF]. It has been shown that straight 90 • Shockley partials nucleate from the steps. However, the small dimension of the step line did not allow the bending of dislocations. Here, although this restriction has been removed by considering larger crystals (up to 90 atomic planes along Z), only straight partial dislocations have been obtained. In order to bring the role of steps to light, calculations are performed with a 30 × 20 × 30 crystal with two free surfaces, but without step. In that case, plasticity occurs for a much larger elongation, 20%, and leads to a complex defects structure. It clearly shows the important role played by steps, by significantly reducing the energy barrier due to dislocation-surface interaction, and initiating the nucleation in specific glide planes. The effect of temperature has been first investigated at 300K. Plasticity occurs for a 6.6% elongation, showing that thermal activation significantly reduces the elastic limit. Another important difference due to temperature is the geometry of the formed defect. Instead of a straight dislocation, a dislocation half-loop forms and propagates throughout the crystal (Fig. 3). As expected, the nucleation of a half-loop dislocation is thermally activated. Contrary to the 0K simulation, a dislocation has nucleated from only one step: no dislocation is emitted from the other surface step, which remains intact. Atomic dis- placements at different simulation times (Fig. 4) indicates that this half-loop dislocation has a Burgers vector orthogonal to the step, the screw component being almost zero. The formed dislocation is then a Shockley partial, leaving a stacking fault in its path. Atomic displacements have been fitted with an arctan function, according to elasticity theory. This allows to monitor the position of the dislocation core defined as the maximum of the derivative, during the simulation. Before the complete propagation of a dislocation, several half-loop embryos starting from both steps have been observed, appearing and disappearing (Fig. 3). Only one of them will eventually become large enough and propagate into the crystal (Fig. 3). This is related to the existence of a critical size for the dislocation formation, due to attractive interaction with the free surface. As the dislocation moves through the crystal and reaches the opposite surface, a trailing partial does not nucleate. Though it would significantly reduce the total energy of the system, especially in aluminum which have a high stacking-fault energy, this would require the crossing of a high energy barrier. On the contrary, the successive nucleation of dislocations in adjacent {111} can be achieved with a much lower energy barrier. So, although it relaxes less energy than a trailing partial would, this mechanism is more likely to be activated. This is what we obtained in most simulations, similar to the twinning mechanism proposed by Pirouz [START_REF] Pirouz | Partial dislocation formation in semiconductors: structure, properties and their role in strain relaxation[END_REF]. 
Before the complete propagation of a dislocation, several half-loop embryos starting from both steps have been observed, appearing and disappearing (Fig. 3). Only one of them eventually becomes large enough and propagates into the crystal (Fig. 3). This is related to the existence of a critical size for dislocation formation, due to the attractive interaction with the free surface. As the dislocation moves through the crystal and reaches the opposite surface, a trailing partial does not nucleate. Though it would significantly reduce the total energy of the system, especially in aluminum, which has a high stacking-fault energy, this would require the crossing of a high energy barrier. On the contrary, the successive nucleation of dislocations in adjacent {111} planes can be achieved with a much lower energy barrier. So, although it relaxes less energy than a trailing partial would, this mechanism is more likely to be activated. This is what we obtained in most simulations, similar to the twinning mechanism proposed by Pirouz [START_REF] Pirouz | Partial dislocation formation in semiconductors: structure, properties and their role in strain relaxation[END_REF].

The remaining smaller step on the top surface, as well as the step created by the emergent dislocation on the bottom surface, become privileged sites for the nucleation of other dislocations in adjacent {111} planes, leading to the formation of a twin. While sufficient stress remains, successive faulted half-loops are formed in adjacent planes, increasing the thickness of the twin. After 76 ps, the crystal structure does not evolve anymore. The plastic deformation is then characterized by a micro-twin (Fig. 3), located around the previous position of the step, with an extension of eight atomic planes, and delimited by two twin boundaries whose total energy equals the energy of an intrinsic stacking fault.

We have also investigated how the dislocation formation process is modified in the case of irregular steps. We used a crystal with the same geometry, except that 10 consecutive atoms have been removed from one surface step edge (see Fig. 1), creating two step kinks between which lies a notch. The other step remains straight. First, at 0 K, no defect is obtained up to 10% elongation, beyond which plasticity occurs. This elastic limit is similar to the one obtained for the system with perfect steps. Moreover, the nucleated dislocations are also Shockley partials with a Burgers vector orthogonal to the step, and they are emitted from both surface steps, despite the system asymmetry. However, two dislocations have been formed from the irregular step. In fact, a second partial nucleates and propagates in the {111} plane passing through the notch (Fig. 5). Both dislocations remain in their respective glide planes, leaving two stacking faults. This suggests that kinks are strong anchors for dislocations. Nevertheless, at 0 K, they seem to have a negligible effect on the elastic limit and on the nature of the nucleation event.

At 300 K and for the same geometry, the elastic limit is reached for a 6.6% elongation, i.e. similar to the crystal with straight steps. Again, this suggests that irregular steps have no effect on the elastic limit. The dislocation half-loop does not nucleate from a step kink, but about 15 atomic planes away from it (Fig. 6a). It propagates into the crystal, but stays anchored to the kink, which acts as an obstacle to its motion. Then, another dislocation nucleates in the adjacent {111} plane, within the notch (Fig. 6b). Another simulation on a similar system leads to a dislocation nucleation from the straight step, despite the presence of a kinked step. These results show that kinks are not preferential sites for nucleation. This can be explained by the fact that step kinks are 0-D defects, contrary to straight steps, which prevents them from initiating 1-D defects such as dislocations. After the first nucleation, the twinning mechanism already described above is observed. At about 70 ps, the formed twin cannot be distinguished from the one obtained in the crystal with straight steps. Finally, there is no indication left as to whether the step was initially irregular or not.

Molecular dynamics simulations have been used to investigate the influence of temperature and of step geometry on the first stages of plasticity in f.c.c. aluminum slabs. Surface steps were shown to be privileged sites for the nucleation of dislocations, significantly reducing the elastic limit compared to a perfect surface. Simulations with straight surface steps have revealed that only straight 90° dislocations could nucleate at 0 K.
Temperature reduces the elastic limit, and makes possible the nucleation of faulted dislocation half-loops. Due to the system geometry and the strain orientation, only Shockley partials were obtained. Successive nucleations of partials in adjacent {111} planes are observed, similar to the twinning mechanism described by Pirouz in semiconductors. Simulations with an irregular step have shown that a kink is not a systematic site for nucleation. Instead, half-loops have been obtained from a straight portion of the step. The kinks introduced along a step seem to be strong anchor points for dislocations, making their motion along the step more difficult.

During all simulations including temperature, several dislocation half-loop embryos were observed before one eventually becomes large enough and propagates into the crystal. Calculations are in progress to determine the critical size a half-loop must reach to fully propagate. To determine the activation energy of the nucleation from surface steps, two methods may be used. First, the nudged elastic band method [START_REF] Jonsson | Classical and Quantum Dynamics in Condensed Phase Simulations[END_REF][START_REF] Henkelman | Improved tangent estimate in the nudged elastic band method for finding minimum energy paths and saddle points[END_REF][START_REF] Henkelman | A climbing image nudged elastic band method for finding saddle points and minimum energy paths[END_REF], applied to the nucleation and propagation of a half-loop, would provide the minimum energy path for this event. Second, by performing several simulations at a given strain, one would obtain the average nucleation time as a function of temperature, thus allowing determination of the activation energy from Arrhenius plots. The dislocation speeds, as well as the size and shape of the dislocation half-loops, can be expected to depend on temperature, which will also be investigated through simulations.

As a sequel to the nucleation event, several scenarios were observed. The twinning mechanism is supposed to be in competition with the nucleation of a trailing partial, which requires the crossing of a higher energy barrier. However, this last mechanism was obtained in one simulation, showing that it remains possible. Further investigations would make it possible to determine its exact dependence on temperature, strain, or other parameters.

P. Hirel's PhD work is supported by the Région Poitou-Charentes. We gratefully acknowledge the Agence Nationale de la Recherche for financing the project (number ANR-06-blan-0250).

Fig. 2. Formation of two dislocations at 0 K, after a 10% elongation of a 60 × 40 × 60 crystal. Initial positions of the surface steps are shown (arrows). Only atoms that are not in a perfect f.c.c. environment are drawn: surfaces (yellow-green), stacking fault (dark blue), dislocation cores (light blue) (color online).

Fig. 3. Evolution of the aluminum crystal after a 6.6% elongation at 300 K. Same color convention as Fig. 2. The origin of time is when the applied strain is increased to 6.6%. (a) At 12 ps, several dislocation embryos have appeared on both steps (arrows). (b) At 20 ps, a faulted half-loop dislocation has nucleated on one step. (c) After 76 ps, a stable twin has formed. The other step (on the right) remains intact.
Fig. 4. Calculated edge component of the relative displacements of atoms in the activated glide plane and in the Z-layer corresponding to the dislocation front line, at 300 K and for different times (triangles). They are fitted with an arctan function (solid lines) according to elasticity theory, in order to monitor the dislocation core position during the simulation. The abscissa labels the depth of the atoms from the top surface: 1 corresponds to atoms at the edge of the initial step, and 40 corresponds to the opposite surface, at the bottom of the system.

Fig. 5. Dislocations nucleated in a crystal with one straight step and one irregular step, elongated by 10% at 0 K.

Fig. 6. Evolution of the aluminum crystal with an irregular step, under a 6.6% elongation at 300 K. Same color and time conventions as in Fig. 3. The position of the notch is highlighted in red. (a) After 7.4 ps, a faulted half-loop dislocation nucleates in the original {111} plane. (b) At 10 ps, another dislocation is emitted in the adjacent {111} plane, passing through the notch.
21,496
[ "1384141", "1364394", "177609" ]
[ "968", "968", "968", "968" ]
01742595
en
[ "phys" ]
2024/03/05 22:32:10
2018
https://inria.hal.science/hal-01742595/file/paper.pdf
Francois Sanson email: [email protected] Francesco Panerai Thierry E Magin Pietro M Congedo

Robust reconstruction of the catalytic properties of thermal protection materials from sparse high-enthalpy facility experimental data

Keywords: Uncertainty Quantification, Bayesian Inference, Catalysis, Thermal Protection Systems

Quantifying the catalytic properties of reusable thermal protection system materials is essential for the design of atmospheric entry vehicles. Their properties quantify the recombination of oxygen and nitrogen atoms into molecules, and allow for accurate computation of the heat flux to the spacecraft. Their rebuilding from ground test data, however, is not straightforward and is subject to uncertainties. We propose a fully Bayesian approach to reconstruct the catalytic properties of ceramic matrix composites from sparse high-enthalpy facility experimental data with uncertainty estimates. The results are compared to those obtained by means of an alternative reconstruction procedure, where the experimental measurements are also treated as random variables but propagated through a deterministic solver. For the testing conditions presented in this work, the contribution of the molecular recombination to the measured heat flux is negligible. Therefore, the material catalytic property cannot be estimated precisely.

Introduction

In the design of thermal protection systems for atmospheric entry vehicles, the catalytic properties of the heatshield material allow us to quantify the influence of the highly exothermic molecular recombinations occurring at the surface. In order to estimate these properties for a given material, ground-based high-enthalpy facilities are used to simulate flight conditions at the material surface and to provide relevant experimental data [START_REF] Chazot | Hypersonic Nonequilibrium Flows: Fundamentals and Recent Advances[END_REF]. The plasma flow can be achieved using different techniques. In inductively-coupled plasma (ICP) wind tunnels, often referred to as plasmatrons, the plasma is generated by electromagnetic induction. A strong electromagnetic field ionizes the flow confined in a cylindrical torch, and the plasma jet exits at subsonic speed into a low-pressure test chamber that hosts material probes. The stagnation-point conditions corresponding to a given spacecraft entry are reproduced for several minutes, and the plasma flow carries sufficient energy to reproduce the actual aerothermal loads experienced by a thermal protection system (TPS) in flight. Thanks to a flow of high chemical purity, plasmatron facilities are particularly suited to the study of gas/surface interaction phenomena for reusable TPS materials [START_REF] Kolesnikov | RTO-EN-AVT-008 -Measurement Techniques for High Enthalpy and Plasma Flows[END_REF][START_REF] Marschall | [END_REF]4,5,6,7,8,9,10,11] or composite ablative materials [12]. High-temperature experiments enable characterizing the catalytic properties of the tested TPS sample by combining direct measurements from various diagnostics with a numerical reconstruction based on computational fluid dynamics (CFD) simulations. Even for well-characterized facilities, the determination of catalytic properties is affected by the noise present in the experimental data. The quantification of uncertainties in high-enthalpy experiments has previously been studied in the literature [13,14,15,16].
In particular, in our previous work [16], we evaluated the uncertainties on catalytic properties by coupling a deterministic catalytic property estimation with a Polynomial Chaos (PC) expansion method. The probabilistic treatment of the uncertainties helped mitigating over-conservative uncertainty estimates found in the literature by computing confidence intervals. The influence of the epistemic uncertainty on the catalytic property of a reference calorimeter used in the reconstruction was also investigated in [16]. However, the method developed has two shortcomings: the number of experiments is limited and statistics about the measurements distribution are not available, even though they are an essential input for the PC approach. Two important aspects are explored in the present work. First, we develop a robust methodology for quantifying the uncertainties on the catalytic property following a Bayesian approach. The Bayesian framework has already been successfully applied to the study of graphite nitridation [14] and hightemperature kinetics [17], for model parameter estimation as well as for experimental design [18], but it is a novel approach for the case of reusable materials, bringing a new insight on the ceramic matrix composites on which this paper focuses. In a Bayesian approach, one computes the probability distribution of the possible values of a quantity of interest compatible with the experimental results and with prior information about the system. This is fundamentally different from the PC approach proposed in [16]. While both approaches aim at quantifying the uncertainty on the catalytic properties, the experimental data are direct inputs of the deterministic solver combined to the PC method, whereas they are observed outputs of a model for the Bayesian method. Second, a thorough comparison between the two methods is developed in order to explain the results obtained in view of their conceptual differences. We investigate the case of two experiments necessary for the reconstruction of the flow enthalpy and material catalytic property. The PC approach sequentially considers the experiments, whereas the Bayesian approach merges them into a unique simultaneous reconstruction. Additionally, the Bayesian approach has a major advantage: it allows us to determine the catalytic property of a reference copper calorimeter used in the reconstruction methodology, along with the catalytic property of the sample material. The robustness of the method is also examined for cases where the problem is not well posed, for instance when there are too many parameters to rebuild, and no sufficient information from the experiments. In this contribution, we propose to revisit measurements performed in a high-enthalpy ICP wind-tunnel (Plasmatron) at the von Karman Institute for Fluid Dynamics (VKI) to characterize the catalytic response of ceramic matrix composites. Based on the robust uncertainty quantification methodology developed, we will assess whether accurate information on the catalytic properties of these thermal protection materials can be extracted from the experimental data. The paper is structured as follows. In section 2, we recall the main features of the combined experimental/numerical methodology developed at VKI to analyze data obtained in the Plasmatron facility, and then, present the sources of experimental uncertainties involved in the process. In section 3, we reformulate the problem of determining the catalytic properties in a Bayesian framework. 
In Section 4, we apply this approach to experimental data presented in [4] and compare our results to the uncertainty estimates obtained in [16] by means of the PC approach.

Experimental/numerical methodology

The present study uses a set of data measured during an experimental campaign documented in [4]. The first section briefly recalls the quantities measured experimentally for each testing condition and their associated uncertainties, whereas the next section introduces the numerical simulations performed to rebuild quantities that cannot be directly measured. The last section introduces some uncertainty quantification terminology.

Experimental setup

In order to derive the catalytic property γ of a ceramic matrix composite sample, the reconstruction methodology used in [4] is based on two sequential experiments. The first step consists in rebuilding the free stream enthalpy h e of the plasma flow, using the cold wall heat flux measurement q cw from a copper calorimeter (see Fig. 1) of catalytic property γ ref . The uncertainties on the heat flux measurements were computed to be ±10%. Note that the quantity γ ref is a source of large uncertainties [16]. A commonly adopted assumption is to consider the surface as fully catalytic [START_REF] Kolesnikov | RTO-EN-AVT-008 -Measurement Techniques for High Enthalpy and Plasma Flows[END_REF]19]. While this is a conservative practice, there is compelling evidence that the actual surface of copper calorimeters is not fully catalytic, owing to the rapid oxidation of copper upon exposure to plasma. Numerous studies have been dedicated to characterizing the catalytic properties of copper and its surface oxides (CuO and Cu₂O) [10,13,20,21,[START_REF] Prok | Effect of surface preparation and gas flow on nitrogen atom surface recombination[END_REF][START_REF] Rosner | [END_REF][START_REF] Ammann | Heterogeneous recombination and heat transfer with dissociated nitrogen[END_REF][START_REF] Dickens | [END_REF]26,27,28,29,30,31,32,33,[START_REF] Nawaz | 44th AIAA Thermophysics Conference, AIAA[END_REF][START_REF] Cipullo | [END_REF][START_REF] Viladegut | 45th AIAA Thermophysics Conference[END_REF][START_REF] Driver | 45th AIAA Thermophysics Conference, AIAA 2015-2666[END_REF]. Together with the heat flux, the total pressure is measured during the first experiment. A water-cooled Pitot probe is introduced in the Plasmatron flow in order to measure the dynamic pressure P d (featuring an uncertainty of ±6%). The surface temperature of the water-cooled probes T cw is determined by measuring the temperature difference between the inlet and outlet water lines. The static pressure P s of the test chamber is measured with a 2 Pa accuracy.

In a second step, hot wall measurements are performed on the TPS material sample in order to determine its catalytic property γ, for a known test condition determined through the rebuilding of the cold-wall measurements. The emissivity ε of the sample is measured with 10% accuracy. The front-face temperature of the sample, T w,meas , is measured as well (Table 1). It is further assumed that the free stream flow is identical during both experiments and that local thermodynamic equilibrium (LTE) holds at the edge of the boundary layer. At steady state, the surface radiated heat flux is assumed to be equal to the incoming heat flux from the plasma flow.
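This radiative-equilibrium closure amounts to equating the radiated hot-wall heat flux, given by Stefan-Boltzmann's law, to the heat flux delivered by the plasma flow. A minimal sketch, with purely illustrative numbers rather than the S1/S8 test data:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_heat_flux(emissivity, t_wall_kelvin):
    """Hot-wall heat flux from the radiative-equilibrium assumption
    q_w,meas = sigma * eps_meas * T_w,meas**4."""
    return SIGMA * emissivity * t_wall_kelvin ** 4

# illustrative values only (assumed, not taken from the test campaign)
eps_meas, t_w_meas = 0.85, 1800.0
q_w_meas = radiated_heat_flux(eps_meas, t_w_meas)
print(f"q_w,meas ≈ {q_w_meas / 1e3:.0f} kW/m^2")   # ≈ 506 kW/m^2
```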
Numerical computations

The Plasmatron flow conditions in front of the TPS test sample are rebuilt using experimental data and a 1D non-equilibrium boundary layer (BL) solver [START_REF] Barbante | Accurate and Efficient Modelling of High Temperature Nonequilibrium Air Flows[END_REF][START_REF] Barbante | [END_REF] that propagates the flow field quantities from the outer edge of the BL to the stagnation point. The rebuilding methodology is sketched in Fig. 2. The BL solver computes the stagnation-point heat flux q cw (or q w for the TPS sample), which, mathematically, is a function of the probe geometry, the surface temperature T cw (or T w for the TPS sample), and the wall catalytic property γ ref of the reference calorimeter (or γ for the TPS sample), given the following set of plasma flow free stream parameters: enthalpy h e , pressure p e , and velocity v e . The PEGASE library [40], embedded with the boundary layer solver, provides the physico-chemical properties of the plasma flow. The BL solver can be called by a rebuilding code using Newton's method to determine the quantities h e and γ in a two-step strategy involving one rebuilding per experiment. The static pressure p e is assumed to be equal to the static pressure P s measured in the chamber. The enthalpy rebuilding uses the measured dynamic pressure P d to compute the free stream velocity v e using a viscous correction, as well as the heat flux q cw measured at the surface of the reference calorimeter to reconstruct the free stream enthalpy h e . In a second step, the results from the second experiment and the flow field parameters computed during the first step are combined to determine the sample material catalytic property γ. Despite the fact that a large number of inputs are measured or unknown, the method is fully deterministic and provides no indication about the uncertainty of the outputs. Our previous work [16] was based on the propagation of uncertainties using this inverse deterministic solver.

Uncertainty characterization in catalytic property reconstruction

The determination of the TPS catalytic property directly depends on experimental data, and intrinsically carries the uncertainty associated with the actual measurements. Uncertainty Quantification (UQ) tools model and quantify the error associated with the variables computed from uncertain inputs. Table 1 reviews the measured quantities and their uncertainties. The uncertainties can be classified into three categories:

• The measured quantities (MQ) come from the two experimental steps described earlier. The following quantities are measured: T cw,meas , q cw,meas , T w,meas , ε meas , P d,meas , P s,meas , namely the calorimeter probe temperature, the calorimeter probe heat flux, the sample temperature, the sample emissivity, and the plasma jet dynamic and static pressures. Note that the heat flux from the second experiment (q w,meas ) is not directly measured but derived from the quantities T w,meas and ε meas using Stefan-Boltzmann's law: q_w,meas = σ ε_meas T_w,meas^4. The MQ are aleatory quantities that are assumed to be noisy versions of their true values, denoted T cw , q cw , T w , ε, P d , P s . In this study, they are modeled as realizations of Gaussian distributions. The quantity T cw,meas denotes the measurement of the probe temperature, so we have: T_cw,meas = T_cw + ζ, (1) where ζ is the realization of a zero-mean Gaussian random variable.

• The quantities of interest (QoI) are the unknown quantities crucial to engineering applications.
In this study, the sample and probe catalytic properties, denoted γ and γ ref , along with the flow enthalpy h e , are the QoIs. The objective is not only to compute the most likely value of the catalytic property, or the one that minimizes the square error, but to compute the full probability distribution of all admissible values of the QoI given the measurements, for a thorough quantification of uncertainties.

• The nuisance parameters (NP) are unknown quantities that must be estimated along with the QoI in order to estimate the sample catalytic property. The quantities T cw , T w , P d , P s , ε are NPs, as they have to be estimated in order to run the BL solver used to derive the sample catalytic property.

Bayesian-based approach

One objective of this work is to make a joint estimation of the catalytic properties γ ref and γ of the reference calorimeter and sample material, respectively, along with the flow enthalpy h e , for a given set of experiments. In [16], a polynomial chaos expansion was built on top of the inverse deterministic solver described earlier. In this section, we detail the derivation of the probability distribution of these quantities given the experimental results using a Bayesian approach. This probability distribution is referred to as the posterior distribution. It carries all the necessary information for the uncertainty quantification analysis and provides a robust estimate of the uncertainty through confidence intervals and the variance. In section 3.1, the posterior distribution is decomposed into a ratio of probabilities using Bayes' rule (Eq. 4) that can be numerically evaluated. Detailed calculations of each term of the decomposition are then presented in section 5. Finally, the posterior distribution is numerically evaluated using a Markov Chain Monte Carlo (MCMC) algorithm described in the appendix. Figure 4 summarizes the rebuilding methodology from a Bayesian perspective. Note that, contrary to the deterministic strategy illustrated in Figure 2, the QoI are rebuilt using both experiments simultaneously. The differences between the two approaches are further discussed in section 4.

Bayesian framework

We recall that the heat flux to the sample material wall, q w,meas , is completely defined by the material emissivity ε meas and temperature T w,meas through Stefan-Boltzmann's law. Introducing the vector of measured quantities m = (T cw,meas , q cw,meas , T w,meas , ε meas , P d,meas , P s,meas ), the posterior probability of interest is written as P(γ_ref, h_e, γ | m). (2) Furthermore, the vector of NP is introduced as ω nuis = (T cw , T w , ε, P d , P s ). The posterior distribution of Eq. 2 is obtained by marginalizing the nuisance parameters out of the joint (non-marginalized) posterior P(γ_ref, h_e, γ, ω_nuis | m). (3) Let us now focus on the non-marginalized posterior of Eq. 3. The flowchart in Fig. 4 shows the relationships between the unknowns γ ref , h e , γ, ω nuis and the MQ (i.e., the vector m) and how they interact with each other. In order to evaluate Eq. 3, Bayes' rule is applied as follows:

P(γ_ref, h_e, γ, ω_nuis | m) = P(m | γ_ref, h_e, γ, ω_nuis) P(γ_ref, h_e, γ, ω_nuis) / P(m), (4)

where P(m | γ_ref, h_e, γ, ω_nuis) is the likelihood, P(γ_ref, h_e, γ, ω_nuis) the prior, and P(m) a normalization factor such that the probabilities add up to one.

Results

This section illustrates the results derived from the application of the Bayesian framework to the problem of interest. The objective is twofold: i) to compute an estimate of the QoI (flow enthalpy h e and catalytic properties γ ref and γ of the reference calorimeter and sample material) and ii) to compare the results with the uncertainty estimates obtained in [16] from a more standard PCE approach.
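Before turning to the results, the structure of Eq. 4 can be made concrete with a short sketch of an unnormalized log-posterior. This is a simplified illustration, not the solver used here: `bl_solver` is a hypothetical stand-in for the boundary-layer code, and the hot-wall terms are collapsed into a single residual on the radiated heat flux, whereas the actual likelihood keeps separate Gaussian terms for T w,meas and ε meas (Eqs. 14-17 in the appendix).

```python
import numpy as np

SIGMA_SB = 5.670374419e-8

def log_posterior(theta, meas, sig, bl_solver):
    """Unnormalised log-posterior (sketch of Eq. 4).
    theta     : dict with the QoI (gamma_ref, gamma, h_e) and the NP
                (T_cw, T_w, eps, P_d, P_s)
    meas, sig : dicts of measured values and measurement std deviations
    bl_solver : hypothetical callable giving the stagnation-point heat flux
                for a given catalytic property, enthalpy and NP."""
    # flat priors on the NP and h_e; Beta(1,1) on [1e-8, 1] for the
    # catalytic properties, so only the support matters here
    if not all(1e-8 < theta[k] < 1.0 for k in ("gamma_ref", "gamma")):
        return -np.inf
    q_cw = bl_solver(theta["gamma_ref"], theta["h_e"], theta)  # experiment 1
    q_w = bl_solver(theta["gamma"], theta["h_e"], theta)       # experiment 2
    q_w_meas = SIGMA_SB * meas["eps"] * meas["T_w"] ** 4
    resid = {"T_cw": meas["T_cw"] - theta["T_cw"],
             "q_cw": meas["q_cw"] - q_cw,
             "P_d": meas["P_d"] - theta["P_d"],
             "P_s": meas["P_s"] - theta["P_s"],
             "q_w": q_w_meas - q_w}
    return -0.5 * sum((r / sig[k]) ** 2 for k, r in resid.items())
```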
In order to demonstrate the potential of the Bayesian approach, two sets of experimental conditions are selected among the experiments presented in [4]. They are denoted as S1 and S8, as detailed in Table 2. For both experiments, we study the following two cases:

a. The calorimeter reference probe catalytic property γ ref is assumed to be constant and equal to 0.1 (Section 4.1). The results for the posterior distribution are presented. Uncertainty estimates are compared with the ones obtained in [16]. Qualitative and quantitative explanations of the differences between the results obtained by the two approaches are given.

b. Secondly, the probe catalytic property is treated as an unknown quantity determined along with the other NPs and QoIs (Section 4.3). Again, the results are compared against the method developed in [16].

Constant calorimeter reference probe catalytic property

The quantity γ ref is assumed to be constant and equal to 0.1, focusing on the computation of the posterior distribution of the flow enthalpy h e and material catalytic property γ. The statistical moments and the 95% confidence intervals are given in Tables 3 and 4 for the quantities h e and γ, respectively. Their mean values are in good agreement with the nominal results obtained in [4]. Figure 5 shows their distributions for sample S1. It is observed that the reconstructed quantities h e and γ both have symmetrical distributions. These results can be related to the typical S-shape enthalpy versus catalytic property curve reported in the literature [16,[START_REF] Viladegut | 45th AIAA Thermophysics Conference[END_REF]. In this case, most of the posterior lies within the high-gradient zone of the S-shape, meaning that small changes in catalytic property induce large variations in the computed heat flux at the wall, as they are related through a one-to-one mapping in that region. In other words, if the measured heat flux takes values in that region, it is expected that the catalytic property posterior will have limited variance. The Maximum A Posteriori (MAP) is defined as the maximum of the posterior probability density. It is an alternative point estimator to the mean of a QoI. In the special case of a Gaussian posterior, the MAP and the mean are equal. The analysis of sample S8 yields similar results and conclusions. The relative error, computed as the ratio of the 95% confidence interval (CI) to the mean, is one order of magnitude larger for the catalytic property than for the rebuilt enthalpy.

Comparison with the Polynomial-Chaos approach

This section compares the proposed Bayesian approach to the PC approach presented in [16]. In the deterministic solver, the two steps of the experiments are taken sequentially (cf. Figure 2 in [16]): first the flow field is computed using the results from the first experiment, namely the measurements of cold-wall heat flux and of the static and dynamic pressures. Then, the sample catalytic property is determined using the quantities rebuilt from the first experiment. In order to propagate uncertainties, a polynomial approximation of the solver was derived and used to generate the statistical moments of the sample catalytic property. More precisely, the MQ are the only inputs to the polynomial approximation in [16], whereas the probe catalytic property γ ref is kept constant. In order to include the uncertainty of the probe catalytic property, several polynomial approximations of the solver are computed in [16] for different values of γ ref .
(Table 2 reports, for samples S1 and S8, the values of q cw,meas [kW m⁻²], P s [Pa], P d [Pa], T cw,meas [K], h e [MJ kg⁻¹], T w,meas [K], ε meas [-] and γ [-].)

In the following section, we highlight the differences between the PC and Bayesian methods. Both qualitative (section 4.2.1) and quantitative (section 4.2.2) illustrations are provided. Sample S1 conditions are chosen for this exercise.

Qualitative differences between the PC and Bayesian methods

The PC and Bayesian approaches tackle the problem from different angles, leading to different results. The main differences between the two methods can be summarized as follows.

• The experimental data accumulated during the two reconstructions are not exploited in the same way. In the Bayesian formulation, the measurements are treated simultaneously in order to reconstruct the catalytic property distribution at once (cf. Fig. 4), whereas the PC approach coupled with the deterministic inverse problem uses sequential reconstructions of each quantity (see Fig. 2). In particular, the flow enthalpy is estimated in the PC approach using only the first experiment, whereas the Bayesian approach uses information from both experiments to rebuild the flow enthalpy. As mentioned in [41], in [4,16] the link between the two experiments acts like a valve: the information (or uncertainty) only goes one way. The information from the second experiment does not flow back to the determination of the flow enthalpy. Only information from the first experiment goes to the second reconstruction, via the boundary layer edge enthalpy h e . This method presents some similarities with the cut-model used in Bayesian networks [41], but it generally leads to a wrong posterior distribution.

• Input uncertainties are modeled differently. The PC approach makes stronger hypotheses about the input distribution by assuming that its mean is the experimental value. In the Bayesian framework, it is only assumed that the experimental value obtained is sampled from a Gaussian distribution whose mean is a function of the NP and QoI. The former is a strong assumption, since a single experimental result can be significantly different from the mean value.

• Not only are the input measurements modeled differently, but the way they are propagated also differs. The PC approach, and the results presented in [16], depend on the deterministic method used to solve the inverse problem. In fact, the PC approach only provides the variance of the outputs and higher statistical moments. On the other hand, the Bayesian method leads to an unbiased, asymptotically efficient estimation of the sample catalytic property [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF][START_REF] Gelman | Bayesian Data Analysis, Third Edition, Chapman & Hall/CRC Texts in Statistical Science[END_REF].

• Finally, the Bayesian approach offers more flexibility for adding uncertainties without major issues in computational time, whereas the PC approach is limited by the curse of dimensionality [START_REF] Bellman | Applied dynamic programming[END_REF][START_REF] Foo | [END_REF], i.e., the loss of convergence speed of the numerical method when an increasing number of uncertainties is considered. Moreover, the Bayesian framework is well suited for modeling epistemic uncertainty, such as the reference probe catalytic property. In the method developed in [16], this property is not modeled as a distribution, since no information is available to characterize it.
Therefore, a limited set of values of γ ref on an arbitrary interval are tested to provide an envelope of the uncertainty on the QoI. On the other hand, the Bayesian implementation can use the information collected during the experiments to compute a posterior distribution of the reference probe catalytic property. Using that posterior distribution, the method yields a much more precise estimation of the uncertainty in the QoI along with an estimation of γ ref . Quantitative differences between the PC and Bayesian methods In this section, numerical tests are performed with sample S1 (see Table 2). The comparison focuses on the distributions of the material catalytic properties, as well as on the modeling uncertainties coming from the unknown catalytic property γ ref of the reference calorimeter. The reconstructions of the material catalytic property γ are first compared using a constant value of γ ref equal to 0.1. Although this case may be unrealistic, since the probe catalytic property is rarely well known, it illustrates the differences between the two methods in a basic setting. Figure 6 shows differences in the sample catalytic property distribution obtained with the PC [16] and Bayesian methods. Note that, in Table 5, the first moment of the two distributions are very close, however the standard deviations and the confidence intervals are significantly larger for the distribution obtained with the Bayesian approach. This explains the much larger magnitude of the relative error. Moreover, the MAP estimates are substantially different: for the Bayesian case, the distribution is skewed and the most probable value and the mean values of the sample catalytic property are different. This is not observed when using the Polynomial Chaos, since the catalytic property distribution is Gaussian. Since γ ref is rarely known, its variability and influence on the QoI are also investigated here. In particular, the approach used in [16] for including the epistemic uncertainty due to γ ref is compared to the Bayesian implementation. For the PC method, the uncertainty on the unknown. In Figure 7, the cumulative density function (CDF) of the flow enthalpy derived from the PC approach is plotted for the extreme values of γ ref , i.e. 1 and 0.01, as well as derived from the Bayesian approach with an a priori unknown value of γ ref . For both values of γ ref , the CDF obtained by means of the PC approach exhibits a much steeper increase compared to the state-of-the-art Bayesian approach, leading to a much more precise estimate of the uncertainty on the enthalpy. This is due to the different degree of knowledge of the probe catalytic property for the two methods. Since the Bayesian implementation uses the measurements to estimate the probe catalytic property, the uncertainty due to the epistemic quantity decreases. Conversely, for the PC implementation, no information about the probe catalytic property is available, leading to an overestimation of the uncertainty in the enthalpy. In summary, the Bayesian method makes a better use of the information available from the experiments and provides an optimal, reliable estimate of the uncertainty. The distributions of the material catalytic property obtained by means of the Bayesian approach with γ ref a priori unknown will be studied in the following section. 
Case where the reference probe catalytic property is unknown In contrast to an approach commonly followed in the literature, we consider here the value of the probe catalytic property to be unknown, instead of arbitrarily set to a constant value. Therefore, γ ref is determined along with the other unknown quantities and the target distribution is the new posterior: P (γ ref , h e , γ , ω nuis |T cw,meas , q cw,meas , T w,meas , ε meas , P d,meas , P s,meas ). Hence, the influence of the probe catalytic property on the sample catalytic property uncertainty can be rigorously quantified. Due to the increase in the number of unknowns and in order to increase the speed of convergence of the MCMC algorithm, the Markov Chain is adapted using the Adapted Metropolis (AM) algorithm presented in [46] with a modification from [47] (see algorithms 4 and 5). This approach is more precise and more flexible than the approach used in [16] where a robust brute force method is presented to explore the influence of the probe catalytic property. In this work, the Bayesian approach gives finer results thanks to a better knowledge of the probe catalytic property. Results obtained for the estimation of the material catalytic property γ, flow enthalpy h e , and reference calorimeter catalytic property γ ref for the two samples S1 ans S8 are presented. Figure 8 shows the distribution of γ ref and Table 6 summarizes their statistics. Means and variances results should be used with care as the computed distributions are extremely far from Gaussian. Based on the experimental data of sample S1, the computed value for the reference probe catalytic property is 0.018, as shown in table 6. This result indicates that the assumption of γ ref = 0.1 utilized in [4] is over-conservative. The results obtained for γ ref for the two conditions (S1 and S8) are rather different but not contradictory. The relative error is extremely large. Note that with sample S1, γ ref can be estimated with slightly more accuracy than with sample S8. This observation shows that the precision of the determination of the estimation of γ ref depends on the experimental conditions and not only on the accuracy on the measurements. The addition of an extra NP increases the uncertainty on the QoI and other NP. Figure 9 shows the distribution of h e for sample S1 that can be compared to earlier results presented in Figure 5 for the case with a constant γ ref . The distribution support is significantly increased and shifted toward higher values. This change can be explained by a simple physical reasoning: for the same value of the experimental heat flux measurement, the reference probe catalytic property has been estimated by means of the Bayesian approach to a value of 0.018 much lower than 0.1. Consequently, the contribution to the heat flux due to catalytic recombination is lower than in the γ ref = 0.1 case and the contribution from the convective heat flux therefore becomes larger and the flow enthalpy is estimated as well to a higher value than in the γ ref = 0.1 case. Figure 10 shows the distribution of the material catalytic property for samples S1 and S8. For both samples, the material catalytic property uncertainty is much more widespread with respect to the previous case where quantity γ ref was assumed to be constant. In particular, the support of the distribution covers eight orders of magnitudes and does not present a clear maximum for a precise a posteriori estimation. 
In the case of an unknown quantity γ ref , the experiments do not contain sufficient information. Indeed, one can notice that the posterior distribution is similar to the beta prior distribution, meaning that the likelihood is not informative in this case. Even though the support of the distribution is extremely large and seems non-informative, some remarks can be made about the CDF. In particular, even the slight uncertainty on the determination of the flow enthalpy is associated with a large uncertainty on the catalytic property of the material. This means that, for the range of enthalpy between 4 MJ/kg and 8 MJ/kg (see Fig. 9), it is challenging to precisely estimate the sample catalytic property for those testing conditions. To illustrate the problem, Figure 12 shows the Bayesian reconstruction of the sample catalytic property for a case where the probe catalytic property γ ref is set to a constant value of 0.02. The sample material experiment considered here is S1. The bivariate distribution of the flow enthalpy h e and material catalytic property γ shows that, for a given flow enthalpy, the curve of enthalpy versus catalytic property has a very low gradient. Even though the probe catalytic property is known and constant, the uncertainty is comparable to the case where the probe catalytic property has to be computed. Therefore, the increase of uncertainty in the sample catalytic property is due to the experimental conditions rather than to the precision of the measurements. This remark shows that, while the specific experimental condition had been selected based on a relevant flight environment, it is not optimal for accurately estimating the TPS material catalytic property. A similar conclusion can be made for sample S8.

Conclusion

In this study, a rigorous method for estimating the catalytic property of a material and the associated uncertainties is presented. By comparing a Bayesian approach with an alternative uncertainty quantification method presented in [16], we showed that the two methods do not yield the same results. By construction, the Bayesian approach is better suited to cases where only a limited number of experiments is available, while the approach presented in [16] makes stronger assumptions on the measurement distribution that are only valid when a large number of experiments is available. Moreover, we found that the Bayesian approach is also more flexible, as it can naturally include epistemic variables such as the unknown reference calorimeter catalytic property. The uncertainty analysis carried out in the case of the unknown reference calorimeter catalytic property showed that the experimental setup is not adequate to precisely estimate the catalytic property of a given material. For the testing conditions presented in this work, the contribution of the molecular recombination to the measured heat flux is negligible. Therefore, the material catalytic property cannot be estimated precisely. Conversely, in this study, we were able to obtain an estimate of the reference calorimeter catalytic property. We found that the assumption of a constant value γ ref = 0.1 is not valid and introduces a bias in the estimation of the material catalytic property. As future work, we propose to identify experimental conditions that are optimal for accurately estimating the TPS material catalytic properties.

In the factorized likelihood, P(T_w,meas, ε_meas | γ, h_e, ω_nuis) is the likelihood of the measurements obtained during the catalytic property reconstruction (see the appendix).
Note that the quantity γ ref is solely involved in the first experiment, whereas the quantity γ is solely involved in the second one. However, both experiments are still connected through the free stream conditions (such as the enthalpy h e ), which are assumed to be constant for both probes, injected sequentially in the plasma jet. The two likelihoods can still be computed in two different steps, as shown in the following sections.

Derivation of the first experiment likelihood

The enthalpy rebuilding step does not involve ε meas and T w,meas . Since the measurements are considered independent, the expression becomes:

P(T_cw,meas, q_cw,meas, P_d,meas, P_s,meas | γ_ref, h_e, ω_nuis) = P(T_cw,meas | γ_ref, h_e, ω_nuis) P(q_cw,meas | γ_ref, h_e, ω_nuis) P(P_d,meas | γ_ref, h_e, ω_nuis) P(P_s,meas | γ_ref, h_e, ω_nuis). (6)

Each term of the right-hand side has to be evaluated individually. For instance, for the cold wall surface temperature, one has:

P(T_cw,meas | γ_ref, h_e, ω_nuis) = P(T_cw,meas = T_cw + ζ | γ_ref, h_e, ω_nuis) = 1/√(2π σ²_Tcw,meas) exp(−(T_cw,meas − T_cw)² / (2σ²_Tcw,meas)).

The last equality comes from the fact that ζ is a zero-mean Gaussian random variable. Very similarly, one has:

P(q_cw,meas | γ_ref, h_e, ω_nuis) = P(q_cw,meas = q_cw + ζ | γ_ref, h_e, ω_nuis) = 1/√(2π σ²_qcw,meas) exp(−(q_cw,meas − q_cw)² / (2σ²_qcw,meas)). (8)

Derivation of the second experiment likelihood

For the second set of experiments, the material sample is tested in order to measure its catalytic property γ. The catalytic property rebuilding step consists in computing P(T_w,meas, ε_meas | γ, h_e, ω_nuis). In the rebuilding procedure, the heat flux radiated by the TPS is assumed to be equal to the heat flux q w from the flow to the TPS, which is computed by means of the BL solver. Mathematically, we have:

q_w(γ, h_e, ω_nuis) = σ ε T_w^4. (13)

Following the same procedure as for the enthalpy rebuilding, the likelihood for the catalytic property rebuilding takes the factorized form of Eq. 14, with Gaussian expressions for each factor (Eqs. 15-17). Injecting Eqs. 12 and 17 in Eq. 5 provides an explicit way to numerically evaluate the likelihood. Unfortunately, even though there are analytical expressions for the likelihood and the prior distribution, computing the posterior requires the normalization factor in Eq. 4, which is computationally intractable in this study. To bypass that issue, a classical Markov Chain Monte Carlo method is used to sample directly from the posterior without having to evaluate the normalization factor. In fact, the Metropolis algorithm enables sampling from the posterior distribution [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF][START_REF] Bremaud | Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues[END_REF]49,50]. Details of the implementation are given in Appendix B. In our case, for an efficient exploration of the distribution of γ, it is natural to choose the random walk as

h_e,n = h_e,n−1 + ξ_1,   ω_n = ω_n−1 + ξ_2,   log(γ_n) = log(γ_n−1) + ξ_3. (22)

Therefore, the random walk is not symmetrical for γ and the ratio becomes:

R = [P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n, h_e,n, ω_n) / P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n−1, h_e,n−1, ω_n−1)] × [(γ_n − γ_max)² (γ_n − γ_min)² γ_n−1] / [(γ_n−1 − γ_max)² (γ_n−1 − γ_min)² γ_n]. (23)

The rest of the implementation follows the MH algorithm described in [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF].

Figure 1: ESA standard probes (5 cm diameter) used for the measurements performed in the Plasmatron facility: (left to right) stagnation-point probe holding a material sample, copper calorimeter, and water-cooled Pitot probe.
Figure 2: Flow chart of the deterministic estimation of the material catalytic property.

Figure 3: Flow chart of the BL solver with its main inputs.

Figure 4: Flowchart of the Bayesian-based estimation of the material catalytic properties.

Figure 5: Bayesian reconstruction (γ ref = 0.1) of the material catalytic property on a semi-log scale (top) and flow enthalpy on a linear scale (bottom) for material sample S1.

Figure 11 compares the CDF of γ based on sample S1 in the two cases where γ ref is either constant (equal to 0.1) or unknown. The constant γ ref case is actually a worst-case scenario that overestimates the molecular recombination rate at the surface of the sample. The unknown γ ref case shows that the actual material sample catalytic property is certainly lower. Its distribution is hardly usable as it is, especially for the low values of γ, since for those the posterior is very similar to the arbitrary prior chosen for this study. However, the CDF remains useful to estimate probabilities and confidence intervals. Now, we investigate the reasons for the large increase in the γ uncertainty for an unknown γ ref compared to the constant case. It is partially due to the addition of γ ref as an NP, but also to the lower estimation of γ ref = 0.018, leading to an increase in the estimated flow enthalpy. The dependence of the material catalytic property on the flow enthalpy is weak. By inspecting the distributions of γ in Figure 10, one notices that these are flat, in particular for sample S8. In other words, the sample catalytic property does not influence the measured heat flux for the tested conditions. It follows that scarce information from the measured heat flux can be used to estimate γ.

Figure 8: Distribution of the reference probe catalytic property γ ref for samples S1 and S8 on a semi-log scale, obtained by means of the Bayesian approach.

Figure 9: Distribution of the flow enthalpy h e for sample S1 obtained by means of the Bayesian approach (γ ref a priori unknown).

Figure 10: Distribution of the material catalytic property γ for samples S1 and S8 on a semi-log scale, obtained by means of the Bayesian approach (γ ref a priori unknown).

Figure 11: CDF of the material catalytic property γ for sample S1 on a semi-log scale, obtained by means of the Bayesian approach (γ ref = 0.1 and γ ref a priori unknown).

Figure 12: Bivariate distribution of the flow enthalpy h e and material catalytic property γ for sample S1 on a semi-log scale, obtained by means of the Bayesian approach (γ ref = 0.02).

With P(ε_meas | γ, h_e, ω_nuis) given by a Gaussian expression analogous to Eq. 15, the catalytic property likelihood becomes:

P(T_w,meas, ε_meas | γ, h_e, ω_nuis) = 1/(2π σ_Tw,meas σ_ε,meas) exp(−(T_w,meas − T_w)² / (2σ²_Tw,meas) − (ε_meas − ε)² / (2σ²_ε,meas)). (17)

Table 1: Measured quantities used for the flow and sample material characterization.
Symbol — Variable — Uncertainty
P d,meas — Dynamic pressure — 6%
P s,meas — Static pressure — 0.3%
q cw,meas — Heat flux — 10%
T cw,meas — Probe temperature — 10%
T w,meas — TPS temperature — 1%
ε — Emissivity — 5%

The likelihood quantifies the amount of information carried by the measurements to the QoI and the NP. It is the probability of observing the measured quantities knowing the QoI and the NP. It measures the compatibility between the measurements and the values of the unknown parameters, such as the catalytic property of the material sample.
When the value of the catalytic property is compatible with the experimental results, the likelihood increases. The amount of this increase is directly related to the amount of information brought by the measurements. If the measurements are very informative, the increase (or decrease, if the catalytic property becomes less and less compatible with the experiments) is very steep. The prior accounts for the knowledge of the unknown parameters before any experiment. In our case, as scarce prior information is available for ω nuis and h e , uniform priors are considered. As γ and γ ref are defined on the interval [0;1], a beta distribution with parameters α = 1 and β = 1 is chosen, with a support of [10⁻⁸; 1]. The next section is devoted to the determination of the likelihood.

Table 2: Deterministic conditions for material samples S1 and S8. Here, the reported values of h e and γ are determined using the standard rebuilding procedure detailed in [4].

Table 3: Flow enthalpy h e [MJ kg⁻¹] statistics obtained by means of the Bayesian approach (γ ref = 0.1).
Sample — Mean — SD — MAP — 95% CI — UQ(95% CI) [%]
S1 — 6.0 — 0.43 — 6.06 — [5.06; 6.76] — 28.3
S8 — 9.7 — 0.43 — 6.66 — [8.80; 10.51] — 17.6

Table 4: Material catalytic property γ statistics obtained by means of the Bayesian approach (γ ref = 0.1).
Sample — Mean — SD — MAP — 95% CI — UQ(95% CI) [%]
S1 — 7.4e-3 — 4.1e-03 — 6.2e-3 — [2.4e-3; 1.7e-2] — 197.2
S8 — 3.7e-3 — 1.8e-03 — 2.7e-3 — [1.4e-3; 8.38e-3] — 188.6

(For the PC method, the uncertainty on the QoI due to the MQs is computed for discrete values of γ ref , whereas for the Bayesian method γ ref is a priori unknown.)

Table 5: Comparison between the statistics of the catalytic property γ for material sample S1 obtained by means of the PC and Bayesian approaches (γ ref = 0.1).
Method — Mean — SD — MAP — 95% CI — UQ(95% CI) [%]
Polynomial Chaos — 0.00747 — 1.6e-03 — 0.007 — [0.0045; 0.0094] — 65.6
Bayesian — 0.00747 — 4.1e-03 — 0.0059 — [0.0024; 0.017] — 195.4

Table 6: Reference probe catalytic property γ ref statistics obtained by means of the Bayesian approach.

The remaining terms of Eq. 6 are obtained in the same way:

P(P_d,meas | γ_ref, h_e, ω_nuis) = P(P_d,meas = P_d + ζ | γ_ref, h_e, ω_nuis) = 1/√(2π σ²_Pd,meas) exp(−(P_d,meas − P_d)² / (2σ²_Pd,meas)), (9)

P(P_s,meas | γ_ref, h_e, ω_nuis) = P(P_s,meas = P_s + ζ | γ_ref, h_e, ω_nuis) = 1/√(2π σ²_Ps,meas) exp(−(P_s,meas − P_s)² / (2σ²_Ps,meas)). (10)

Note that q cw can be computed using the BL solver, as it is a function of γ ref , h e and ω nuis . Finally, Eq. 6 becomes:

P(T_cw,meas, q_cw,meas, P_d,meas, P_s,meas | γ_ref, h_e, ω_nuis) = 1/((2π)² σ_Ps,meas σ_Pd,meas σ_Tcw,meas σ_qcw,meas) × exp(−(P_s,meas − P_s)²/(2σ²_Ps,meas) − (P_d,meas − P_d)²/(2σ²_Pd,meas) − (T_cw,meas − T_cw)²/(2σ²_Tcw,meas) − (q_cw,meas − q_cw)²/(2σ²_qcw,meas)). (12)

For the second experiment, the hot-wall likelihood factorizes as:

P(T_w,meas, ε_meas | γ, h_e, ω_nuis) = P(T_w,meas | γ, h_e, ω_nuis) P(ε_meas | γ, h_e, ω_nuis), (14)

and the following expression can be computed:

P(T_w,meas | γ, h_e, ω_nuis) = 1/√(2π σ²_Tw,meas) exp(−(T_w,meas − T_w)² / (2σ²_Tw,meas)). (15)

Acknowledgment

The authors would like to thank Andrien Todeschini for the fruitful discussions and remarks on the Bayesian approach.

Appendix A: Determination of the likelihood

The likelihood represents the link between the MQ, the QoI and the NP, and is directly related to the experiments.
The two experiments from the first and second steps are independent, so that the likelihood can be rewritten as:

P(m | γ_ref, h_e, γ, ω_nuis) = P(T_cw,meas, q_cw,meas, P_d,meas, P_s,meas | γ_ref, h_e, ω_nuis) × P(T_w,meas, ε_meas | γ, h_e, ω_nuis). (5)

A suitable choice of the transition probability drives the Markov Chain toward the desired distribution [50]. A complete proof of the convergence of the algorithm can be found in [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF][START_REF] Bremaud | Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues[END_REF]. In this section, the basics of the algorithm and the specificities of the implementation are presented. Consider a random-walk Markov Chain X_n on a state space S. In the case studied, S contains all the admissible values of the NPs and QoIs. Consider two states x and y ∈ S; the probability to go from x to y is P(x, y), referred to as the transition probability. Let π(x) be the distribution of X_n. If Σ_{x∈S} π(x) P(x, y) = π(y), then the distribution π is said to be invariant, or stationary. In the special case of random walks, the invariant distribution is unique and the random walk converges to π asymptotically (see [49] or [START_REF] Bremaud | Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues[END_REF]); in other words, no matter where the Markov Chain started, the distribution of X_n tends to π. The Metropolis algorithm uses the right transition probability P(x, y) such that π is the distribution of interest. It uses the following result from Markov Chain theory (cf. [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF] or [49] for further details): if π(x) P(x, y) = π(y) P(y, x), then π is the limiting distribution of X_n. The relation π(x) P(x, y) = π(y) P(y, x) is called the detailed balance equation. In short, the algorithm models a random walk, but at each step it adapts the proposed move so that the detailed balance equation is verified. Asymptotically, the Markov Chain samples from the stationary distribution, and the distribution can be estimated by a Monte Carlo computation once the chain has converged.

In our case, the state space has 6 or 7 dimensions and the Markov Chain we aim to build is X_n = (γ_n, h_e,n, ω_n). Since we are interested in the posterior distribution, we choose:

π(γ, h_e, ω) = P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ, h_e, ω) P(γ, h_e, ω) / P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas), (19)

which can be computed up to a normalization factor. The advantage of the Metropolis-Hastings (MH) algorithm is that it only uses the ratio of the target densities and transition probabilities between the current and proposed states, where P(n−1 → n) denotes the probability to go from state n−1 to state n. If the random walk is symmetrical, P(n−1 → n) = P(n → n−1) and the ratio reduces to the ratio of the posterior densities. Since the priors for h_e,n and ω_n are uniform and γ follows a beta distribution, the ratio simplifies into

R = P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n, h_e,n, ω_n) / P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n−1, h_e,n−1, ω_n−1),

up to the correction term of Eq. 23 accounting for the non-symmetric proposal on γ.
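A minimal random-walk Metropolis-Hastings sketch consistent with the description above is given below. It is an illustration rather than the implementation used in this work: γ is proposed in log-space as in Eq. 22 and the corresponding Jacobian factor is applied, whereas the full correction of Eq. 23 also folds in the bounds of the Beta prior; the nuisance-parameter updates are indicated only schematically.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_steps, step, seed=0):
    """Random-walk MH sampler; gamma is proposed in log-space (Eq. 22)."""
    rng = np.random.default_rng(seed)
    theta = dict(theta0)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = dict(theta)
        prop["h_e"] = theta["h_e"] + step["h_e"] * rng.standard_normal()
        prop["gamma"] = np.exp(np.log(theta["gamma"])
                               + step["log_gamma"] * rng.standard_normal())
        # ... analogous symmetric updates for the nuisance parameters ...
        lp_prop = log_post(prop)
        # Jacobian of the log-space proposal: factor gamma_prop / gamma_old
        log_r = lp_prop - lp + np.log(prop["gamma"] / theta["gamma"])
        if np.log(rng.uniform()) < log_r:
            theta, lp = prop, lp_prop
        chain.append(dict(theta))
    return chain
```

Posterior summaries such as the mean, MAP and 95% confidence intervals reported in Tables 3-6 can then be estimated from the stored chain after discarding a burn-in period.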
48,123
[ "740708", "3984" ]
[ "409746", "109232", "109232", "56044" ]
01480523
en
[ "phys" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01480523/file/delobelle2016.pdf
V Delobelle email: [email protected] G Chagnon D Favier T Alonso Study of electropulse heat treatment of cold worked NiTi wire: From uniform to localised tensile behaviour Electropulse heat treatment is a technique developed to realise fast heat treatment of NiTi shape memory alloys. This study investigates mechanical behaviour of cold worked NiTi wires heat treated with such a technique. It is demonstrated that milliseconds electropulses allow to realise homogeneous heat treatments and to adapt the mechanical behaviour of NiTi wires by controlling the electric energy. The material can be made elastic with different elastic modulus, perfectly superelastic with different stress plateau levels and superelastic with important local residual strain. Due to the short duration and high temperature of the heat treatment, this technique allows to obtain mechanical properties that cannot be obtained with classical heat treatments of several minutes in conventional furnaces such as linear evolution of the final loading and high tensile strength to 1500 MPa for superelastic material or increase of the stress plateau level with cycling for superelastic material. Introduction Since several years, NiTi shape memory alloys (SMA) are the most widely used SMA in engineering fields as reported by [START_REF] Van Humbeeck | Non-medical applications of shape memory alloys[END_REF], but more especially for biomedical applications as reviewed by [START_REF] Duerig | An overview of nitinol medical applications[END_REF] due to their excellent mechanical properties, corrosion resistance and biocompatibility. To design their applications, engineers use industrial basic components such as NiTi wires, tubes or plates. These components are generally shaped with several successive hot and cold rolling operations. During these operations, the material is severely deformed causing important grain size reduction, amorphisation of the material, and finally leading to a suppression of the phase transformation which confers the unique superelastic or ferroelastic properties to SMA as shown in [START_REF] Jiang | Nanocrystallization and amorphization of NiTi shape memory alloy under severe plastic deformation based on local canning compression[END_REF]. To restore these properties, annealing and ageing treatments are classically used as shown in [START_REF] Jiang | Crystallization of amorphous NiTi shape memory alloy fabricated by severe plastic deformation[END_REF] for example. These heat treatments are generally long, classically 60 min for annealing and 10-120 min for ageing step as proposed in [START_REF] Jiang | Effect of ageing treatment on the deformation behaviour of Ti-50.9 at.%[END_REF] for example. Such heat treatments are performed in conventional furnace so the entire sample is heat treated homogeneously. Recent studies investigated heat treatments of NiTi wires with Joule effect. Duration of such heat treatments is very dispersed. 
[START_REF] Zhu | The improved superelasticity of NiTi alloy via electropulsing treatment for minutes[END_REF] proposed heat treatment of several minutes duration, [START_REF] Wang | Effect of short time direct current heating on phase transformation and superelasticity of Ti-50.8 at.% Ni alloy[END_REF] and [START_REF] Malard | In situ investigation of the fast microstructure evolution during electropulse treatment of cold drawn NiTi wires[END_REF] of seconds duration and [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF] studied milliseconds heat treatments. Interest of such heat treatments is twofold: (i) reducing the time of heat treatment and (ii) performing local heat treatment. This last point is of key interest to realise architectured materials with multiple or graded mechanical properties as obtained in [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF] with 300 s heat treatment. For milliseconds heat treatments, [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF] focused on the microstructures observation of the material and few informations are available about transformation and mechanical properties of the created materials. Moreover, [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF] mentioned the presence of an important gradient during cooling phase of their heat treatments but the structure is supposed homogeneous. For longer heat treatments proposed in [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF], when a thermal gradient is applied to the sample, the material has graded mechanical properties. Thus, for milliseconds heat treatments, it is important to analyse the homogeneity of the created material from a mechanical point of view. In this study, heat treatments are realised with milliseconds electropulse. Investigation of the transformation and mechanical behaviours of the heat treated part of the NiTi SMA wires is realised. Transformation behaviour is studied by means of differential scanning calorimetry technique. Local mechanical behaviour is studied by means of digital image correlation (DIC) technique. Investigation of strain fields allows to study the impact of the heat treatment on the uniformity of the mechanical behaviour. In Section 2, experiments and methods are presented. In Section 3, results are described and discussed in Section 4. Experiments and methods Electropulse heat treatment The experiments were performed on cold worked Ti-50.8 at.% Ni SMA wire of diameter 0.5 mm, from the commercial provider Fort Wayne Metals (NiTi # 1). The as-received material was in 45% cold worked condition. Short time electrical pulses were generated with a direct current welder (commercial ref.: DC160, Sunstone Engineering) in wire of length L HT = 20 mm, as shown in Fig. 1a. The wire was maintained with two massive brass grips. 
Brass grips conduce electricity to the sample and act as a thermal mass. Thus, as shown in Fig. 1b, after heat treatment, the tips of the wire are unchanged and the centre is heat treated. In this study, six heat treatments, called A, B, C, D, E and F were carried out. During the heat treatment, voltage U W at the sample terminals and voltage U R = RI at the resistance terminal are measured, with R the resistor value and I the electrical current in the electrical loop, as shown in Fig. 1a. Power P = U W I dissipated in the wire is estimated. Evolutions of U W , I and P are presented in Fig. 2a,b,c, respectively. The dissipated power is almost constant during the heat treatment and equal to P = 3000 W. From these measurements, heat treatment duration T and final dissipated energies in the wire E = tP are estimated and summarised in Table 1 for all treatments. Note that the decrease of Uw = R wire I (Fig. 2a), where R wire is the wire electrical resistance, is in good agreement with [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF] that observed significant decrease of the electrical resistance of the wire during pulse annealing. An infrared camera associated to a high magnification lens (Commercial reference: Camera SC 7600, Flir) was used to record images of the wire during tests. Due to the symmetry of the experimental setup, measurements are presented only on the half of the wire. Then, due to the unknown variation of the wire emissivity during tests, only radiation values are presented. Fig. 3a shows the maximal wire radiation for tests A to E, obtained at the end of heating and measured along the main axis of the sample y. Fig. 3b shows the wire radiation measured along the main axis of the sample y during cooling of test D, from the maximal radiation obtained to room temperature. For all tests at the end of heating (Fig. 3a), close to the clamps, a strong thermal gradient is observed on 1 mm, due to the heat loss into the clamps. It is assumed that the bump observed between 1 and 5 mm is due to a reflection of the wire itself via the clamp to the camera which increases the radiation. Then, between 1 and 10 mm, the radiation and thus the temperature are uniform for all tests. Note that for test E presented in Fig. 3a, the plateau observed between 2 and 7 mm is due to a saturation of the infra-red sensors. During cooling (Fig. 3b), the observed gradient is due to the presence of the clamps acting as thermal mass, and increasing the cooling rate close to them. From this observation, it can be supposed that heat treatment is heterogeneous during the cooling phase. To estimate the sample temperature during the experiment, it is considered that the wire is submitted to: (i) step electrical power pulse P = 3000 W, of duration equal to times indicated in Table 1 presented in full lines in Fig. 4 and (ii) radiation and convection heat losses. During heating, the sample temperature is supposed uniform as observed in Fig. 3a. 
Thus, the sample temperature was estimated by solving the following heat diffusion equation:
$$m\,C(T)\,\frac{dT(t)}{dt} = P(t) - A\,h\,\bigl(T(t) - T_0\bigr) - A\,\varepsilon\,\sigma\,T^4(t) \qquad (1)$$
where m is the sample mass, C the heat capacity taken from [START_REF] Smith | The heat capacity of NiTi Alloys in the temperature range 120 to 800[END_REF], depending on the sample temperature T at instant t, A the exchange surface of the wire, h the convection coefficient, ε the wire emissivity, σ the Stefan-Boltzmann constant and T_0 the room temperature. From these estimations, the maximum temperatures T_max obtained at the end of the electropulse are summarised in Table 1. The maximum temperature estimated for heat treatment F is T_F = 1440 °C, which is higher than the melting temperature of almost equiatomic NiTi SMA, T_melting = 1310 °C. Experimentally, it was observed that the sample melted for such a pulse. Thus, experimental observation and theoretical approximations of the temperature are in good agreement. Due to the melting of the sample, experimental results cannot be presented for heat treatment F. The cooling time of the wire to room temperature is between 40 and 50 s, as shown in Fig. 4b. It remains important to keep in mind that these values are only estimations. As shown in Fig. 1b, the resulting wire is a material having two different properties. However, from Fig. 3, due to an important temperature gradient at the junction of the two materials, it can be assumed that a gradient of properties is also present between the two materials. Nevertheless, this study only focuses on the transformation and mechanical behaviours of the part of the wire heat treated homogeneously.
Transformation and mechanical behaviours study
The transformation behaviour of the materials was studied by means of DSC. DSC experiments were performed with a TA Q200 DSC between 90 and -90 °C with a heating/cooling rate of 10 °C min⁻¹. The transformation behaviour of the wire was then studied between 80 and -70 °C, when the cooling ramp is stabilised. DSC measurements were realised for all the specimens. All the tensile tests were performed using a Gabo Eplexor tensile machine. The tests were realised at room temperature T_0 ≈ 25 °C, at constant cross-head velocity $\dot{U}$ = 0.1 mm min⁻¹, where U is the cross-head displacement. The initial gauge length of the wire was L_0 = 18 mm; thus, the applied global strain rate was $\dot{U}/L_0$ = 9.3 × 10⁻⁵ s⁻¹. In this study the transition zone between heat treated material and as-received material is not studied (Fig. 1b). During the tensile test, the axial force F was recorded. The nominal stress σ = F/S_0, where S_0 is the initial section of the wire, is calculated. In this study, the local strain field ε_yy was estimated by means of the DIC method. The strain field is averaged along the main axis of the sample, in order to obtain the global strain of the material, noted ε in the following study.
Results
In the following, the cold worked material is noted CW. Then, the CW material heat treated with pulses A, B, C, D, E is called material A, B, C, D, E, respectively.
Transformation behaviour
Fig. 5 shows the transformation behaviour for the CW, A, B, C, D and E materials. The transformation behaviour of CW is flat and does not exhibit any peak (Fig. 5a). This result is in good agreement with [START_REF] Kurita | Transformation behavior in rolled NiTi[END_REF]. The transformation behaviour of material A remains flat (Fig. 5b). A small peak is observed for material B, as shown in the close-up of Fig. 5c. A difference of 10 °C between heating and cooling peak temperatures and a small heat of transformation of 2.8 J g⁻¹ are the signature of the austenite-R phase transformation (noted A-R).
The transformation peak temperatures at cooling and heating are about T_A-R = 10 °C and T_R-A = 20 °C, respectively. For material C (Fig. 5d), an A-R transformation is observed, with heats of transformation estimated to approximately 2.5 J g⁻¹ at cooling and heating. The transformation peak temperatures at cooling and heating are lower than with treatment B, with T_A-R = -18 °C and T_R-A = -7 °C, respectively. For material D (Fig. 5e), the austenite-martensite (noted A-M) transformation is observed, with peak transformation temperatures equal to T_A-M = -44 °C and T_M-A = -14 °C. Direct and reverse heats of transformation are estimated to be 11.0 J g⁻¹ and 13.5 J g⁻¹, respectively. An almost identical transformation behaviour is observed for material E (Fig. 5f), with an A-M transformation having heats of transformation equal to the ones found for material D. However, the peak transformation temperatures are higher than those of material D and are equal to T_A-M = -38 °C and T_M-A = -12 °C.
Mechanical behaviour
The mechanical tests presented in Fig. 6 are composed of a loading-unloading cycle and a final loading to failure. Fig. 6a shows the global mechanical behaviour of the CW material and material A. For material A, strain profiles estimated along the main axis of the wire are plotted in Fig. 6b for instants defined in Fig. 6a. The strain profiles are similar to the ones obtained for the CW material. Fig. 6c shows the global mechanical behaviour of materials B and C. For material C, strain profiles are plotted in Fig. 6d for instants defined in Fig. 6c. The strain profiles are similar to the ones obtained for material B. Finally, Fig. 6e shows the global mechanical behaviour of materials D and E. For material D, strain profiles are plotted in Fig. 6f for instants defined in Fig. 6e. The strain profiles are similar to the ones obtained for material E. Fig. 7a shows (i) the initial elastic modulus, noted E_ini, during the first loading and (ii) the elastic modulus after the localisation plateau, noted E_end, observed on the stress-strain curves. The slopes used to estimate the elastic moduli are sketched as dashed lines in Fig. 6. Fig. 7b shows the plateau stresses at loading and unloading and the hysteresis height, noted σ_high, σ_low and Δσ, respectively. The CW material exhibits a purely elastic, brittle behaviour (Fig. 6a), i.e. the stress-strain curve of the material is linear and no plasticity occurs before failure. Its elastic modulus is estimated at 53 GPa. The ultimate tensile strength is about 1500 MPa. With a short duration heat treatment, the material can be softened. Material A remains purely elastic and brittle, with a lower elastic modulus estimated at 43 GPa. The ultimate tensile strength remains high for a metallic material and is about 1500 MPa. For these two materials, the strain field is uniform along the wire axis (Fig. 6b). When increasing the heat treatment energy, materials B and C exhibit the classical behaviour of superelastic NiTi SMAs without residual strain (Fig. 6c and d). Stress plateaus due to direct and reverse phase transformations are observed at loading and unloading, with an important hysteresis of Δσ = 200 MPa for the two heat treatments. The stress plateaus are lower for material C because the maximum temperature reached by material C is higher than that of material B. The plateau stresses decrease between the first and second cycles, and the difference is estimated to -30 MPa and -10 MPa for materials B and C, respectively. For these two materials, classical localisation phenomena are observed during the stress plateau.
During the ultimate loading, when the material is stretched to the maximum stress reached during the first cycle, stress drops up to the value of the first cycle plateau. The elastic moduli of first slopes increase with heat treatment duration. It is estimated to 50 GPa and 68 GPa for materials B and C, respectively. The elastic moduli after stress plateau are estimated to be about 21 GPa and 25 GPa for materials B and C, respectively. Ultimate tensile strength is about 1300 MPa for both materials. Stress-strain evolution after the plateau is linear. The material exhibits brittle behaviour. For materials D and E, a superelastic behaviour with a residual strain is observed during the first loading, unloading cycle (Fig. 6e andf). Their loading plateau stresses are equal to 430 MPa and 450 MPa, respectively. Material D plateau stress is lower than C one. It is also lower than material E one while maximum temperature reached by material E is higher than material D one. Elastic moduli during the first slopes are estimated to 63 GPa for the two materials. During the ultimate loading, a non-classical behaviour is observed. When the plateau stress of the first loading is reached, the stress still increases with strain and reaches a plateau to a higher value. Then, when the material is stretched to the maximum strain reached during the first cycle, stress drops down to the stress value of the first cycle plateau. Then, global behaviour is identical to the behaviour observed during the first loading. Finally after the plateau, the elastic moduli presented in Fig. 6e are estimated to 20 GPa and 17 GPa for materials D and E Fig. 7a, respectively. Finally, strain hardening is observed. A maximal strain of 40% was reached for both materials without failure. During first loading, the local strain field exhibit localisation phenomenon as materials B and C. However, during the second loading, the local strain field is non-uniform with two localisation fronts. This specific local behaviour is not presented here but will be analysed in a forthcoming study. During the strain hardening phase, localisation is observed. Finally, from Fig. 6c ande, the transformation strain of materials B, C, D and E are estimated to be 5%, 7%, 9% and 10%, respectively. When the pulse time increases, the transformation strain increases too. Discussion On the homogeneity of the heat treatment In [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF], thermal gradient was observed during 300 s heat treatment, leading to mechanical graded material. Considering thermal gradient observed during cooling phase of the heat treatment (see Fig. 3), an identical mechanical behaviour could be assumed. However, if local strain behaviours are very different from one to another materials, the heat treatment is uniform along the main axis of the sample. Initially, material A deforms homogeneously as clearly shown in Fig. 6b. For materials B and C, during elastic phases, the strain fields are uniform (Fig. 6d instants a,b,e,f). For all superelastic materials, during the plateau, i.e. when localisation is observed, outside localisation front, the strain is uniform to a high or low strain value (Fig. 6d andf). The specific local behaviour of materials D and E during localisation zone and strain hardening, presented in the previous section, is due to the partial pre-straining of the sample during the first loading. 
Thus, even if the material temperature during cooling is heterogeneous, the materials are heat treated homogeneously. For such a heat treatment, the governing parameter is the maximal temperature reached during heating. Such a conclusion is not valid when increasing the duration of the heat treatment, as observed in [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF].
Mechanical properties and microstructure
Delville et al. (2010, 2011) studied the microstructure evolution of identical CW wires during milliseconds electropulse heat treatment. From a comparison of Fig. 6 of this study and Fig. 3 of [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF], it is considered that materials A, B, C, D, and E can be compared with the materials called 6 ms, 10 ms, 12 ms, 16 ms and 18 ms in their paper, respectively. In the following discussion, the microstructure is considered as the one summarised in Table 2, taken from [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF].
About materials elasticity
To begin, the observations that the CW and A materials, i.e. amorphous and polygonised materials, have a high elastic potential and exhibit a brittle behaviour (Fig. 6a and b) are in good agreement with [START_REF] Sergueeva | Structure and properties of amorphous and nanocrystalline NiTi prepared by severe plastic deformation and annealing[END_REF] and [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF]. [START_REF] Sergueeva | Structure and properties of amorphous and nanocrystalline NiTi prepared by severe plastic deformation and annealing[END_REF] showed that amorphous NiTi exhibits the classical properties of metallic glasses. In the report of [START_REF] Schuh | Mechanical behavior of amorphous alloys[END_REF] about the mechanical properties of metallic glasses, it is mentioned that the elastic modulus of an amorphous material is about 30% lower than that of the crystallised material. From Fig. 7a it is observed that the difference of elastic modulus between the CW and D materials, i.e. partly amorphous and crystallised materials, is about 20%. The order of magnitude is in good agreement with [START_REF] Schuh | Mechanical behavior of amorphous alloys[END_REF], and the difference is assumed to be due to the presence of an important amount of austenite phase in the CW material. [START_REF] Schuh | Mechanical behavior of amorphous alloys[END_REF] also indicated that the room temperature elastic modulus decreases with increasing annealing temperature, which is in good agreement with the measurements of the elastic modulus of the CW and A materials (Fig. 7a). The values of the elastic modulus E_ini given in Fig. 7a for materials B, C, D and E are generally associated with the austenite elastic modulus. The results are in good agreement with the scattered values found in the literature, comprised between 40 and 90 GPa, as mentioned in [START_REF] Liu | Apparent modulus of elasticity of near-equiatomic NiTi[END_REF].
However, for materials B and C, considering that the material is composed of polygonised material and nanocrystals, these values cannot be associated to Young modulus of crystallised austenite because it can be supposed that nanograins and polygonised part deform in different manner. Values of elastic modulus E end are generally associated to martensite elastic modulus. Results are in good agreement with dispersed values found in the literature comprised between 20 and 50 GPa as mentioned in [START_REF] Liu | Apparent modulus of elasticity of near-equiatomic NiTi[END_REF]. The specific mechanical behaviour obtained after superelastic plateau is discussed in Section 4.2.3. About superelasticity of materials Since several decades, it is known that superelasticity of NiTi SMA is due to the phase transformation from austenite to martensite on the material grains. In the proposed case, it is assumed that identical deformation mechanisms occur in the nanograins and micrograins of B, C and D, E materials, respectively. Other deformation mechanisms such as nanograins rotation and boundary sliding can occur in materials A, B and C, as mentioned in [START_REF] Sergueeva | Structure and properties of amorphous and nanocrystalline NiTi prepared by severe plastic deformation and annealing[END_REF], but cannot be proven or discussed from the proposed results. For materials D and E, the material is recrystallised and composed of micrograins and it is assumed that the classical deformation mechanisms are observed. The irreversible strain observed in the stress strain curve is due to the sliding of dislocations observed in [START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF]. Localisation phenomenon (Fig. 6d andf) is also classically associated to the superelastic behaviour obtained in tension. Results about superelasticity obtained via electropulse heat treatment are in good agreement with the literature about superelasticity obtained via classical heat treatment as in [START_REF] Jiang | Effect of ageing treatment on the deformation behaviour of Ti-50.9 at.%[END_REF]: increasing heat treatment temperature decreases the plateau stress and increases stress hysteresis (Fig. 7b). For materials B and C, the plateau stress decreases with cycling as observed in [START_REF] Liu | Effect of pseudoelastic cycling on the Clausius-Clapeyron relation for stress induced martensitic transformation in NiTi[END_REF] and the yield drop phenomenon is observed as in [START_REF] Eucken | The effects of pseudoelastic prestraining on the tensile behaviour and two-way shape memory effect in aged NiTi[END_REF], for example. However, longer electropulse heat treatments, as for material D and E, create a unique mechanical behaviour: the plateau stress increases with cycling. This phenomenon is characteristic of electropulse heat treatments and was also observed in [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF]. The yield drop phenomenon is also observed but from a higher stress value. For these materials, very specific localisation phenomenon can be observed and it will be developed in an other study. Brittle behaviour vs. strain hardening In Fig. 6c, for materials B and C, the ultimate loading is linear and the material is brittle with important tensile strength. 
This behaviour is similar to the one observed for the CW and A materials (Fig. 6a). With classical heat treatments, strain hardening is classically observed in cold worked material directly aged, as in [START_REF] Saikrishna | On stability of NiTi wire during thermo-mechanical cycling[END_REF], or annealed and aged, as in [START_REF] Jiang | Effect of ageing treatment on the deformation behaviour of Ti-50.9 at.%[END_REF]. Materials having a linear and potentially elastic behaviour after the transformation plateau, with an important tensile strength, have already been observed in [START_REF] Pilch | Final thermomechanical treatment of thin NiTi filaments for textile applications by electric current[END_REF] with Joule heating, but have never been obtained with conventional heat treatments to the knowledge of the authors. This property is of great interest for SMA engineering. From [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF], it is known that materials D and E are recrystallised into micrograins and that an important dislocation activity occurs. This observation is in good agreement with the strain hardening observed in Fig. 6e, because dislocations create large irreversible deformations with strain hardening of the material. In profile h of Fig. 6f, localisation is observed. This localisation is due to the pre-straining of the localised area during the first loading.
Conclusion
This study investigated the mechanical behaviour of cold worked NiTi wires heat treated with electropulses. From this study, one can conclude that millisecond electropulses are an efficient method to realise homogeneous heat treatments and make it possible to adapt the functional properties of cold worked NiTi SMAs. On a wire of diameter 0.5 mm and length 20 mm, and for a heat pulse of power P = 3000 W:
• A low duration heat treatment (1.5 ms) softens the initial cold worked material but preserves its elastic and brittle behaviour with a high ultimate tensile strength.
• A middle duration heat pulse (2-3 ms) restores the classical superelastic behaviour observed on NiTi SMAs. After the plateau, the material deforms linearly with a brittle behaviour and an important ultimate tensile strength, as the initial material: such a property cannot be obtained with classical heat treatments.
• A long duration heat pulse (4-5 ms) yields a superelastic behaviour with an important residual deformation. With cycling, the stress of the successive plateaus increases: such a property has never been observed with classical heat treatments. The residual deformation is due to the sliding of dislocation defects in the material. After the plateau, strain hardening is observed.
This study brings important information about the mechanical behaviour of cold worked NiTi SMA heat treated with millisecond electropulses.
Fig. 1. (a) Experimental set-up presentation. (b) Resulting material with local heat treatment.
Fig. 2. Evolution of (a) voltage U_W, (b) electrical current I and (c) power P during electropulse heat treatment.
Fig. 3. Radiation measured with an infrared camera. (a) Radiation at the end of heating for tests A to E. (b) Radiation during the cooling phase of test D.
Fig. 4. (a) P step (full lines) used to estimate the temperature T (dotted lines) for heat treatments A to F during the heating phase. (b) Temperature T evolution during the cooling phase.
Fig. 5. DSC of (a) material CW, (b) material A, (c) material B, (d) material C, (e) material D, and (f) material E.
Fig. 6. Stress-strain curves of (a) initial material and material A, (c) materials B and C and (e) materials D and E. Local strain fields for (b) material A, (d) material C and (f) material D.
Fig. 7. (a) Elastic moduli E_ini and E_end as a function of pulse duration. (b) Plateau stresses at loading (σ_high), unloading (σ_low) and hysteresis height (Δσ) as a function of pulse duration.
Table 1. Heat treatment parameters: duration, energy and estimated temperature.
Material  t (ms)  E (J)  Tmax (°C)
CW  0  0  -
A  1.44  4.3  380
B  2.27  6.8  570
C  3.08  11.2  760
D  3.99  12.1  970
E  5.04  14.1  1200
F  6.10  18.3  1440
Table 2. Comparison with the literature data from [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF].
Material  Sample name in Delville et al. (2010)  Microstructure  Grain size (nm)
CW  0 ms  Mainly austenite and amorphous  -
A  6 ms  Polygonised and amorphous  5-10
B  10 ms  Polygonised nanocrystalline  20-40
C  12 ms  Polygonised nanocrystalline  25-50
D  16 ms  Recrystallised  200-700
E  18 ms  Recrystallised  800-1200
Acknowledgement
The authors wish to acknowledge the financial support of the ANR research programme "Guidage d'une Aiguille Médicale Instrumentée - Déformable" (ANR-12-TECS-0019).
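As an aside on the temperature estimate of Eq. (1) in Section 2.1, a minimal explicit-Euler integration sketch is given below. It is only an illustration, not the authors' computation: the heat capacity is taken constant, and the density, convection coefficient and emissivity are placeholder values, whereas the original estimate used the temperature-dependent heat capacity of [Smith].

```python
import numpy as np

# Placeholder material and geometry data (not the values used in the paper).
rho = 6450.0               # NiTi density, kg m^-3
c_p = 500.0                # heat capacity, J kg^-1 K^-1 (assumed constant here)
d, L = 0.5e-3, 20e-3       # wire diameter and heated length, m
h = 50.0                   # convection coefficient, W m^-2 K^-1 (placeholder)
eps, sb = 0.3, 5.67e-8     # emissivity (placeholder) and Stefan-Boltzmann constant
T0 = 298.0                 # room temperature, K

m = rho * np.pi * d**2 / 4 * L    # wire mass
A = np.pi * d * L                 # lateral exchange surface

def wire_temperature(P, t_pulse, t_end=10e-3, dt=1e-6):
    """Explicit Euler integration of m*c_p*dT/dt = P(t) - A*h*(T-T0) - A*eps*sb*T^4."""
    n = int(t_end / dt)
    T = np.empty(n)
    T[0] = T0
    for k in range(1, n):
        P_k = P if k * dt <= t_pulse else 0.0          # step power pulse
        dTdt = (P_k - A * h * (T[k-1] - T0) - A * eps * sb * T[k-1]**4) / (m * c_p)
        T[k] = T[k-1] + dt * dTdt
    return T

T = wire_temperature(P=3000.0, t_pulse=1.44e-3)        # pulse A of Table 1
print(f"estimated T_max = {T.max() - 273.15:.0f} degC")
```

With a constant power pulse and constant properties, the predicted maximum temperature is essentially governed by the deposited energy P·t divided by m·c_p, which gives the right order of magnitude compared with Table 1.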
32,945
[ "14058", "172344", "172780" ]
[ "398528", "398528", "398528", "398528" ]
01480542
en
[ "phys" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01480542/file/rebouah2016.pdf
Marie Rebouah Grégory Chagnon Gregory Chagnon email: [email protected] Patrick Heuillet G Chagnon Anisotropic viscoelastic models in large deformation for architectured membranes Keywords: Viscoelasticity, Sphere unit model, Anisotropy, Stress-softening
Introduction
Depending on the process, rubber-like materials can be considered as initially isotropic or anisotropic. Even if isotropic behaviour is the most widespread, calender processing [START_REF] Itskov | A class of orthotropic and transversely isotropic hyperelastic constitutive models based on a polyconvex strain energy function[END_REF][START_REF] Diani | Directional model isotropic and anisotropic hyperelastic rubber-like materials[END_REF][START_REF] Caro-Bretelle | Constitutive modeling of a SEBS cast-calender: large strain, compressibility and anisotropic damage induced by the process[END_REF] generates anisotropy by creating a privileged direction in the material. The resulting differences in mechanical properties affect the stiffness, the stress softening or the viscoelastic properties. Even if rubber-like materials are isotropic, some induced anisotropy can be generated by the Mullins effect [START_REF] Diani | A review on the Mullins effect[END_REF] (Rebouah and Chagnon 2014b) for most materials. Numerous studies have dealt with the modelling of rubber-like materials. In large deformations, the viscoelasticity is tackled either by the Boltzmann superposition principle [START_REF] Green | The mechanics of non-linear materials with memory: part I[END_REF] (Coleman and Noll 1963), which leads to the K-BKZ models, or by internal variable models [START_REF] Green | A new approach for the theory of relaxing polymeric media[END_REF]. Many constitutive equations were proposed to describe different viscoelastic behaviours. The modelling of a calendered rubber sheet necessitates taking into account the initial anisotropy. Many constitutive equations have been developed to describe anisotropic hyperelastic behaviour [START_REF] Chagnon | Hyperelastic energy densities for soft biological tissues: a review[END_REF]; these equations were often initially developed to describe soft biological tissues. Different equations were also developed to describe viscoelasticity for orthotropic materials or materials having one or two reinforced directions [START_REF] Holzapfel | A structural model for the viscoelastic behavior of arterial walls: continuum formulation and finite element analysis[END_REF][START_REF] Bischoff | A rheological network model for the continuum anisotropic and viscoelastic behavior of soft tissue[END_REF][START_REF] Haslach | Nonlinear viscoelastic, thermodynamically consistent, models for biological soft tissue[END_REF][START_REF] Quaglini | A discrete-time approach to the formulation of constitutive models for viscoelastic soft tissues[END_REF][START_REF] Vassoler | A variational framework for fiber-reinforced viscoelastic soft tissues[END_REF]. These constitutive equations rely on the representation of the material as a matrix with different reinforced directions, inducing the anisotropy in the material. The viscoelastic constitutive equations are introduced in the fibre modelling and can also be introduced in the matrix modelling.
In a different way, [START_REF] Flynn | An anisotropic discrete fibre model based on a generalised strain invariant with application to soft biological tissues[END_REF] developed a discrete fibre model with dissipation for biological tissues. The model relies on a structural icosahedral model with six discrete fibres. These models do not correspond to calendered materials. A calendered material can be represented as a macromolecular network in which the repartition of macromolecules was not equiprobable in space. A way to treat this problem is to describe the material by a uniaxial constitutive equation integrated in space considering different orientations. Different formalisms were proposed [START_REF] Verron | Questioning numerical integration methods for microsphere (and microplane) constitutive equations[END_REF]. For soft tissues and rubber-like materials, the formalism proposed by [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF] is the most widespread. It was used to describe the stress softening or the viscoelasticity. [START_REF] Miehe | A micro-macro approach to rubber-like materials. Part ii: the micro-sphere model of finite rubber viscoelasticity[END_REF] were the first to use this formalism to describe the viscoelasticity but in an isotropic framework. The same formalism was later used by [START_REF] Diani | Observation and modeling of the anisotropic visco-hyperelastic behavior of a rubberlike material[END_REF], introducing induced anisotropy by the stress softening, but the viscoelasticity remained isotropic. Moreover, [START_REF] Rey | Hyperelasticity with rate-independent microsphere hysteresis model for rubberlike materials[END_REF] used this formalism to describe hysteresis loops but also in an isotropic approach. In fact, the discretisation in privileged directions is often used to induce anisotropy for hyperelasticity and stress softening, but not for viscoelasticity. In this paper, we propose to characterise the mechanical behaviour of initially anisotropic rubber-like materials. The experimental data will be used to adapt the formalism proposed by Rebouah and Chagnon (2014a) to describe the anisotropic viscoelasticity and stress softening of the material. Two rubber-like materials that possess an initial anisotropic behaviour are studied: first, a room temperature vulcanized silicone rubber which was made anisotropic by a stretching during reticulation, and second, a thermoplastic elastomer made anisotropic by the industrial process. In Part 2, the cyclic mechanical behaviour of the materials is described by means of a tensile test performed on specimens oriented in different directions in the plane of the membrane. In Part 3, the mono-dimensional constitutive equation is first described, and next the three-dimensional formulation is proposed. In Part 4, a discussion about the abilities of the constitutive equations to describe the two materials is proposed. Finally, a conclusion closes the paper. Experiments Materials and specimen geometry In this paper, two materials that possess an anisotropic mechanical behaviour are used, a silicone rubber (RTV3428) and a thermoplastic elastomer (TPE). They are detailed in the next paragraphs. RTV3428a An RTV3428 silicone rubber is used here, which was previously studied in other works [START_REF] Rey | Influence of the temperature on the mechanical behavior of two silicone rubbers above crystallization temperature[END_REF]). 
This material is initially isotropic and only has an anisotropy induced by Mullins effect [START_REF] Machado | Induced anisotropy by the Mullins effect in filled silicone rubber[END_REF]. It is proposed to modify its microstructure by changing the elaboration process to generate an initially anisotropic behaviour. The process is illustrated in Fig. 1. To obtain this anisotropic plate, two components are mixed first. The mixture is then put into a vacuum pump and finally injected into a mould. The mould is put into the oven at 70 °C for 22 minutes. The crosslinking of the obtained membrane is not fully performed after being removed from the mould. Next the membrane is installed in a clipping system made of two jaws and applying a constant displacement between the two extremities of the membrane (as represented in the fifth step in Fig. 1). The global deformation of the membrane in the system is about 60 %. The system is put into the oven at a temperature of 150 °C for two hours. The new obtained material is named RTV3428a. This process generates a preferential orientation of the macromolecular chains in the material. TPE Different processes can be used to manufacture TPE [START_REF] Caro-Bretelle | Constitutive modeling of a SEBS cast-calender: large strain, compressibility and anisotropic damage induced by the process[END_REF]. In this study, an industrial material provided by Laboratoire de recherches et de contrôle du caoutchouc et des plastiques (LRCCP) is used. This material is obtained by means of an injection process which gives it a predominant direction and makes it initially anisotropic. Specimen geometry Each material is initially elaborated as a membrane as illustrated in Fig. 2. The RTV3428a membrane dimensions after the second vulcanization are 150 mm in length, 70 mm in width and 1.6 mm in thickness; for the TPE the membrane dimensions are 150 mm in length, 100 mm in width and 2 mm in thickness. For each material tensile test samples (20 mm long and 2 mm wide) are cut in the middle of the membrane to avoid edge effects. These specimens are cut with different angles 0°, 45°and 90°compared to the preferential direction of the material, considering that 0°matches the preferential direction imposed to the macromolecular chains for both processes as illustrated in Fig. 2. Loading conditions Mechanical tests were realised with a Gabo Eplexor 1500 N mechanical test machine with a load cell of 50 N. Samples were submitted to a cyclic loading, two cycles up to a stretch λ = 1.5, two cycles up to λ = 2, and finally two cycles up to λ = 2.5. The tests were carried out at a strain rate of 0.016 s -1 . The loading history is detailed at the top of the Fig. 3 and Fig. 4. Results Figure 3 presents the results of the test for the three samples cut from the RTV3428a. The three samples do not have the same mechanical behaviour, and several phenomena are observed. First, to evaluate the amount of anisotropy, an anisotropic factor ξ is defined as the ratio of stresses for a stretch λ = 2.5 for different orientations as ξ = σ 0 • (λ = 2.5)/σ 90 • (λ = 2.5). It permits qualitatively quantifying the anisotropy of the two materials. For the RTV3428a an anisotropic factor of approximately 1.3 can be calculated between the sample cut at 0°and the one cut at 90°. This emphasises that the second vulcanization undergone by the membrane modifies the microstructure of this filled silicone and is efficient to generate anisotropy in the silicone rubber. 
The sample cut at 0° (which is the same direction as the loading direction imposed during the second vulcanization) shows the most important stress hardening compared to 45° and 90°, the latter being the softest specimen. This test also highlights that the material has few viscous effects and little permanent set, even at a slow strain rate. Stress softening is still the major non-linear effect associated with the mechanical behaviour. Figure 4 presents the results of the same test obtained for the three samples of the TPE material. As before, the anisotropic factor ξ can be evaluated and is approximately equal to 1.5. As for the RTV3428a, stress softening, hysteretic behaviour and permanent set are also observed. It is worth noting that the stress softening and the permanent set are very large. The observed phenomena are the same for the two materials. They both have an anisotropic mechanical behaviour with viscoelasticity, stress softening and permanent set for any loading direction. All the phenomena are even more amplified for the TPE than for the RTV3428a. The stress softening of the material between the first and the second cycles seems to be the same for any direction; this suggests that the stress softening is not affected by the initial anisotropy. On the contrary, the stiffness and the viscoelastic properties are modified for the two materials depending on the direction.
Constitutive equations
This section aims to detail the constitutive equation developed to describe the mechanical behaviour observed experimentally for both materials. This constitutive equation must take into account the anisotropy, the stress softening and the viscoelastic effects (including the permanent set) undergone by the materials. It is worth noting that the material is considered as a homogeneous structure and not as a matrix with reinforced fibres, as is classically done to represent anisotropic materials (see, for example, [START_REF] Peña | Mechanical characterization of the softening behavior of human vaginal tissue[END_REF][START_REF] Natali | Biomechanical behaviour of oesophageal tissues: material and structural configuration, experimental data and constitutive analysis[END_REF]). The constitutive equation relies on the representation of space by an integration of a uniaxial formulation by means of the 42 directions of [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF]:
$$\sigma = \sum_{i=1}^{42} \omega^{(i)} \sigma^{(i)}\, a_n^{(i)} \otimes a_n^{(i)} \qquad (1)$$
where $a_n^{(i)}$ are the normalized deformed directions, $\omega^{(i)}$ the weight of each direction and $\sigma^{(i)}$ the stress in the considered direction. The directions are represented in Fig. 6 (representation of a microsphere with a spatial repartition of 2 × 21 directions proposed by [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF]). The idea of the modelling is to propose the constitutive equation in tension-compression for each direction, i.e. $\sigma^{(i)}$. A classical rheological scheme is used to model viscoelasticity; the first scheme is illustrated in Fig. 5. The deformation gradient F is decomposed into an elastic part F_e and an inelastic part F_i between the initial configuration (C_0) and the instantaneous configuration (C_t). The formalism proposed by [START_REF] Huber | Finite deformation viscoelasticity laws[END_REF] is used.
An application of the Second Principle of Thermodynamics leads to the equation of dissipation D_int, namely
$$D_{int} = \sigma : D - \dot{W} \geq 0 \qquad (2)$$
where W is the strain energy (it can be decomposed into two parts, W_1 for the elastic branch of the model and W_2 for the inelastic branch of the model), σ is the Cauchy stress tensor and D is the rate of deformation tensor. [START_REF] Huber | Finite deformation viscoelasticity laws[END_REF] proved that the sufficient condition to verify is
$$2\, F_e \frac{\partial W}{\partial B_e} F_e^T : D_i \geq 0. \qquad (3)$$
The indices e and i refer to the elastic and the inelastic parts of the model. As [START_REF] Huber | Finite deformation viscoelasticity laws[END_REF] detailed the simplest sufficient condition to satisfy, this equation is chosen as:
$$\dot{B}_e = L B_e + B_e L^T - \frac{2}{\eta_0}\left(B_e \frac{\partial W_2}{\partial B_e} B_e\right)^D \qquad (4)$$
where B is the left Cauchy-Green tensor, L is the velocity gradient equal to $\dot{F}F^{-1}$ and $(\cdot)^D$ stands for the deviatoric part of the tensor. Any constitutive equation can be used; here, each spring of the model is modelled by a neo-Hookean model [START_REF] Treloar | The elasticity of a network of long chain molecules (I and II)[END_REF], and $C_1^{(i)}$ and $C_2^{(i)}$ are the material parameters of the two branches. The index i denotes that this model will be used in every direction of the microsphere decomposition. To be used, the governing equation must be written in uniaxial extension as
$$\dot{\lambda}_e^{(i)} = \lambda_e^{(i)}\, \frac{\dot{\lambda}^{(i)}}{\lambda^{(i)}} - \frac{4 C_2^{(i)}}{3 \eta_0^{(i)}}\left(\lambda_e^{(i)\,3} - 1\right) \qquad (5)$$
where $\lambda^{(i)}$ and $\lambda_e^{(i)}$ are the stretch and the elastic part of the stretch in the considered direction, and $\eta_0^{(i)}$ is a material parameter associated with each direction. To take into account the stress softening phenomenon, a non-linear spring is added to the previous rheological scheme, as illustrated in Fig. 6. [START_REF] Rebouah | Anisotropic Mullins softening of a deformed silicone holey plate[END_REF] proposed using an evolution function $F^{(i)}$ that records the loading history of each direction; this function alters the stiffness of the non-linear spring:
$$F^{(i)} = 1 - \eta_m^{(i)}\, \frac{I_{1max} - I_1}{I_{1max} - 3}\; \frac{I_{4max}^{(i)} - I_4^{(i)}}{I_{4max}^{(i)} - 1}\; \left(\frac{I_{4max}^{(i)}}{I_{4max}}\right)^4 \qquad (6)$$
where $\eta_m^{(i)}$ is a material parameter, $I_1$ is the first invariant of the Cauchy-Green tensor and $I_4^{(i)}$ is the fourth invariant associated with direction i. The term $I_{4max}^{(i)}$ is the maximum value reached at the current time for each direction, and $I_{4max}$ is the maximum value of $I_4$ over all directions. Summing the viscoelastic part and the stress softening part enables us to define the stress in direction i. Considering that each direction endures only tension-compression, an incompressibility hypothesis is used to write the stress in each direction as
$$\sigma^{(i)} = 2 C_1^{(i)}\left(\lambda^{(i)\,2} - \frac{1}{\lambda^{(i)}}\right) + 2 C_2^{(i)}\left(\lambda_e^{(i)\,2} - \frac{1}{\lambda_e^{(i)}}\right) + 2 \lambda^{(i)\,2} F^{(i)} \frac{\partial W_{cf}^{(i)}}{\partial I_4^{(i)}} \qquad (7)$$
where $W_{cf}^{(i)}$ is the strain energy of the material oriented in direction i for the stress softening part. Due to the differences observed experimentally between both materials, the strain energy $W_{cf}^{(i)}$ used to describe the RTV3428a and the TPE is different:
• The isotropic RTV3428 silicone rubber was already studied by [START_REF] Rebouah | Anisotropic Mullins softening of a deformed silicone holey plate[END_REF], and the same strain energy function was used in that study, namely
$$W_{cf}^{(i)} = K^{(i)}\left(I_4^{(i)} - 1\right)^2 \qquad (8)$$
where $K^{(i)}$ is a material parameter.
• The TPE has a very smooth behaviour with no strain hardening; as a consequence, a square root function is chosen (Rebouah and Chagnon 2014b):
$$W_{cf}^{(i)} = \frac{K^{(i)}}{2}\int \frac{\sqrt{I_4^{(i)}} - 1}{I_4^{(i)}}\, \mathrm{d}I_4^{(i)} \qquad (9)$$
where $K^{(i)}$ is a material parameter.
The global dissipation of the model is obtained by summing the dissipation of each direction. As each dissipation is positive by construction of the evolution equation, the global dissipation is also positive. To conclude, five parameters in each direction are used to handle the anisotropy of the material: $C_1^{(i)}$ for the hyperelastic part, $C_2^{(i)}$ and $\eta_0^{(i)}$ for the viscoelastic part, and $K^{(i)}$ and $\eta_m^{(i)}$ for the stress softening part. The model needs the integration of a differential equation (Eq. (5)). An implicit algorithm is used to determine the equilibrium solution in each direction.
Comparison with experimental data
4.1 Parameter identification strategy
The model has five material parameters in each direction. It is important not to fit all the material parameters globally, but to impose some restrictions based on the experimental observations:
• First, the stiffness of the material is different depending on the direction (0°, 45° and 90°); this stiffness is principally controlled by the parameters governing the hyperelasticity of the constitutive equation, $C_1^{(i)}$ and $K^{(i)}$; these parameters must be different in the different directions.
• Second, no significant difference was observed for the stress softening for the three orientations, i.e. the difference between the first and second loadings. Thus, the material parameter describing the stress softening, $\eta_m^{(i)}$, is chosen to be independent of the direction.
• Third, as exposed by [START_REF] Petiteau | Large strain rate-dependent response of elastomers at different strain rates: convolution integral vs. internal variable formulations[END_REF] and Rebouah and Chagnon (2014b), the hysteresis loop size depends both on the elastic parameter, $C_2$, and the time parameter, $\eta_0$. As the hysteresis loops are very similar, but at different stress levels, the governing parameter is $C_2^{(i)}$. As a consequence, it is chosen to impose the same $\eta_0$ in all the directions.
It was experimentally observed that the stress is maximal for the privileged direction of the fabrication process. The variation of the mechanical parameters according to the spatial repartition enables us to increase or decrease the initial anisotropy of the material. According to the representation of the spatial repartition of [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF], as illustrated in Fig. 6, the directions of the microsphere closest to the preferential direction induced by the process (i.e., direction 1 in Fig. 6) are the directions with the largest material parameter values, and the values are minimum for the orthogonal directions 2 and 3 (of the microsphere). The values of the parameters are the same for the directions which are symmetrical with respect to the privileged direction of the sphere unit (direction 1). The values for the intermediary directions of the microsphere are obtained according to their relative position compared to direction 1. The material parameters are supposed to vary linearly between the two extrema. All these choices permit us to avoid non-physical responses of the model for other loading conditions.
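To illustrate how Eqs. (5)-(7) combine numerically in one direction, a minimal sketch is given below. It is only an illustration, not the authors' implementation: it uses an explicit time integration of Eq. (5) instead of the implicit algorithm mentioned above, the quadratic energy of Eq. (8), and placeholder parameter values; the weights and unit directions of the Bažant and Oh scheme, needed to assemble the macroscopic stress of Eq. (1), are assumed to be provided by the user.

```python
import numpy as np

def direction_stress(lam_t, dt, C1, C2, K, eta0, eta_m):
    """Uniaxial stress history in one microsphere direction (illustrative sketch).

    lam_t : array of stretches lambda^(i)(t) imposed in the considered direction.
    Returns the stress sigma^(i)(t) of Eq. (7).
    """
    lam_e = 1.0                       # elastic part of the stretch, Eq. (5)
    I1_max, I4_max = 3.0, 1.0         # history variables of Eq. (6)
    sigma = np.zeros_like(lam_t)
    for k in range(1, len(lam_t)):
        lam = lam_t[k]
        lam_dot = (lam - lam_t[k - 1]) / dt
        # explicit Euler step of the evolution equation (5)
        lam_e += dt * (lam_e * lam_dot / lam - 4 * C2 / (3 * eta0) * (lam_e**3 - 1))
        # invariants for an incompressible uniaxial state in this direction
        I1, I4 = lam**2 + 2.0 / lam, lam**2
        I1_max, I4_max = max(I1_max, I1), max(I4_max, I4)
        # evolution function of Eq. (6); in this single-direction illustration the
        # global maximum of I4 is taken equal to the directional one, so the last
        # factor of Eq. (6) equals 1
        F = 1.0 - eta_m * ((I1_max - I1) / (I1_max - 3.0 + 1e-12)
                           * (I4_max - I4) / (I4_max - 1.0 + 1e-12))
        dW_dI4 = 2.0 * K * (I4 - 1.0)                  # from Eq. (8)
        sigma[k] = (2 * C1 * (lam**2 - 1 / lam)
                    + 2 * C2 * (lam_e**2 - 1 / lam_e)
                    + 2 * lam**2 * F * dW_dI4)         # Eq. (7)
    return sigma

# Placeholder loading: one load-unload cycle up to lambda = 1.5 in one direction.
t = np.linspace(0.0, 60.0, 2001)
dt = t[1] - t[0]
lam_hist = 1.0 + 0.5 * np.sin(np.pi * t / 60.0)
s = direction_stress(lam_hist, dt, C1=0.2, C2=0.3, K=0.1, eta0=200.0, eta_m=4.0)
```

The macroscopic stress of Eq. (1) would then be assembled as the weighted sum of the 42 directional stresses projected on their dyadic products $a_n^{(i)} \otimes a_n^{(i)}$.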
RTV3428a
According to the limitations detailed above, the material parameters are fitted for the three samples with different orientations with respect to the principal direction. The material parameters whose values are independent of the direction are η_m = 4 and η_0 = 200 MPa s⁻¹. The values of the other parameters are listed in Table 1. Figure 7 presents a comparison between the experimental and theoretical tests for the three samples of RTV3428a with different orientations. The stiffness of the material is well described for any direction. The viscoelastic effects are also well described for the three directions. A difference can be observed for the second loading curves at the maximum stretch λ = 2.5. This error corresponds to the stress softening part of the model. This could be improved by modifying the form of Eq. (6) by imposing a more important loss of stiffness. Nevertheless, the model is able to globally describe the anisotropic mechanical behaviour of the RTV3428a material.
TPE
As before, the material parameters of the TPE are fitted to the tensile tests of the three specimens with different orientations. The material parameters whose values are independent of the direction are η_0 = 500 MPa s⁻¹ and η_m = 8. The values of the other parameters are listed in Table 1 and are obtained by the same strategy as the one described for the RTV3428a. Figure 8 presents a comparison of the model with the experimental data. The variations of the stiffness of the material with the directions are well described. Nevertheless, the model is not able to describe very large hysteresis loops and very important stress softening. Important differences are observed for the model according to the direction, but the size of the hysteresis is underestimated. This is due to the form of the constitutive equations that were chosen. Only neo-Hookean constitutive equations were used in the viscoelastic part, and it is well known that this model cannot describe large variations. Moreover, the governing equation of the viscoelasticity (i.e., Eq. (5)) is a very simple equation that also cannot take into account large non-linearities of the mechanical behaviour. Nevertheless, even if the proposed model is a first approach written with simple constitutive elements, all the phenomena are qualitatively described. The limits correspond to the limits of each part of the constitutive equation.
Discussion
The model succeeds in depicting the anisotropic viscoelastic behaviour with stress softening of architectured membranes. The use of different material parameters in the different directions leads to an important number of parameters. A global fit of the parameters could lead to parameter values with no physical meaning. By analysing the experimental data, a strategy was proposed to fit the material parameters. As exposed in the Introduction, constitutive equations developed in the literature to model anisotropic viscoelasticity often rely on an isotropic matrix reinforced with some viscoelastic fibres. These models were principally elaborated for soft biological tissues and could be applied in a phenomenological approach to the two materials tested in this paper. It would consist in considering the rubber as a soft matrix having the mechanical properties of the soft direction, reinforced by fibres in the predominant direction of the material.
Even if this approach were to succeed in describing the material, it would not characterise the macromolecular network of the material. All the equations in the literature used to model viscoelasticity can be written in tension-compression and introduced into the present model by replacing Eq. (5). This would permit us to represent non-linear viscoelasticity.
Conclusion
This paper developed a study of anisotropic materials. Two micro-mechanical architectured materials were obtained in two different ways. A silicone rubber was turned anisotropic by applying a deformation state during the second reticulation, and an initially anisotropic injected TPE was used. In both cases an orientation of the macromolecular chains was imposed on the material to create a microstructural architecture. The anisotropic membranes were tested in their plane, highlighting their anisotropic mechanical behaviour. A three-dimensional equation was obtained by considering the integration in space of a uniaxial equation with the 42 directions of [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF]. This equation describes hyperelasticity, viscoelasticity and stress softening. In this first approach, we chose to use simple constitutive equations to prove the feasibility of the method. As a consequence, a neo-Hookean model was chosen to describe the elasticity and a simple linear equation for the viscoelasticity. The anisotropy was obtained by considering that the material parameters could be different in all directions. Nevertheless, we chose to limit the variations of parameter values depending on the directions. This permitted us to limit the number of independent mechanical parameters. It was even possible to use different parameter values in every direction. It appears that the model succeeded in qualitatively describing all the phenomena. When the phenomena (stress softening, hysteresis) were not too large, the model also succeeded in quantitatively describing the tests. Some errors between the experimental data and the model appeared when the phenomena became too large. This was due to the use of simple elements for hyperelasticity and viscoelasticity. Indeed, more robust constitutive equations for hyperelasticity, stress softening and viscoelasticity should be used when the phenomena are very large. For instance, the neo-Hookean hyperelastic equation should be replaced by a model accounting for stress hardening, or the viscoelastic equation should be replaced by a non-linear one as in, e.g., [START_REF] Bergstrom | Constitutive modeling of the large strain time dependant behavior of elastomers[END_REF].
Fig. 1 Elaboration of the microstructural architectured membrane of filled silicone
Fig. 2 Representation of the membrane and the samples oriented in different directions for the RTV3428a and the TPE materials
Fig. 3 Cyclic tensile test performed on the RTV3428a architectured membrane
Fig. 4 Cyclic tensile test performed on the TPE architectured membrane
Fig. 5 Definition of the configurations and of the rheological modelling
Fig. 7 Comparison with the experimental data of the cyclic tensile test on RTV3428a for the different samples with different orientations
Fig. 8 Comparison with the experimental data of the cyclic tensile test on TPE for the three samples with different orientations
Table 1 Material parameters for the RTV3428a and the TPE
Acknowledgements This work is supported by the French National Research Agency Program ANR-12-BS09-0008-01 SAMBA (Silicone Architectured Membranes for Biomedical Applications).
27,644
[ "14058" ]
[ "1042068", "1042068", "492412" ]
01480510
en
[ "phys" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01480510/file/chagnon2014.pdf
G Chagnon M Rebouah D Favier Hyperelastic Energy Densities for Soft Biological Tissues: A Review Many soft tissues are naturally made of a matrix and fibres that present some privileged directions. They are known to support large reversible deformations. The mechanical behaviour of these tissues highlights different phenomena as hysteresis, stress softening or relaxation. A hyperelastic constitutive equation is typically the basis of the model that describes the behaviour of the material. The hyperelastic constitutive equation can be isotropic or anisotropic, it is generally expressed by means of strain components or strain invariants. This paper proposes a review of these constitutive equations. Introduction Soft tissues are composed of several layers, each one of these layers has different compositions. It is considered that four typical tissues exist: epithelial tissue, connective tissue, muscular tissue and neuronal tissue [156]. For the mechanical studies on soft tissues the connective tissues are often considered as the most important from a mechanical point of view [START_REF] Epstein | Isolation and characterisation of cnbr peptides of human [α 1 (iii)] 3 collagen and tissue distribution [α 1 (i)] 2 α 2 and [α 1 (iii)[END_REF]156,177]. They are composed of cells and of extra cellular matrix. The extra cellular matrix is composed of ground substance and of three types of fibres: collagen, reticular and elastic fibres. Collagen fibres are often considered as more important than others, particularly because of their large size, and represent most of the mechanical behaviour. The reticular fibres, which are thin collagen fibres with different chemical properties, allow creating ramifications with the collagen fibres. Finally the elastic fibres mainly composed of elastin present a purely elastic behaviour and are also linked to the collagen fibres. The elastic properties of soft tissues are mainly due to these fibres. Soft tissues are often able to support large deformations. The first mechanical study of soft tissues started in 1687 with Bernoulli experiments on gut. The first constitutive equation was proposed in 1690 by Leibniz, before Bernoulli and Riccati proposed other equations [START_REF] Bell | The Experimental Foundations of Solid Mechanics, Mechanics of Solids[END_REF]. Since these works, many experimental studies have been performed. As an illustration, some experimental data can be found, not exhaustively, in the literature for arteries [209,262], aortic valve tissues [162], veins [START_REF] Alastrué | Experimental study and constitutive modelling of the passive mechanical properties of the ovine infrarenal vena cava tissue[END_REF], vaginal tissues [196], anterior malleolar ligament [START_REF] Cheng | Mechanical properties of anterior malleolar ligament from experimental measurement and material modeling analysis[END_REF], muscles [START_REF] Gras | Hyper-elastic properties of the human sternocleidomastoideus muscle in tension[END_REF], human trachea [254], cornea [235], skin [START_REF] Groves | An anisotropic, hyperelastic model for skin: experimental measurements, finite element modelling and identification of parameters for human and murine skin[END_REF] or gallbladder walls [143]... Even if many soft tissues are studied, the largest database in the literature concerns arteries. 
Soft tissues present a complex behaviour with many non-linear phenomena as explained by different authors [118,124] as the time dependency [START_REF] Bischoff | A rheological network model for the continuum anisotropic and viscoelastic behavior of soft tissue[END_REF]202] or the stress softening phenomenon [154,198], i.e., their mechanical behaviour mainly depends on time and on the maximum deformation previously endured. Most of soft tissues dissipate energy when loading, nevertheless, the elastic behaviour generally dominates their behaviour and it represents the asymptotic behaviour when the dissipation diminishes to zero. In this way, in a first approach, most of the soft tissues are described in the context of hyperelasticity [START_REF] Harb | A new parameter identification method of soft biological tissue combining genetic algorithm with analytical optimization[END_REF]149,246]. To take into account the fibrous structure of the soft tissues, anisotropic formalism is introduced. The diversity among the mechanical characteristics of soft tissues has motivated a great number of constitutive formulations for the different tissue types. For example, the reader is referred to [224], wherein the author treats the history of biaxial techniques for soft planar tissues and the associated constitutive equations. Anisotropic hyperelasticity can be modeled by using the components of the strain tensor or by the use of strain invariants. The two formulations permit the development of different families of anisotropic strain energy densities. Soft tissues are numerous and present different tissue architectures that lead to various anisotropy degrees, i.e., difference of mechanical behaviour in each direction, and different maximum admissible deformation. In this way, many constitutive equations are proposed to describe the tissues. The aim of this paper is to propose a review of most of the hyperelastic strain energy densities commonly used to describe soft tissues. In a first part, the different formalisms that can be used are recalled. In a second part, the isotropic modelling is described. In a third part, the anisotropic modelling is presented. The deformation tensor component approach based on Fung's formulation is briefly presented, and invariant approaches are detailed. In a fourth part, the statistical approaches, considering the evolution of the collagen network, are described. Last, a discussion about the models closes the paper. Mechanical Formulation Description of the Deformation Deformations of a material are classically characterised by right and left Cauchy-Green tensors defined as C = F T F and B = FF T , where F is the deformation gradient. In the polar decomposition of F, the principle components of the right or left stretch tensors are called the stretches and are denoted as λ i with i = 1..3. The Green-Lagrange tensor is defined as E = (C -I)/2, where I is the identity tensor, and its components are denoted as E ij with i, j = 1...3. Nevertheless, some prefer to use the logarithmic strains e i = ln(λ i ), instead of a strain tensor, generalised strains as e i = 1 n (λ n i -1) [185], or others measures as, for example, e i = λ i λ 2 j λ 2 k with j = i and k = i [START_REF] Gilchrist | Generalisations of the strain-energy function of linear elasticity to model biological soft tissue[END_REF]; all these measures are written in their principal basis. 
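For illustration, the following short Python sketch computes these quantities from a given deformation gradient; the numerical values of F are an arbitrary example (an isochoric extension combined with a simple shear) and are not taken from any experiment.

```python
import numpy as np

# Arbitrary illustrative deformation gradient (extension + simple shear, isochoric)
F = np.array([[1.3, 0.2, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0 / (1.3 * 0.9)]])

C = F.T @ F                    # right Cauchy-Green tensor
B = F @ F.T                    # left Cauchy-Green tensor
E = 0.5 * (C - np.eye(3))      # Green-Lagrange strain tensor

# Principal stretches: square roots of the eigenvalues of C
lam = np.sqrt(np.linalg.eigvalsh(C))
log_strain = np.log(lam)       # logarithmic (Hencky) principal strains

print("stretches          :", np.round(lam, 4))
print("logarithmic strains:", np.round(log_strain, 4))
```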
Instead of using directly the strain tensors, strain invariants are often preferred as they have the same values whatever the basis is. From an isotropic point of view, three principal strain invariants I 1 , I 2 and I 3 are defined by I 1 = tr(C), (1) I 2 = 1 2 tr(C) 2 -tr C 2 , ( 2 ) I 3 = det(C), (3) where "tr" is the trace operator, and "det" the determinant operator. Characteristic directions corresponding to the fibre orientations must be defined. For one material, one or many material directions (the number of directions is noted q) can be defined according to the architecture of the considered tissue. In the undeformed state, the ith direction is noted N i in the initial configuration. The norm of the vector N i is unit. Due to material deformation, the fibre orientations are evolving in the deformed state. The current orientation is defined by n (i) = FN (i) . (4) Note that n (i) is not a unit vector. Two orientation tensors can be defined, one in the undeformed and another in the deformed state: A (i) = N (i) ⊗ N (i) , a (i) = n (i) ⊗ n (i) . ( 5 ) The introduction of such directions lead to the definition of new invariants related to each direction. The invariant formulation of anisotropic constitutive equations is based on the concept of structural tensors [START_REF] Boehler | Applications of Tensor Functions in Solid Mechanics[END_REF][START_REF] Boehler | A simple derivation of representations for non-polynomial constitutive equations in some cases of anisotropy[END_REF]238,239,241,243]. 1 The invariant I 4 and I 5 can be defined for one direction i as I (i) 4 = tr CA (i) = N (i) • CN (i) , and I (i) 5 = tr C 2 A (i) = N (i) • C 2 N (i) . ( 6 ) In practice, some prefer to use the cofactor tensor of F, i.e., Cof(F), [120] and to define J (i) 5 = tr(Cof(C)A (i) ), in order to easily ensure the polyconvexity of the strain energy (see Sect. 2.3). In the literature, in the case of two fibre directions (1) and (2), a notation I 4 and I 6 is often used for soft tissues [108] instead of I (1) 4 and I (2) 4 (or I 5 and I 7 instead of I (1) 5 and I (2) 5 ). In this paper, it is preferred to keep only I (i) 4 notation and to generalise the notation to n directions. These invariants depend only on one direction but it is possible to take into account the interaction between different directions, by introducing a coupling between directions i and j by means of two other invariants: I (i,j ) 8 = N (i) • N (j ) N (i) • CN (j ) , and I (i,j ) 9 = N (i) • N (j ) 2 . ( 7 ) I (i,j ) 9 is constant during deformation, thus it is not adapted to describe the deformation of the material but it represents the value of I (i,j ) 8 for zero deformation. Let us denote I k as the invariants family (I 1 , I 2 , I 3 , I (i) 4 , I (i) 5 , I (i,j ) 8 , I (i,j ) 9 ) and J k as the invariants family 1 Details about the link between structural tensors and a method to link a fictitious isotropic configuration to render an anisotropic, undeformed reference configuration via an appropriate linear tangent map is given in [163]. (I 1 , I 2 , I 3 , I (i) 4 , J (i) 5 ). When only one direction is considered, the superscript (i) is omitted in the remainder of this paper. The I k invariants are the mostly used invariants in the literature, although other invariants have been proposed. Some authors [START_REF] Ciarletta | Stiffening by fiber reinforcement in soft materials: a hyperelastic theory at large strains and its application[END_REF] propose to use invariants that are zero at zero deformation. 
In this way, they introduce the tensor G = H T H, with H = 1 2 (F -F T ). This motivates the definition of a new class of invariants I k : ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ I 1 = tr(G), I 2 = tr G 2 , I 4 = tr GA (i) , I 5 = tr G 2 A (i) . ( 8 ) Ericksen and Rivlin [START_REF] Ericksen | Large elastic deformations of homogeneous anisotropic materials[END_REF] proposed another formulation, adapted to transversely isotropic materials only, characterised by a vector N (i.e., only one direction i). This direction often corresponds to a fibre reinforced direction. Their work was further used by different authors [START_REF] Agoras | A general hyperelastic model for incompressible fiber-reinforced elastomers[END_REF][START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF][START_REF] Debotton | Neo-Hookean fiber-reinforced composites in finite elasticity[END_REF][START_REF] Debotton | Mechanics of composites with two families of finitely extensible fibers undergoing large deformations[END_REF] who proposed to define other invariants (λ p , λ n , γ n , γ p , ψ γ ), denoted as Cr k . They can be expressed as a function of the I k invariants: ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ λ 2 p = I 3 I 4 , λ 2 n = I 4 , γ 2 n = I 5 I 4 -I 4 , γ 2 p = I 1 - I 5 I 4 -2 I 3 I 4 , tan 2ψ γ = 2λ p H + / -γ p γ 4 n γ 2 p (4λ 2 p + γ 2 p ) -H 2 λ p H + / -2λ p γ 4 n γ 2 p (4λ 2 p + γ 2 p ) -H 2 , ( 9 ) with H = (2λ 2 n + γ 2 n )(2λ 2 p + γ 2 p ) + 2λ 4 p -2I 2 . The advantage is that these invariants have a physical meaning. λ n is the measure of stretch along N, λ p is a measure of the in-plane transverse dilatation, γ n is a measure of the amount of out-of-plane shear, γ p is the amount of shear in the transverse plane, and ψ γ is a measure of the coupling among the other invariants. Criscione et al. [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF] criticised these invariants for not being zero for zero deformation, as is the corresponding strain tensors. They proposed to use the β k invariants: ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ β 1 = ln I 3 2 , β 2 = 3 ln I 4 -ln I 3 4 , β 3 = ln I 1 I 4 -I 5 2 √ I 3 I 4 + I 1 I 4 -I 5 2 √ I 3 I 4 2 -1 , β 4 = I 5 I 2 4 -1, β 5 = I 1 I 4 I 5 + I 1 I 3 4 + 2I 3 I 4 -I 2 5 -2I 2 I 2 4 -I 5 I 2 4 (I 5 -I 2 4 ) I 2 1 I 2 4 + I 2 5 -2I 1 I 4 I 5 -4I 3 I 4 . ( 10 ) These invariants also have a physical meaning. β 1 is the logarithmic volume strain, β 2 specifies a fibre strain of distortion, β 3 specifies the magnitude of cross-fibre, i.e., pure shear strain, β 4 specifies the magnitude of along fibre strain, i.e., simple shear strain and β 5 specifies the orientation of the along fibre shear strain relative to the cross-fibre shear strain. These last two families of invariants were developed for a one fibre direction material; it can easily be generalised to q directions but this has not yet been used yet in the literature. All these invariants are useful. In practice, the I k are the most used and the other invariants are not often used for calculations in finite element software. But, as they can be written by means of the I k invariants, all the expressions can be deduced from these invariants. As a consequence in this work, the theoretical development is only presented for the I k formulation. 
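To make these definitions concrete, a minimal Python sketch is given below. The deformation gradient and the two fibre directions N^(1) and N^(2) are arbitrary illustrative choices; only the I_k family (Eqs. (1)-(3), (6) and (7)) is evaluated.

```python
import numpy as np

F = np.array([[1.2, 0.1, 0.0],
              [0.0, 1.1, 0.0],
              [0.0, 0.0, 1.0 / (1.2 * 1.1)]])   # illustrative, isochoric
C = F.T @ F

# Two illustrative unit fibre directions in the reference configuration
N1 = np.array([1.0, 0.0, 0.0])
N2 = np.array([np.cos(np.radians(40.0)), np.sin(np.radians(40.0)), 0.0])

I1 = np.trace(C)                                   # Eq. (1)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))      # Eq. (2)
I3 = np.linalg.det(C)                              # Eq. (3)
I4_1, I4_2 = N1 @ C @ N1, N2 @ C @ N2              # Eq. (6), one per direction
I5_1, I5_2 = N1 @ C @ C @ N1, N2 @ C @ C @ N2
I8_12 = (N1 @ N2) * (N1 @ C @ N2)                  # Eq. (7), direction coupling
I9_12 = (N1 @ N2)**2

print(f"I1={I1:.4f}  I2={I2:.4f}  I3={I3:.4f}")
print(f"I4={I4_1:.4f}/{I4_2:.4f}  I5={I5_1:.4f}/{I5_2:.4f}  I8={I8_12:.4f}  I9={I9_12:.4f}")
```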
Strain-Stress Relationships Living tissues are often considered as incompressible. To use constitutive equations in finite element codes, a volumetric/isochoric decomposition is used. All the equations are written using the pure incompressibility hypothesis in order to avoid any non-physical response of these equations [100], but some details about the consequences of the volumetric-isochoric choice split is detailed in [227]. Nevertheless, they can be written in a quasi-incompressible framework by means of the incompressible invariants Īk : ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ Ī1 = I 1 I -1/3 3 , Ī2 = I 2 I -2/3 3 , Ī4 (i) = I (i) 4 I -1/3 3 , Ī5 (i) = I (i) 5 I -2/3 3 , Ī8 (i,j ) = I (i,j ) 8 I -1/3 3 . ( 11 ) This formulation is particularly useful for finite element implementation. All the equations for the elasticity tensor can be seen in different papers [START_REF] Bose | Computational aspects of a pseudo-elastic constitutive model for muscle properties in a soft-bodied arthropod[END_REF]133,145,152,195,271]. In this case, a penalty function depending on I 3 is used to ensure incompressibility. One can refer to [START_REF] Doll | On the development of volumetric strain energy functions[END_REF] for a comparison of the different functions classically used. The choice of the penalty parameter to ensure incompressibility [253] is a critical issue. In this paper, all the constitutive equations are written in the purely incompressible framework, but all the models can be established in the quasi-incompressible framework as well. The second Piola-Kirchhoff stress tensor can be directly calculated by derivation of the strain energy function W (I 1 , I 2 , I (i) 4 , I (i) 5 , I (i,j ) 8 , I (i,j ) 9 ), with i, j = 1..q: S = 2 (W ,1 + I 1 W ,2 )I -W ,2 C + q i W (i) ,4 N (i) ⊗ N (i) + q i W (i) ,5 N (i) ⊗ CN (i) + N (i) C ⊗ N (i) + i =j W (i,j ) ,8 N (i) • N (j ) N (i) ⊗ N (j ) + N (j ) ⊗ N (i) + pC -1 (12) where W ,k = ∂W ∂I k , and p is the hydrostatic pressure. The Eulerian stress tensor, i.e., the Cauchy stress tensor, is directly obtained by the push-forward operation. To ensure that the stress is identically zero in the undeformed configuration, it is required that: ∀i W (i) ,4 + 2W (i) ,5 = 0, [START_REF] Baek | Theory of small on large: potential utility in computations of fluid-solid interactions in arteries[END_REF] for zero deformation [174]. The direct expressions that permit calculation of the stress with the other invariants basis can be found in [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF][START_REF] Criscione | Constitutive framework optimized for myocardium and other high-strain, laminar materials with one fiber family[END_REF]. Stability The strong ellipticity condition is a mathematical restriction on the constitutive functions. For three-dimensional problems [267], the strong ellipticity was characterised for compressible isotropic materials in [236], and for incompressible ones in [273]. In this context, the strong ellipticity was largely studied in the case of transverse isotropy for in plane strains in [165-168, 170, 240]. The generic condition to verify for the strain energy in the absence of body forces [150,167,168] can be written as: 1 J F pr F qs ∂ 2 W ∂F ir F js n p n q m i m j > 0 with m = 0 and n = 0, (14) where m and n are two non-zero vectors. Nevertheless, this condition is always difficult to verify. 
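For a given strain energy and a given deformation state, condition (14) can at least be spot-checked numerically by approximating ∂²W/∂F∂F with finite differences and sampling unit vectors m and n. The sketch below does this for a compressible neo-Hookean-type energy; the energy form, the moduli and the deformation state are illustrative assumptions, and the check gives no guarantee outside the sampled state and directions.

```python
import numpy as np

c1, d1 = 0.5, 10.0  # illustrative moduli (MPa)

def W(F):
    """Compressible neo-Hookean-type energy (illustrative choice)."""
    J = np.linalg.det(F)
    I1 = np.trace(F.T @ F)
    return c1 * (I1 - 3.0 - 2.0 * np.log(J)) + d1 * (J - 1.0)**2

def hessian(F, h=1e-5):
    """Central-difference approximation of d2W / dF_ir dF_js (shape 3x3x3x3)."""
    H = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for r in range(3):
            for j in range(3):
                for s in range(3):
                    Fpp, Fpm, Fmp, Fmm = F.copy(), F.copy(), F.copy(), F.copy()
                    Fpp[i, r] += h; Fpp[j, s] += h
                    Fpm[i, r] += h; Fpm[j, s] -= h
                    Fmp[i, r] -= h; Fmp[j, s] += h
                    Fmm[i, r] -= h; Fmm[j, s] -= h
                    H[i, r, j, s] = (W(Fpp) - W(Fpm) - W(Fmp) + W(Fmm)) / (4.0 * h * h)
    return H

rng = np.random.default_rng(0)
F = np.diag([1.4, 0.9, 0.85])          # illustrative deformation state
A, J = hessian(F), np.linalg.det(F)
ok = True
for _ in range(200):                    # sample unit vectors m and n
    m = rng.normal(size=3); m /= np.linalg.norm(m)
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    q = np.einsum("pr,qs,irjs,p,q,i,j->", F, F, A, n, n, m, m) / J   # Eq. (14)
    ok &= q > 0.0
print("strong ellipticity holds at this state (sampled directions only):", ok)
```

Such a brute-force check only covers the sampled states and directions, which is precisely why more systematic criteria are sought.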
Thus, some have proposed another way to tackle the strong ellipticity condition. It is known that polyconvexity implies ellipticity [173,228,232]. As a consequence, the polyconvexity in the sense of Ball [START_REF] Ball | Convexity conditions and existence theorems in non-linear elasticity[END_REF][START_REF] Ball | Constitutive equalities and existence theorems in elasticity[END_REF] is used, even if it is more restrictive than strong ellipticity. Of course, some strain energies can be elliptic but not polyconvex. It is important to note that polyconvexity does not conflict with the possible non-uniqueness of equilibrium solutions, as it guarantees only the existence of at least one minimizing deformation. Hence, polyconvexity provides an excellent starting point to formulate strain energy functions that guarantees both ellipticity and existence of a global minimizer. Polyconvexity has been studied within the framework of isotropy [START_REF] Bilgili | Restricting the hyperelastic models for elastomers based on some thermodynamical, mechanical and empirical criteria[END_REF]244], and the conditions to verify it are well known for every classical isotropic model from the literature (see for example [START_REF] Hartmann | Parameter estimation of hyperelasticity relations of generalized polynomial-type with constraint conditions[END_REF][START_REF] Hartmann | Polyconvexity of generalized polynomial-type hyperelastic strain energy functions for near-incompressibility[END_REF]180,204]). Many authors have extended their study to anisotropic materials [START_REF] Ehret | A polyconvex hyperelastic model for fiber-reinforced materials in application to soft tissues[END_REF]121,171,206,245,257]. Some have studied the polyconvexity of existing constitutive equations [START_REF] Doyle | Adaptation of a rabbit myocardium material model for use in a canine left ventricle simulation study[END_REF]104,106,186,267], whereas others have attempted to directly develop polyconvex constitutive equations. Some Conditions. In case of existing constitutive equations, Walton and Wilber [267] summarised conditions to ensure polyconvexity. For a strain energy depending on I 1 , I 2 and I 4 , W (I 1 , I 2 , I (i) 4 ), the conditions are: W ,k > 0 for k = 1, 2, 4 and (15) [W ,kl ] is definite positive. ( 16) If the strain energy also depends on I (i,j ) 8 , the following condition should be added: ∂W ∂I (i,j ) 8 ≤ ∂W ∂I 1 . ( 17 ) The use of the fifth invariant I (i) 5 introduces the need to change the other invariants, as I 5 is not a polyconvex function (when used alone). Walton and Wilber [267] used I * k : ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ I * 1 = 1 2 I 1 , I * 2 = 1 2 I 2 1 -I 2 , I (i) * 4 = I (i) 4 , I (i) * 5 = I (i) 5 . ( 18 ) Here, the condition to verify for I * k is: ⎧ ⎪ ⎨ ⎪ ⎩ W ,k > 0 for k = 1, 2, 4, 5, W ,1 + κW ,4 ≥ 0 for some κ > 4, [W ,kl ] is definite positive. ( 19 ) As it will be described in next paragraph, many strain energies can be decomposed as W = W iso (I 1 ) + W aniso (I 4 ). In this case, some sufficient conditions, but not necessary for polyconvexity, have been given in [106] for the anisotropic part: ∂W aniso ∂I 4 ≥ 0 and ( 20) ∂W aniso ∂I 4 + 2I 4 ∂ 2 W aniso ∂I 2 4 ≥ 0. (21) These two restrictive conditions mean that the considered directions cannot generate negative forces when submitted to compression whereas the strong ellipticity can also be verified in compression. This is an illustration of the constraints generated by the polyconvexity compared to strong ellipticity. 
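The sufficient conditions (20) and (21) are straightforward to verify numerically for a candidate anisotropic part W_aniso(I_4). The sketch below checks them, on a tension-only range of I_4, for two common forms, the standard reinforcing model c(I_4 − 1)² and an exponential term of Fung type; the parameter values are arbitrary illustrative choices.

```python
import numpy as np

I4 = np.linspace(1.0, 2.5, 301)   # fibre assumed active in tension only (I4 >= 1)

def check(dW, d2W, name):
    """Sufficient conditions (20) and (21): dW/dI4 >= 0 and dW/dI4 + 2*I4*d2W/dI4^2 >= 0."""
    c20 = np.all(dW(I4) >= 0.0)
    c21 = np.all(dW(I4) + 2.0 * I4 * d2W(I4) >= 0.0)
    print(f"{name:22s}  (20): {c20}   (21): {c21}")

c, k1, k2 = 1.0, 1.0, 0.8   # illustrative parameters

# Standard reinforcing model: W_aniso = c*(I4 - 1)^2
check(lambda x: 2.0 * c * (x - 1.0),
      lambda x: 2.0 * c * np.ones_like(x),
      "standard reinforcing")

# Exponential (Fung-type) term: W_aniso = k1/(2*k2)*(exp(k2*(I4-1)^2) - 1)
check(lambda x: k1 * (x - 1.0) * np.exp(k2 * (x - 1.0)**2),
      lambda x: k1 * (1.0 + 2.0 * k2 * (x - 1.0)**2) * np.exp(k2 * (x - 1.0)**2),
      "exponential (Fung)")
```

On this tension-only range both forms satisfy the two conditions; extending the grid below I_4 = 1 makes condition (20) fail, which is exactly the compression restriction mentioned above.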
Development of Specific Constitutive Equations. Some authors have created elementary strain energies that satisfy polyconvexity. First, Schroder and Neff [228] worked on equations depending on I 1 and I 4 , and they proved that some functions are polyconvex: W 1 = β 1 I 4 , W 2 = β 2 I 2 4 , W 3 = β 3 I 4 I 1/3 3 , and W 4 = β 4 I 2 4 I 1/3 3 , ( 22 ) where β i are material parameters. Nevertheless, as I 5 is not a polyconvex function, some have proposed [228,229] the construction of new combinations of invariants in the case of one reinforced direction that are polyconvex; these invariants are denoted as K i : ⎧ ⎪ ⎨ ⎪ ⎩ K 1 = I 5 -I 1 I 4 + I 2 tr(A) 1/2 , K 2 = I 1 -I 4 , K 3 = I 1 I 4 -I 5 . ( 23 ) These invariants permitted the development of a list of elementary polyconvex energies [START_REF] Ebbing | Approximation of anisotropic elasticity tensors at the reference state with polyconvex energies[END_REF]231]. The different strain energies are listed in Table 1. Since a combination of polyconvex energy densities is also polyconvex, it is possible to develop many constitutive equations that can be adapted to different soft tissues. Table 1 Elementary polyconvex functions [START_REF] Ebbing | Approximation of anisotropic elasticity tensors at the reference state with polyconvex energies[END_REF]231], where β i with i = 5...23 are material parameters Elementary polyconvex functions W 5 = β 5 K 1 W 6 = β 6 K 2 1 W 7 = β 7 K 3 1 W 8 = β 8 K 1 I 1/3 3 W 9 = β 9 K 2 1 I 2/3 3 W 10 = β 10 K 2 W 11 = β 11 K 2 2 W 12 = β 12 K 2 I 1/3 3 W 13 = β 13 K 2 2 I 2/3 3 W 14 = β 14 K 3 W 15 = β 15 K 2 3 W 16 = β 16 K 3 I 1/3 3 W 17 = β 17 K 2 3 I 2/3 3 W 18 = β 18 I 2 1 + I 4 I 1 W 19 = β 19 2I 2 2 + I 2 I 5 -I 1 I 2 I 4 W 20 = β 20 3I 2 1 -I 4 I 1 W 21 = β 21 2I 2 2 + I 1 I 2 I 4 -I 2 I 5 W 22 = β 22 (3I 1 -2I 4 ) W 23 = β 23 (I 2 -2I 5 + 2I 1 I 4 ) Isotropic Hyperelastic Constitutive Equations From a macroscopic point of view, soft tissues are an assembly of cells and fibres. According to the quantity and the orientation of the fibres, the behaviour of soft tissues can be supposed isotropic or not. According to the application, anisotropic behaviour can be neglected, and isotropic modelling can be efficient. In this way, many authors decide to use an isotropic approach to model soft tissues, as for example liver [149] kidney [113], bladder and rectum [START_REF] Boubaker | Finite element simulation of interactions between pelvic organs: predictive model of the prostate motion in the context of radiotherapy[END_REF], pelvic floor [193], breast [START_REF] Azar | A deformable finite element model of the breast for predicting mechanical deformations under external perturbations[END_REF]226], cartilage [144], meniscus [START_REF] Abraham | Hyperelastic properties of human meniscal attachments[END_REF], ligaments [START_REF] Garcia | A nonlinear biphasic viscohyperelastic model for articular cartilage[END_REF], eardrum [START_REF] Cheng | Viscoelastic properties of human tympanic membrane[END_REF], arteries [192], brain [127], lungs [234], uterus [START_REF] Harrison | Towards a novel tensile elastometer for soft tissue[END_REF] or skin [142]... Many models that are used to describe an isotropic approach come from rubber like materials studies. Some literature reviews have been proposed [START_REF] Boyce | Constitutive models of rubber elasticity: a review[END_REF]258]. 
Constitutive equations for rubber like materials were created to represent a strain hardening for deformations of about hundreds of percent whereas soft tissues often strain harden after some tens of percent. Thus, the functions for rubber like materials may not necessarily apply. Other, more suitable constitutive equations have been developed especially for soft tissues. The main models are listed in Table 2. The main feature for the constitutive equations is the presence of an important change of slope in the strain-stress curve for moderate deformations. This explains why most of the equations include an exponential form which allows the description of strong slope changes. Nevertheless, all constitutive equations stay equivalent to the neo-Hookean model [255,256] for small strains. Moreover, most of the constitutive equations are very similar for the I 1 part as it is the exponential form that dominates in the equations. While most of the constitutive equations are only expressed with the first invariant, the second invariant can be employed to capture the different states of loading [112]. There exists some limitations to use only the first invariant [110,270]. Nevertheless, the choice of using I 1 , or (I 1 , I 2 ) mainly depends on the available experimental data. When experiments are limited to one loading case, it can be difficult to correctly fit a constitutive equation expressed by means of the two invariants. Anisotropic Hyperelastic Constitutive Equations Different approaches have been used to describe the anisotropy of soft tissues. The first one is based on Green-Lagrange components and the second one is based on strain invariants. W = c 1 (I 1 -3) + c 2 (I 1 -3) 2 Knowles [131, 274] (*) W = c 1 2c 2 1 + c 2 c 3 (I 1 -3) c 3 -1 Exponential model Demiray [57] (**) W = c 1 c 2 exp c 2 2 (I 1 -3) -1 Demiray et al. [58] W = c 1 c 2 exp c 2 2 (I 1 -3) 2 -1 Holmes and Wow [103] W = c 0 exp c 1 (I 1 -3) + exp c 2 (I 2 -3) -c 0 Arnoux et al. [7, 8] W = c 1 exp c 2 (I 1 -3) - c 1 c 2 2 (I 2 -3) Singh et al. [237] W = c 1 2c 2 exp c 2 (I 1 -3) -1 + c 3 2 (I 2 -3) 2 Volokh and Vorp [266] W = c 1 -c 1 exp - c 2 c 1 (I 1 -3) - c 3 c 1 (I 1 -3) 2 Tang et al. [251] W = c 1 (I 1 -3) + c 2 (I 2 -3) + c 3 exp c 4 (I 1 -3) -1 Van Dam et al. [261] W = c 1 - 1-c 2 c 2 3 (c 3 x + 1) exp(-c 3 x) -1 + 1 2 c 2 x 2 with x = √ c 4 I 1 + (1 -c 4 )I 2 -3 Use of Green-Lagrange Tensor Components The first model using the components of the Green-Lagrange strain tensor were developed in [118]. It consists in proposing strain energy densities that are summarily decomposed into contributions of each component with different weights; a review of these models is proposed in [116]. The first generic form was proposed by Tong and Fung [252]: W = c 2 exp b 1 E 2 11 + b 2 E 2 22 + b 3 E 2 12 + E 2 21 + 2b 4 E 12 E 21 + b 5 E 3 11 + b 6 E 3 22 + b 7 E 2 11 E 22 + b 8 E 11 E 2 22 -1 , ( 24 ) where c and b i , i = 1...8 are material parameters. Three years later, Fung [START_REF] Fung | Pseudoelasticity of arteries and the choice of its mathematical expression[END_REF] developed a generic form in two dimensions, the model was next generalised to three dimensions [START_REF] Chuong | Three-dimensional stress distribution in arteries[END_REF]. Later, shear strains were introduced [128], and finally a global formulation was proposed [116]: W = c exp(A ij kl E ij E kl ) -1 , ( 25 ) where c and A ij kl are material parameters. 
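As an illustration of this component-based approach, the sketch below evaluates a reduced two-dimensional Fung-type energy W = c[exp(Q) − 1] with a simple quadratic form Q = b_1 E_11² + b_2 E_22² + 2 b_4 E_11 E_22, and recovers the in-plane second Piola-Kirchhoff stresses by differentiating with respect to the Green-Lagrange components; the truncation of Q, the parameter values and the equibiaxial loading are illustrative assumptions.

```python
import numpy as np

c, b1, b2, b4 = 10.0, 1.0, 1.5, 0.5   # c in kPa, b_i dimensionless; illustrative values

def W(E11, E22):
    """Fung-type exponential energy with a reduced quadratic Q (cf. Eq. (25))."""
    Q = b1 * E11**2 + b2 * E22**2 + 2.0 * b4 * E11 * E22
    return c * (np.exp(Q) - 1.0)

def second_pk(E11, E22, h=1e-6):
    """S_ij = dW/dE_ij by central differences (membrane, in-plane components only)."""
    S11 = (W(E11 + h, E22) - W(E11 - h, E22)) / (2.0 * h)
    S22 = (W(E11, E22 + h) - W(E11, E22 - h)) / (2.0 * h)
    return S11, S22

# Equibiaxial stretch lambda = 1.2  ->  E = (lambda^2 - 1)/2 on both axes
E = (1.2**2 - 1.0) / 2.0
S11, S22 = second_pk(E, E)
print(f"E11=E22={E:.3f}  ->  S11={S11:.2f} kPa, S22={S22:.2f} kPa")
```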
Different constitutive equations were then developed and written in cylindrical coordinates (r, θ , z) often used for arteries [138]. Moreover, the strain energy function can be naturally uncoupled into a dilatational and a distortional part [START_REF] Ateshian | A frame-invariant formulation of fung elasticity[END_REF], to facilitate the computational implementation of incompressibility. In the same way, as in non-Gaussian theory [137], it is possible to take into account the limiting extensibility of the fibres [175]. This exposes the possibility of a constitutive equation that presents an asymptote even if constitutive equations that include an exponential or an asymptotic form can be very close [START_REF] Chagnon | A comparison of the physical model of Arruda-Boyce with the empirical Hart-Smith model and the Gent model[END_REF]. The proposed models are listed in Table 3. The main difficulty of these constitutive equations is that they have a large number of material parameters. Q = A ij kl E ij E kl Fung et al. [77] Q = b 1 E 2 θθ + b 2 E 2 zz + 2b 4 E θθ E zz Chuong and Fung [49] Q = b 1 E 2 θθ + b 2 E 2 zz + b 3 E 2 rr + 2b 4 E θθ E zz + 2b 5 E rr E zz + 2b 6 E θθ E rr Humphrey [116] Q = b 1 E 2 θθ + b 2 E 2 zz + b 3 E 2 rr + 2b 4 E θθ E zz + 2b 5 E rr E zz + 2b 6 E θθ E rr + b 7 E 2 θz + b 8 E 2 rz + b 9 E 2 rθ Costa et al. [51] Q = b 1 E 2 ff + b 2 E 2 ss + b 3 E 2 nn + 2b 4 1 2 (E f n + E nf ) 2 + 2b 5 1 2 (E sn + E ns ) 2 + 2b 6 1 2 (E f s + E sf ) 2 Rajagopal et al. [213] Q = b 1 E 2 θθ + b 2 E 2 zz + b 3 E 2 rr + 2b 4 E θθ E zz + 2b 5 E rr E zz + 2b 6 E θθ E rr + b 7 E 2 rr + E 2 θθ + b 8 E 2 θθ + E 2 zz + b 9 E 2 rr + E 2 zz Other exponential functions Choi and Vito [START_REF] Choi | Two-dimensional stress-strain relationship for canine pericardium[END_REF] W = b 0 exp b 1 E 2 11 + exp b 2 E 2 22 + exp(2b 3 E 11 E 22 ) -3 Kasyanov and Rachev [128] W = b 1 exp b 2 E 2 zz + b 3 E zz E θθ + b 4 E 2 θθ + b 5 E 2 zz E θθ + b 6 E zz E 2 θθ -1 + b 7 E θθ exp(b 8 E θθ ) + b 9 E zz + b 10 E 2 θz Other models Vaishnav et al. [259] W = b 1 E 2 θθ + b 2 E θθ E zz + b 3 E 2 zz + b 4 E 3 θθ + b 5 E 2 θθ E zz + b 6 E θθ E Humphrey [117] W = b 1 E 2 rr + b 2 E 2 θθ + b 3 E 2 zz + 2b 4 E rr E θθ + 2b 5 E θθ E zz + 2b 6 E rr E zz + b 7 E 2 rθ + E 2 θr + b 8 E 2 zθ + E 2 θz + b 9 E 2 zr + E 2 rz Takamizawa and Hayashi [249] W = -c ln 1 -1 2 b 1 E 2 θθ + 1 2 b 2 E 2 zz + b 3 E θθ E zz + b 4 E θθ E zz + b 5 E θθ E rr + b 6 E rr E zz Use of Strain Invariants Strain energy densities depend on isotropic and anisotropic strain invariants. The use of I 4 and I 5 is necessary to recover linear theory [174]. Different cases exist. In a first case, the strain energy can be split as a sum into different parts as an isotropic and anisotropic contribution: W = W iso (I 1 , I 2 ) + i W aniso I (i) 4 , I (i) 5 , ( 26 ) or some coupling can be realised between the isotropic and anisotropic parts as W aniso (I 1 , I 2 , I (i) 4 , I (i) 5 ). But very few models present a non-additive decomposition between two directions i and j , i.e., between I (i) 4 , I (i) 5 , I (j ) 4 and I (j ) 5 . When W iso is used, it is often represented by a classical energy function. We discuss W aniso in the next paragraph. The use of only I 4 or I 5 , instead of the both of these invariants is questionable as it leads to the same shear modulus in the direction of and in the direction orthogonal to the reinforced direction [174]. 
Different model forms can be distinguished such as the polynomial, the power, the exponential and other constitutive equations not of these types. Polynomial Development The most known model for isotropic hyperelasticity is Rivlin's series [217] that describes a general form of constitutive equations depending on the first and second invariants. The generalisation of this model to an anisotropic formulation has been proposed in different ways. One consists in introducing the anisotropic invariants in the series. First a simple I 4 series [123] was proposed: W aniso = n k=2 c i (I 4 -1) k , ( 27 ) where c i are material parameters. A linear term cannot be used, i.e., k = 1 in the previous equation, as it does not ensure zero stress for zero deformation. The term k = 2 corresponds to the standard reinforcing model [START_REF] Destrade | Surface instability of sheared soft tissues[END_REF]182,220,257], not initially proposed for soft tissues. The complete generalisation of the Rivlin series was proposed in [222]: W = klmn c klmn (I 1 -3) k (I 2 -3) l (I 4 -1) m (I 5 -1) n , ( 28 ) where c klmn are material parameters. A modified formulation was proposed in [111] to be more convenient for numerical use: W = klmn c klmn (I 1 -3) k (I 2 -3) -3(I 1 -3) l (I 4 -1) m (I 5 -2I 4 + 1) n . ( 29 ) Instead of using I 4 , one may use √ I 4 which represents the elongation in the considered direction. This leads to a new series development [115]: W aniso = c 2 (I 4 -1) 2 + c 4 (I 4 -1) 4 Basciano and Kleinstreuer [START_REF] Basciano | Invariant-based anisotropic constitutive models of the healthy and aneurysmal abdominal aortic wall[END_REF] W aniso = c 2 (I 4 -1) 2 + c 3 (I 4 -1) 3 + c 4 (I 4 -1) 4 + c 5 (I 4 -1) 5 + c 6 (I 4 -1) 6 Basciano and Kleinstreuer [START_REF] Basciano | Invariant-based anisotropic constitutive models of the healthy and aneurysmal abdominal aortic wall[END_REF] W aniso = c 6 (I 4 -1) 6 W = kl c kl (I 1 -3) k ( I 4 -1) l , ( 30 ) Lin and Yin [148] W = c 1 (I 1 -3)(I 4 -1) + c 2 (I 1 -3) 2 + c 3 (I 4 -1) 2 + c 4 (I 1 -3) + c 5 (I 4 -1) √ I 4 forms Alastrue et al. [4, 6] W aniso = c 2 ( √ I 4 -1) 2 Humphrey [115] W = c 1 ( √ I 4 -1) 2 + c 2 ( √ I 4 -1) 3 + c 3 (I 1 -3) + c 4 (I 1 -3)( √ I 4 -1) + c 5 (I 1 -3) 2 I 4 , I 5 forms Park and Youn [194] W aniso = c 3 (I 4 -1) + c 5 (I 5 -1) Bonet and Burton [START_REF] Bonet | A simple orthotropic, transversely isotropic hyperelastic constitutive equation for large strain computations[END_REF] W Ogden [111,169,170] W aniso = c 2 (I 5 -1) 2 = c 1 + c 2 (I 1 -3) + c 3 (I 4 -1) (I 4 -1) - c 1 2 (I 5 -1) Bonet and Burton [31] W aniso = c 1 + c 3 (I 4 -1) (I 4 -1) - c 1 2 (I 5 -1) Merodio and Hollingsworth and Wagner [102] W aniso = c 2 (I 5 - I 2 4 ) Murphy [174] W = c 1 (I 1 -3) + c 2 (2I 4 -I 5 -1) + c 3 (I 5 -1) 2 Murphy [174] W = c 1 (I 1 -3) + c 2 (2I 4 -I 5 -1) + c 3 (I 4 -1)(I 5 -1) Murphy [174] W = c 1 (I 1 -3) + c 2 (2I 4 -I 5 -1) + c 3 (I 4 -1) 2 where c kl are material parameters. It is worth noting that the use of √ I 4 includes, in the quadratic formulation [START_REF] Alastrué | On the use of the bingham statistical distribution in microsphere-based constitutive models for arterial tissue[END_REF][START_REF] Brown | A simple transversely isotropic hyperelastic constitutive model suitable for finite element analysis of fiber reinforced elastomers[END_REF], a model that represents the behaviour of a linear spring. 
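This last remark can be checked directly: for a stretch λ applied along the fibre, I_4 = λ², so that c(√I_4 − 1)² = c(λ − 1)² behaves as a linear spring in the fibre direction, whereas c(I_4 − 1)² stiffens with the stretch. A short illustrative comparison of the corresponding fibre contributions to the nominal stress is sketched below (the modulus c is arbitrary).

```python
import numpy as np

c = 1.0                        # illustrative fibre modulus
lam = np.linspace(1.0, 1.6, 7)
I4 = lam**2                    # stretch lambda applied along the fibre direction

# Fibre contribution to the nominal stress P = dW/dlambda for the two quadratic forms
P_I4 = 4.0 * c * lam * (I4 - 1.0)        # W = c*(I4 - 1)^2
P_sqrt = 2.0 * c * (lam - 1.0)           # W = c*(sqrt(I4) - 1)^2 = c*(lam - 1)^2  (linear spring)

for l, p1, p2 in zip(lam, P_I4, P_sqrt):
    print(f"lambda={l:.2f}   (I4-1)^2 term: {p1:6.3f}   (sqrt(I4)-1)^2 term: {p2:6.3f}")
```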
As other invariants were proposed, a series development based on β k invariants also has been considered [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF]: W = klm G klm β k 3 β l 4 β m 5 , ( 31 ) where G klm are material parameters. As for rubber like materials with the Rivlin's series, the whole series is not used and a good truncation of the strain energy is essential. According to the considered material and to the loading states, different developments have been given in the literature. A list of resulting equations is included in Table 4. It is also important to note that the I 4 invariant is often used whereas the I 5 invariant is most often disregarded. Power Development Ogden 's [184] isotropic constitutive equation has proved its efficiency to describe complex behaviour. It is based on elongations and a power law development. For a material with a Schroder et al. [START_REF] Ciarletta | Stiffening by fiber reinforcement in soft materials: a hyperelastic theory at large strains and its application[END_REF]228,230] W aniso = k 1 I k 2 4 Balzani et al. [START_REF] Balzani | A polyconvex framework for soft biological tissues. Adjustement to experimental data[END_REF][START_REF] Balzani | Simulation of discontinuous damage incorporating residual stresses in circumferentially overstretched atherosclerotic arteries[END_REF] W = k 1 (I 1 I 4 -I 5 -2) k 2 Schroder et al. [230] W = k 1 (I 5 -I 1 I 4 + I 2 ) + k 2 I k 3 4 + k 4 (I 1 I 4 -I 5 ) + k 5 I k 6 4 O'Connell et al. [183] W = k 6 I 4 (I 5 -I 1 I 4 + I 2 ) -1 2 single fibre direction, there is the following generic form [186]: W aniso = 2μ 1 β 2 I β/2 4 + 2I -β/4 4 -3 , ( 32 ) where μ 1 and β are material parameters. A generalised form was proposed by not imposing the same parameters for the two terms [264] W aniso = r α r I βr 4 -1 + γ r I -δr 4 -1 , (33) where α r , β r , γ r , δ r are material parameters. The same type of formulation is also proposed using the other invariants. Two general equations are of the form [122]: W = klmn c klmn (I 1 -3) a k (I 2 -3) b l (I 4 -1 ) cm (I 5 -1) dn and ( 34) W = klmn c klmn I a k 1 -3 a k I b l 2 -3 b l I cm 4 -1 I dn 5 -1 , ( 35 ) where c klmn , a k , b l , c m and d n are material parameters. In the same way, other power law constitutive equations were proposed and are listed in Table 5. Additional forms can be found in the polyconvex strain energies listed in Table 1. These models represent different forms that link different invariants. Exponential Development A key property of the constitutive equation for soft tissues is the inclusion of an important strain hardening. This is easily obtained by means of an exponential function of the I 4 invariant. This approach is largely used in the literature, the first models were proposed in the 1990s. In the beginning, two fibre directions were introduced to represent the mechanical behaviour of arteries [104]. This was extended to four directions [START_REF] Baek | Theory of small on large: potential utility in computations of fluid-solid interactions in arteries[END_REF]159] and to n directions [START_REF] Gasser | A rate-independent elastoplastic constitutive model for biological fiberreinforcedcomposites at finite strains: continuum basis, algorithmic formulationand finite element implementation[END_REF] and used for example with 8 directions for cerebral aneurysms [276]. 
These models may be used to model the behaviour of a complex tissue such as in different areas of a soft tissue (as for example the different layers of an artery) [START_REF] Balzani | On the mechanical modeling of anisotropic biological soft tissue and iterative parallel solution strategies[END_REF]. Various formulations are listed in Table 6. In order to take into account the ratio of isotropic to anisotropic parts of a heterogeneous material, a weighting factor has been introduced based on the contributions of I 1 and I 4 [107]. This represents a measure of dispersion in the fibre orientation. This model leads to W aniso = c 1 2c 2 exp c 2 (I 4 -1) 2 -1 Weiss et al. [268] W aniso = c 1 exp(I 4 -1) 2 -(I 4 -1) 2 -1 Peña et al. [196] W aniso = c 1 c 2 exp c 2 (I 4 -1) -c 2 (I 4 -1) -1 I 1 , I 4 forms Holzapfel et al. [105] W aniso = c 1 (I 4 -1) exp c 2 (I 4 -1) 2 Gasser et al. [83] W = c 1 2c 2 [exp c 2 κI 1 + (1 -3κ)I 4 -1 2 -1] Holzapfel et al. [107] W = c 1 2c 2 exp c 2 (1 -κ)(I 1 -3) 2 + κ(I 4 -1) 2 -1 May-Newman and Yin [161, 162] W = c 0 exp c 1 (I 1 -3) 2 + c 2 ( √ I 4 -1) 4 -1 Rubin and Bodner [221] W = c 1 2c 2 exp c 2 c 5 (I 1 -3) + c 3 c 4 ( √ I 4 -1) 2c 4 -1 Lin and Yin [148] W = c 1 exp c 2 (I 1 -3) 2 + c 3 (I 1 -3)(I 4 -1) + c 4 (I 4 -1) 2 - W aniso = C 1 2C 2 exp C 2 (I 4 -1) 2 -1 + C 3 2C 4 exp C 4 (I 5 -1) 2 -1 the creation of different constitutive equations which are also listed in Table 6. Recently, a general form of an energy function was devised [197] in order to summarise a large number of constitutive equations: W = γ aη exp η(I 1 -3) a -f 1 (I 1 , a) + c i bd i exp d i I (i) 4 -I 0 4 b -g I (i) 4 , I 0 4 , b . ( 36 ) The choice of the functions f 1 and g allows for the wide generalization of many different models. Also, γ , η, a, b, c i , d i and I 0 4 are material parameters, and I 0 4 represents the threshold to reach for the fibre to become active. Some authors [START_REF] Einstein | Inverse parameter fitting of biological tissues: a response surface approach[END_REF][START_REF] Freed | Invariant formulation for dispersed transverse isotropy in aortic heart valves[END_REF] have proposed in a way similar as to what is done in the case of isotropy [START_REF] Hart-Smith | Elasticity parameters for finite deformations of rubber-like materials[END_REF] a constitutive equation for the stress, the energy being obtained by integration: Horgan and Saccomandi [111] W ,4 = - σ (λ) = A exp B λ 2 -1 2 -1 , ( 37 ) W = 2 3 I 4 c 2 1 + 2 c 1 √ I 4 - 3 Horgan and Saccomandi [111] W = -c 1 c 2 log 1 - (I 5 -1) 2 c 2 Lurding et al. [153] W = c 1 (I 1 -3) + c 2 (I 1 -3)(I 4 -1) + c 3 (I 2 4 -I 5 ) + c 4 ( √ I 4 -1) 2 + c 5 ln(I 4 ) Chui et al. [153] W = c 1 ln(1 -T ) + c 5 (I 1 -3) 2 + c 6 (I 4 -1) 2 + c 7 (I 1 -3)(I 4 -1) with T = c 2 (I 1 -3) 2 + c 3 (I 4 -1) 2 + c 4 (I 1 -3)(I 4 -1) Other invariants Lu and Zhang [151] W = c 2 exp c 1 ( √ I 4 -1) 2 + 1 2 c 3 (β 1 -1) + 1 2 c 4 (β 2 -2) where A and B are material parameters. Even if this approach was initially developed and used for arteries [START_REF] Chagnon | An osmotically inflatable seal to treat endoleaks of type 1[END_REF]205,260], it is also often used for different living tissues, as for example human cornea [190], erythrocytes [130], the mitral valve [203], trachea [155,254], cornea [139,179], collagen [125], abdominal muscle [101]. 
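A minimal sketch of this exponential approach with two fibre families is given below; the isotropic part is taken as neo-Hookean and the anisotropic term is of the Holzapfel type listed in Table 6, while the fibre half-angle, the parameter values and the equibiaxial loading are illustrative assumptions rather than values identified on a particular tissue.

```python
import numpy as np

c10, k1, k2, alpha = 3.0, 2.0, 10.0, np.radians(40.0)   # kPa, kPa, -, fibre half-angle
N1 = np.array([np.cos(alpha),  np.sin(alpha), 0.0])      # two symmetric fibre families
N2 = np.array([np.cos(alpha), -np.sin(alpha), 0.0])

def W(lam):
    """Equibiaxial in-plane stretch lam, incompressible through the thickness."""
    F = np.diag([lam, lam, 1.0 / lam**2])
    C = F.T @ F
    I1 = np.trace(C)
    Wiso = c10 * (I1 - 3.0)                                   # neo-Hookean matrix
    Wani = 0.0
    for N in (N1, N2):                                        # exponential fibre terms
        I4 = N @ C @ N
        Wani += k1 / (2.0 * k2) * (np.exp(k2 * (I4 - 1.0)**2) - 1.0)
    return Wiso + Wani

for lam in (1.05, 1.10, 1.15, 1.20):
    print(f"lambda={lam:.2f}   W={W(lam):8.3f} kPa")
```

The rapid growth of the energy with the stretch illustrates the strong strain hardening that motivates the exponential form.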
Some Other Models Other ideas have been developed for rubber like materials, as for example the Gent [START_REF] Gent | A new constitutive relation for rubber[END_REF] model which presents a large strain hardening with only two parameters. Its specific form gives it a particular interest for some tissues. This model was generalised to anisotropy in two ways [111]. Other different forms can be proposed with a logarithmic or a tangent function. A list of constitutive equations is given in Table 7. There are two ideas in these models. One is to describe the behaviour at moderate deformation. Thus, functions that provide for a weak slope are used; these models are principally used before the activation of muscles, i.e., when the material is very soft. When the material becomes stiffer, a function that models a large strain hardening is necessary. In this way, different functions were introduced to capture very important changes of slopes. Coupling Influence Different coupling can be taken into account in the constitutive equation, for example, the shear between the fibres and the matrix, and the interaction between the fibres. Fibre Shear. In this case, the soft tissue is considered as a composite material, the strain energy is decomposed into three additive parts W = W m + W f + W f m [199], where the three terms are the strain energy of the matrix, of the fibres and of the interactions between fibres and matrix, respectively. Moreover, the deformation gradient of the fibres F can be decomposed into a uniaxial extension tensor F f and a shear deformation F s , as F = F s F f [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF]. The decomposition of the strain energy function into different parts allows, for different loading states, the consideration of constitutive equations which are specific for the strain endured by the fibre, the matrix and the interface. This leads to the construction of different function forms [START_REF] Guo | Large deformation response of a hyperelastic fibre reinforced composite: theoretical model and numerical validation[END_REF]: ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ W m = 1 2 c 1 f (I 4 )(I 1 -3), W f = c 1 g 1 (I 4 ) I 5 -I 2 4 I 4 , W f m = c 1 g 2 (I 4 ) I 1 - I 5 + 2 √ I 4 I 4 . ( 38 ) Another basic form also has been proposed [199]: W f m = g 2 (I 4 ) I 4 I 3 (I 5 -I 1 I 4 + I 2 ) -1 2 , ( 39 ) where f , g 1 and g 2 are functions to define and c 1 is a material parameter. The first function corresponds a generalisation of the neo-Hookean model [START_REF] Guo | Mechanical response of neo-Hookean fiber reinforced incompressible nonlinearly elastic solids[END_REF]. Few functions for f , g 1 and g 2 have so far been proposed, the first being based on exponential functions [START_REF] Caner | Hyperelastic anisotropic microplane constitutive model for annulus fibrosus[END_REF][START_REF] Guo | A composites-based hyperelastic constitutive model for soft tissue with application to the human annulus fibrosus[END_REF][START_REF] Guo | Large deformation response of a hyperelastic fibre reinforced composite: theoretical model and numerical validation[END_REF]. Interaction Between Fibres. Few models are proposed to take into account the influence of the coupling between different fibre directions. Different techniques can be used. 
In order to take into account different directions and to not limit the problem to one direction fibre, it is also possible to couple invariants from different directions [228], the following invariant expression has been proposed: (1) 4 α 2 I (1)2 4 + 2λ(1 -α)I I (2) 4 + (1 -α) 2 I (2)2 4 with α ∈ [0, 1]. ( 40 ) α represents a material parameter. This expression takes into account the deformation in two directions with only one invariant. Nevertheless, this has not yet been used in constitutive equations. Instead of employing an additive decomposition of the strain energy to account for the different directions, a function that represents a coupling between the invariants of different directions [102] can be used [242]: W = c 1 c 2 exp c 2 I (1) 4 + I (2) 4 -2 -c 2 I (1) 4 + I (2) A generalised weighted expression of the constitutive equation also has been developed [START_REF] Ehret | A polyconvex hyperelastic model for fiber-reinforced materials in application to soft tissues[END_REF]121]: W = 1 4 r μ r 1 α r exp α r i γ i I (i) 4 -1 -1 + 1 β r exp β r i γ i J (i) 5 -1 -1 . ( 42 ) Even if the model was developed for pneumatic membranes, such representations that have proposed multiplicative terms between the I 4 invariants of each direction instead of an additive decomposition can be used for soft tissues [215]: (1,2) c I (1) 4 - W = c (1) 1 I (1) 4 -1 β 1 + c (1) 2 I (1) 5 -1 β 2 + c (2) 1 I (2) 4 -1 γ 1 + c (2) 2 I (2) 5 -1 γ 2 + c (1) c (I 1 -3) δ 1 I (1) 4 -1 δ 1 + c (2) c (I 1 -3) δ 2 I (2) 4 -1 δ 2 + c 1 η I (2) 4 -1 η . ( 43 ) This strain energy introduces coupling between the different directions, but the additive decomposition of the constitutive equation allows one to fit separately the different parameters c (j ) i , δ i and β i . Use of I 8 and I 9 . As proposed in the first part of this paper, coupling terms including I (i,j ) 8 and I (i,j ) 9 can be used. Thus such terms have been added to the strain energy in order to model esophageal tissues [177]: W = c 1 c 3 exp c 3 (I 1 -3) + c 2 c 5 exp c 5 (I 2 -3) + c 4 c 2 7 exp c 7 I (1) 4 -1 -c 7 I (1) 4 -1 -1 + c 6 c 2 8 exp c 8 I (2) 4 -1 -c 8 I (2) 4 -1 -1 + c 9 I (1,2) 8 -I (1,2) 9 2 , ( 44 ) where c i with i = 1...9 are material parameters. For annulus fibrous tissues, the influence of the interaction between the layers has been modelled [178] with an energy term taking into account I (1) 4 , I (2) 4 and I (1,2) 8 : W = c 1 2c 2 exp c 2 I (1,2) 8 (I (1) 4 I (2) 4 I (1,2) 9 ) 1/2 -I (1,2) 9 2 -1 . ( 45 ) A similar form of exponential model (cf. Table 6) has been proposed to include the effect of I 8 [START_REF] Göktepe | Computational modeling of passive myocardium[END_REF]: W = c 1 c 2 exp c 2 (I (1,2) 8 ) 2 I (1,2) 9 -1 . ( 46 ) These models are not often employed, but there exist some for composite materials that can be used [200,214]. In comparison with other models, these approaches take into account the shear strain in the material whereas the first models couple the deformations of the different fibres. Statistical Approaches In this part, some statistical approaches that tend to encompass the physics of soft tissues physics are detailed. They come from the study of the collagen network and use a change of scale method [START_REF] Chen | Nonlinear micromechanics of soft tissues[END_REF]181]. A collagen molecule is defined by its length, its stiffness and its helical structure. 
Some studies are motivated by approaches developed for rubber like material [START_REF] Beatty | An average-stretch full-network model for rubber elasticity[END_REF]73,129]. Unlike polymer chains in rubber which are uncorrelated in nature, collagen chains in biological tissues are classified as correlated chains from a statistical point of view. Rubber chains resemble a random walk whereas biological chains often present privileged oriented directions. It this way, different theories are considered to represent the chains, as for example wormlike chains with a slight varying curvature [132], or sinusoidal, zig-zag or circular helix representations [START_REF] Freed | Invariant formulation for dispersed transverse isotropy in aortic heart valves[END_REF]126,140]. Nevertheless, to develop models which rest on statistical approaches, some hypotheses are needed. A distribution function f of the orientation of the fibres is used to represent the material. The unit vector a 0 oriented in the direction of a certain amount of fibres having a spatial orientation distribution f is defined in terms of two spherical angles, denoted as φ and ψ: a 0 = sin φ cos ψe 1 + sin φ sin ψe 2 + sin φe 3 , ( 47 ) with φ ∈ [0, π] and ψ ∈ [0, 2π] and e i is the usual rectangular Cartesian basis. The distribution function is required to satisfy some elementary properties [191]. By symmetry requirements f (a 0 ) = f (-a 0 ). The quantity f (a 0 ) sin φdφdψ represents the number of fibres with an orientation in the range [(φ, φ + dφ), (ψ, ψ + dψ)]. By considering the unit sphere S around a material point, the following property is deduced: 1 4π S f (a 0 )dS = 1 4π π 0 2π 0 f (a 0 ) sin φdφdψ = 1. (48) A constant distribution leads to isotropy [START_REF] Ateshian | Anisotropy of fibrous tissues in relation to the distribution of tensed and buckled fibers[END_REF]. The strain energy of the soft tissue can then be deduced by integration of the elementary fibre energy in each direction w(I 4 (a 0 )) by: W = 1 4π S f (a 0 )w I 4 (a 0 ) dS. ( 49 ) Finally, the stress is determined by derivation: S = 1 2π S f (a 0 ) ∂w(I 4 (a 0 )) ∂C dS. ( 50 ) The evaluation of the stress depends on different parameters: the distribution function and the energy of a single fibre. Different considerations have been proposed in the literature. 
For the distribution function, the principal propositions are: beta distribution [START_REF] Abramowitz | Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables[END_REF][START_REF] Cacho | A constitutive model for fibrous tissue sconsidering collagen fiber crimp[END_REF]218,225], log-logistic distribution [277], Gaussian distribution [START_REF] Billar | Biaxial mechanical properties of the native and glutaraldehyde-treated aortic valve cusp: Part II-a structural constitutive model[END_REF][START_REF] Chen | The structure and mechanical properties of the mitral valve leaflet-strut chordae transition zone[END_REF][START_REF] Driessen | A structural constitutive model for collagenous cardiovascular tissues incorporating the angular fiber distribution[END_REF]141,223,272], von Mises distribution [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Gasser | Hyperelastic modelling of arterial layers with distributed collagen fibre orientations[END_REF][START_REF] Girard | Peripapillary and posterior scleral mechanics-Part I: development of an anisotropic hyperelastic constitutive model[END_REF]191,211,263] or the Bingham distribution [START_REF] Alastrué | On the use of the bingham statistical distribution in microsphere-based constitutive models for arterial tissue[END_REF]. The forms of the distribution are listed in Table 8. The choice of the functions is also a key point. Different functions can be chosen to describe the mechanical behaviour of a collagen fibre; the simple linear behaviour [START_REF] Ateshian | Anisotropy of fibrous tissues in relation to the distribution of tensed and buckled fibers[END_REF], or the phenomenological laws of the exponential Fung type [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Billar | Biaxial mechanical properties of the native and glutaraldehyde-treated aortic valve cusp: Part II-a structural constitutive model[END_REF]125,211,225,263] or a logarithmic function [277] or a polynomial function [START_REF] Flynn | An anisotropic discrete fibre model based on a generalised strain invariant with application to soft biological tissues[END_REF]248], other functions [119,207] or worm-like chain forms [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Alastrué | On the use of the bingham statistical distribution in microsphere-based constitutive models for arterial tissue[END_REF][START_REF] Bischoff | A microstructurally based orthotropic hyperelastic constitutive law[END_REF][START_REF] Bischoff | Orthotropic hyperelasticity in terms of an arbitrary molecular chain model[END_REF][START_REF] Bustamante | Ten years of tension: single-molecule DNA mechanics[END_REF][START_REF] Garikipati | A continuum treatment of growth in biological tissue: the coupling of mass transport and mechanics[END_REF]135,136, 216,218] which are a particularisation of the eight-chain model [START_REF] Arruda | A three dimensional constitutive model for the large stretch behavior of rubber elastic materials[END_REF] to the transversely isotropic case. 
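A minimal numerical sketch of this framework is given below: a von Mises-type orientation density (Table 8) centred on e_3 is combined with an exponential fibre energy of the Fung type (Table 9), and Eq. (49) is approximated by a simple quadrature over the unit sphere. The concentration and stiffness parameters, the choice of e_3 as the mean fibre direction, the tension-only switch and the crude rectangle-rule quadrature are all illustrative assumptions.

```python
import numpy as np

b = 2.0                      # von Mises concentration parameter (illustrative)
k1, k2 = 1.0, 5.0            # illustrative fibre stiffness parameters

def fibre_energy(I4):
    """Exponential (Fung-type) fibre energy, assumed active in tension only."""
    return k1 / k2 * (np.exp(k2 * (I4 - 1.0)**2) - 1.0) if I4 > 1.0 else 0.0

# Illustrative macroscopic deformation: incompressible uniaxial stretch along e3
lam = 1.3
C = np.diag([1.0 / lam, 1.0 / lam, lam**2])

phi = np.linspace(0.0, np.pi, 181)        # polar angle from the mean fibre axis e3
psi = np.linspace(0.0, 2.0 * np.pi, 361)  # azimuth
dphi, dpsi = phi[1] - phi[0], psi[1] - psi[0]

num = 0.0   # integral of f * w over the sphere
den = 0.0   # integral of f alone, used to normalise f so that Eq. (48) holds
for p in phi:
    rho = np.exp(b * (np.cos(2.0 * p) + 1.0))     # von Mises-type density (Table 8)
    for q in psi:
        a0 = np.array([np.sin(p) * np.cos(q), np.sin(p) * np.sin(q), np.cos(p)])
        I4 = a0 @ C @ a0
        dS = np.sin(p) * dphi * dpsi
        num += rho * fibre_energy(I4) * dS
        den += rho * dS

W = num / den    # discrete counterpart of Eq. (49)
print(f"orientation-averaged fibre energy W = {W:.4f}")
```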
For some models, a parameter should be introduced in the fibre concentration factor to control collagen fibre alignment along a preferred orientation [START_REF] Girard | Peripapillary and posterior scleral mechanics-Part I: development of an anisotropic hyperelastic constitutive model[END_REF]. The different constitutive equations are listed in Table 9. The reader can refer to [START_REF] Bischoff | Continuous versus discrete (invariant) representations of fibruous structure for modeling non-linear anisotropic soft tissue behavior[END_REF] to determine which strain energy is used for each tissue. distribution β(η, γ ) = Γ (η)Γ (γ ) Γ (η+γ ) with Γ (x) = ∞ 0 t x-1 exp(-t)dt Log-logistic distribution f (ε) = k b (ε-ε 0 /b) k-1 [1+(ε-ε 0 /b) k ] 2 with ε = √ I 4 -1 Gaussian distribution f (φ) = 1 σ √ 2π exp --(φ-M) 2 2σ 2 Normalized von Mises distribution f (φ) = 4 I b 2π exp b(cos 2φ + 1) with I = 2 √ π √ 2b 0 exp -t 2 dt Bingham distribution f (r, A) = K(A) -1 exp r T • Ar A is a symmetric matrix, r a vector and K(A) a normalized constant w = k 1 k 2 exp k 2 (I 4 -1) 2 -1 Logarithmic function w = c ε -log(ε + 1) for ε > 0 with ε = √ I 4 -1 Polynomial function w = 1 2 K γ + M m=2 γ m m γ γ m m Worm-like chain w = nkθ L 4A 2 r 2 i L 2 + 1 1-r i /L - r i L - ln(I 2 4 r 2 0 ) 4r 0 L 4 r 0 L + 1 [1-r 0 /L] 2 -1 -W r with r i = √ I 4 r 0 and W r = 2 r 2 0 L 2 + 1 1-r 0 /L - r 0 L 164, The main difficulty of the different constitutive equations is that they need a numerical integration that is always time consuming [START_REF] Bischoff | Continuous versus discrete (invariant) representations of fibruous structure for modeling non-linear anisotropic soft tissue behavior[END_REF]211]. The integration of the fibre contribution is mainly realised over a referential unit sphere [134,172]. Some prefer to use a finite number of directions, the constitutive equation is thus modified as follows: 1 4π S (•)dS = m i=1 w i (•) i . ( 51 ) Different choices exist, as the 42 directions of Bazant and Oh [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF] and by Menzel [164], or the 184 directions of Alastrue et al. [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF], for example. The only different approach to those mentioned above is that proposed by [START_REF] Flynn | An anisotropic discrete fibre model based on a generalised strain invariant with application to soft biological tissues[END_REF] who used only six initial directions without employing an integration. Even if statistical approaches have more complex equations than phenomenological ones, some of these models have been implemented in finite element codes [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Garikipati | A continuum treatment of growth in biological tissue: the coupling of mass transport and mechanics[END_REF]135,136,164,275]. Discussion The main difficulty is not to find the better constitutive equation but to have suitable experimental data. In fact, the difficulty is often that there is a large dispersion in experimental data due to the dispersion between the different specimens. Moreover, it is often difficult to impose different loading conditions on similar specimens. Thus the errors are often large and the number of loading conditions is often limited. 
As a consequence, one can wonder if the key point is to obtain the best fit for a very specific experimental database, or if the most important point is to represent globally the mechanical behaviour keeping in mind the physics of soft tissues. As it was shown in the previous paragraphs, the number of constitutive equations that can be used to describe soft tissues non-linear elasticity is very impressive. Moreover, there exist other approaches, not presented in this paper, which involve a new class of elastic solids with implicit elasticity [212] that can also describe the strain limiting characteristics of soft tissues [START_REF] Freed | An implicit elastic theory for lung parenchyma[END_REF]. These theories are also elastic as they do not dissipate energies even though they are written in terms of strain rate and stress rate. But in this paper, we only focus on hyperelastic energy functions. These functions are expressed in terms of strain tensor components or strain invariants. The main difference between the two approaches discussed here is that the invariants formulation permits one to split the energy function into additive isotropic and anisotropic parts, even if, some constitutive equations written in invariants also link these two parts. The first constitutive equations introduced for soft tissues were isotropic. Although, for some applications, an isotropic constitutive equation is used to describe the mechanical behaviour for different soft tissues, the use of such simplified models is, in many cases, misleading and inappropriate as most soft tissues have a fibre structure that must be taken into account. To represent this structure, many constitutive equations are based on privileged directions that correspond to physical fibre orientations. In the modelling, characteristic directions are defined and they are represented by an angle that defines the orientation of the fibre compared to a specific direction. This angle can be considered as a parameter that is used to fit as well as possible the experimental data. Thus, the model is not used to mimic the physical soft tissue but it is used as a phenomenological equation to describe properly experimental data. This is not, in our opinion, a good choice, and it may mean that the energy function is not well chosen. The angle between the fibres should not be an adjustable parameter but must be imposed by the soft tissue structure. An important issue in modelling concerns the stretching resistance of fibres. Many authors consider that the fibre must reach a threshold before opposing a stress. In this way, a threshold parameter can be introduced in all the suitable constitutive equations presented in this review. For the phenomenological model, it consist in replacing (I 4 -1) by (I 4 -I 0 4 ), or ( √ I 4 -1) by ( √ I 4 -I 0 4 ) in the constitutive equations. I 0 4 corresponds to the needed deformation to generate stress, see for example [START_REF] Calvo | On modelling damage process in vaginal tissue[END_REF]107,197,219]. The advantage of such approaches is that there is a material parameter that controls the beginning of material stiffening. Nevertheless, a main difficulty is that it strongly depends on the zero state of the experimental data. Moreover, this zero state is often different between post-mortem and in-vivo specimens, and can depend on the experimenter. 
Anisotropic strain energy functions are difficult to fit, as it is difficult to separate the contribution between the matrix and the fibres, and to distinguish the different parts of the strain energy. Nevertheless, some strategies based on dissociating isotropic and anisotropic parts can be used [START_REF] Harb | A new parameter identification method of soft biological tissue combining genetic algorithm with analytical optimization[END_REF]. To avoid such representations, physical approaches attempt to represent the repartition of fibres in space, but two difficulties must be considered; the knowledge of the distribution function of the fibres in space and the mechanical properties of a single fibre. The choice of the best strain energy function is always a difficult point in the modelling process. A summary of the constitutive equations is presented in Fig. 1. In practise, the invariants I 2 and I 5 are often neglected. Their contribution is always difficult to determine [115] but it can be useful [START_REF] Feng | Measurements of mechanical anisotropy in brain tissue and implications for transversely isotropic material models of white matter[END_REF]. Moreover, these invariants are not independent from I 1 and I 4 in uniaxial loading tests. In this case, it is important to have also biaxial loadings to fit constitutive equations [224]. Moreover, in vivo experimental data [233] would be a benefit to obtain a good experimental fit, but there is little such data in the literature as compared to post-mortem experimental data. The constitutive equation choice will depend on the particular soft tissues under study and the conclusions will strongly depend on the experimental data that is chosen. Nevertheless, some comparisons between anisotropic strain energies have been realised in particular cases, see, for example, [START_REF] Carboni | Passive mechanical properties of porcine left circumflex artery and its mathematical description[END_REF][START_REF] Galle | A transversely isotropic constitutive model of excised guinea pig spinal cord white matter[END_REF]104,116,117,265]. In practice, a strategic point is the choice of a constitutive equation that is implemented in a finite element code to describe loading conditions that are very far from uniaxial or biaxial loadings. In this case, it is important to choose a constitutive equation that can be fitted with few experimental data that do not simulate non-physical response for any loading. Generally, it is better to limit the number of invariants and material parameters. Moreover, the simplest functions are often the best as they stand the least probability of creating nonphysical responses even if their fitting is not the best. Conclusion This paper has listed many different constitutive equations that have been developed for soft tissues. The number of constitutive equations to represent the contribution due to hyperelasticity is extensive due to the number of soft tissues and the experimental data dispersion. The paper has listed first, isotropic constitutive equations, and next anisotropic ones, and these were classed in different categories; those written with strain tensor components, those written in terms of the invariants, and those based on statistical modelling. Despite all the difficulties encountered in the modelling of the isotropic or anisotropic hyperelastic behaviour of soft tissue, these constitutive equations must be considered as only the basis of a more complex constitutive equation. 
Generalized equations should take into account other phenomena such as the activation of muscle [START_REF] Calvo | Passive non linear elastic behaviour of skeletal muscle: experimental results and model formulation[END_REF]158,188,189,250] or the viscoelasticity of the tissues [START_REF] Bischoff | A rheological network model for the continuum anisotropic and viscoelastic behavior of soft tissue[END_REF][START_REF] Haslach | Nonlinear viscoelastic, thermodynamically consistent, models for biological soft tissue[END_REF]105,147,208] or stress softening [154,195], for example. Nevertheless, the hyperelasticity representation should remain the starting point in a modelling program and should be described as well as possible before introducing other effects.
Fig. 1 Organisation of the constitutive equations in the paper
Table 2 Principal isotropic hyperelastic constitutive equations developed for soft tissues, where c_1, c_2, c_3 and c_4 are material parameters. (*) The model is known as the generalised neo-Hookean model. (**) As pointed out by [109], it is frequently mistakenly referred to as Delfino et al. [56]. Polynomial models: Raghavan and Vorp [210].
Table 3 Anisotropic constitutive equations written with strain tensor components, where A_{ijkl} (i, j, k, l = 1...3), b_i (i = 1...12), a_{ij}, b_{ij}, c_{ij} (i, j = 1...3) and c are material parameters. Generic Fung functions, W = (C/2)(exp Q - 1): Tong and Fung [252]; Rajagopal et al. [213]; Nash and Hunter [175].
For example, the function defined in [175] requires a limit for each component. As a consequence, the domain limits of the function are well established. The question is different for [249], as the function is written in terms of the sum of the components in a logarithmic form and the function can be undefined [104].
Table 4 Some constitutive equations based on truncations of the series developments, where c_i with i = 1...6 are material parameters. I_4 forms: Triantafyllidis and Abeyaratne [257], W_aniso = c_2 (I_4 - 1)^2; Peng et al. [199].
Table 5 Model based on a power development, where k_i, i = 1..6 are material parameters. Power developments: Ghaemi et al. [85], W_aniso = C(I_4^{k_1/2} - 1)^{k_2}.
Table 6 List of exponential constitutive equations, where c_1, c_2, c_3, c_4, c_5 and κ are material parameters.
Table 7 Other models written in invariants, where c_i with i = 1...7 are material parameters.
Table 8 Some distribution functions used in statistical approaches, where ε_0, b, σ, M and I are statistical parameters. Beta distribution: β(η, γ) = Γ(η)Γ(γ)/Γ(η + γ) with Γ(x) = \int_0^∞ t^{x-1} e^{-t} dt. Log-logistic distribution: f(ε) = (k/b)((ε - ε_0)/b)^{k-1} / [1 + ((ε - ε_0)/b)^k]^2 with ε = \sqrt{I_4} - 1. Gaussian distribution: f(φ) = (1/(σ\sqrt{2π})) exp(-(φ - M)^2/(2σ^2)). Normalized von Mises distribution: f(φ) = (4/I)\sqrt{b/(2π)} exp(b(cos 2φ + 1)) with I = (2/\sqrt{π}) \int_0^{\sqrt{2b}} exp(t^2) dt. Bingham distribution: f(r, A) = K(A)^{-1} exp(r^T A r), where A is a symmetric matrix, r a vector and K(A) a normalizing constant.
Table 9 Some fibre functions used in statistical approaches, where k_1, k_2, r_0, L, K, W_r, γ_m and m are material parameters. Energy functions: exponential form (Holzapfel et al.), w = (k_1/(2k_2))[exp(k_2 (I_4 - 1)^2) - 1]; logarithmic form, w = c[ε - ln(ε + 1)] for ε > 0 with ε = \sqrt{I_4} - 1; polynomial form; worm-like chain form.
Moreover, the parameters of these materials are often difficult to fit as they have no physical meaning. For example, the strain energy of [259] is discussed in [104] and is not convex; this can also be the case for Fung functions if the parameters are not well chosen [104]. The limitations on material parameters are discussed in [START_REF] Federico | An energetic approach to the analysis of anisotropic hyperelastic materials[END_REF] and [269] with respect to polyconvexity. In this way, developments have been made to ensure polyconvexity with a physical meaning of the material response [247]. Other conditions must also be respected for viable functions.
Acknowledgements The authors thank Prof. Roger Fosdick for his valuable comments. This work is supported by the French National Research Agency Program ANR-12-BS09-0008-01 SAMBA (Silicone Architectured Membranes for Biomedical Applications).
69,704
[ "14058", "172344" ]
[ "398528", "398528", "398528" ]
01758982
en
[ "shs" ]
2024/03/05 22:32:10
2018
https://shs.hal.science/halshs-01758982/file/asymptoticFinalHAL.pdf
Mirna Džamonja Marco Panza Asymptotic quasi-completeness and ZFC The axioms ZFC of first order set theory are one of the best and most widely accepted, if not perfect, foundations used in mathematics. Just as the axioms of first order Peano Arithmetic, ZFC axioms form a recursively enumerable list of axioms, and are, then, subject to Gödel's Incompleteness Theorems. Hence, if they are assumed to be consistent, they are necessarily incomplete. This can be witnessed by various concrete statements, including the celebrated Continuum Hypothesis CH. The independence results about the infinite cardinals are so abundant that it often appears that ZFC can basically prove very little about such cardinals. However, we put forward a thesis that ZFC is actually very powerful at some infinite cardinals, but not at all of them. We have to move away from the first few and to look at limits of uncountable cardinals, such as ℵ_ω. Specifically, we work with singular cardinals (which are necessarily limits) and we illustrate that at such cardinals there is a very serious limit to independence and that many statements which are known to be independent on regular cardinals become provable or refutable by ZFC at singulars. In a certain sense, which we explain, the behavior of the set-theoretic universe is asymptotically determined at singular cardinals by the behavior that the universe assumes at the smaller regular cardinals. Foundationally, ZFC provides an asymptotically univocal image of the universe of sets around the singular cardinals. We also give a philosophical view accounting for the relevance of these claims in a platonistic perspective which is different from traditional mathematical platonism. Introduction Singular cardinals have a fascinating history related to an infamous event in which one mathematician tried to discredit another and ended up being himself proved wrong. As Menachem Kojman states in his historical article on singular cardinals [START_REF] Kojman | Singular Cardinals: from Hausdorff's gaps to Shelah's pcf theory[END_REF], 'Singular cardinals appeared on the mathematical world stage two years before they were defined'. In a public lecture at the Third International Congress of Mathematics in 1904, Julius König claimed to have proved that the continuum could not be well-ordered, therefore showing that Cantor's Continuum Hypothesis does not make sense, since this would entail that 2^{ℵ_0}, the (putative) cardinal of the continuum, is not well defined. This was not very pleasant for Cantor, who was not alerted in advance and who was in the audience. However, shortly after, Felix Hausdorff found a mistake in König's proof: the argument rested on a lemma about cardinal exponentiation that fails exactly at the cardinals which would later be called singular. The thesis we put forward in this paper is supported by a number of mathematical findings. These results show that many statements which are known to be independent at regular cardinals become provable or refutable by ZFC at singulars, and so indicate that the behavior of the set-theoretic universe is asymptotically determined at singular cardinals by its features at the smaller regular cardinals. We could say, then, that even though ZFC is provably incomplete, asymptotically, at singular cardinals, it becomes quasi-complete since the possible features of universes of ZFC are limited in number, relative to the size of the singular in question. These facts invite a philosophical reflection. The paper is organized as follows: Mathematical results that illustrate the mentioned facts are expounded in sections §2 and §3. The former contains results that by now are classic in set theory and it is written in a self-contained style.
The latter contains results of contemporary research and is meant to reinforce the illustration offered by the former. This section is not written in a self-contained style, and it would be out of the scope of this paper to write it in this way. Section §2 also contains a historical perspective. Finally, some philosophical remarks are made in §4. Modern history of the singular cardinals One of the most famous (or infamous, depending on the point of view) problems in set theory is that of proving or refuting the Continuum Hypothesis (CH) and its generalisation to all infinite cardinals (GCH). Cantor recursively defined two hierarchies of infinite cardinals, the ℵs and the ℶs, the first based on the successor operation and the second on the power set operation: ℵ_0 = ℶ_0 = ω, ℵ_{α+1} = ℵ_α^+, ℶ_{α+1} = 2^{ℶ_α}, and for δ a non-zero limit ordinal ℵ_δ = sup_{β<δ} ℵ_β, ℶ_δ = sup_{β<δ} ℶ_β (here we are using the notation 'sup(A)' for a set A of cardinals to denote the first cardinal greater or equal to all cardinals in A). A simple way to state GCH is to claim that these two hierarchies are the same: ℵ_α = ℶ_α, for any α. Another way, merely involving the first hierarchy, is to claim that for every α we have 2^{ℵ_α} = ℵ_α^+. CH is the specific instance ℵ_1 = ℶ_1, or 2^{ℵ_0} = ℵ_1. Insofar as ℶ_1 = |R|, CH can be reformulated as the claim that any infinite subset of the set of the real numbers admits a bijection either with the set of natural numbers or with the set of real numbers. It is well known that, frustratingly, Cantor spent at least thirty years trying to prove CH. Hilbert chose the problem of proving or disproving GCH as the first item on his list of problems presented to the International Congress of Mathematics in 1900. In 1963 ([START_REF] Cohen | The independence of the continuum hypothesis[END_REF]), Paul Cohen proved that the negation of CH is relatively consistent with ZFC. This result, jointly with that proved by Kurt Gödel in 1940 ([20])-that GCH is also relatively consistent with ZFC-entails that neither CH nor GCH is provable or refutable from the axioms of ZFC. Cohen's result came many years after Gödel's incompleteness theorems ([START_REF] Gödel | Über formal unentscheidbare Säztze der Principia Mathematica und verwandter Systeme, I[END_REF]), which imply that there is a sentence in the language of set theory whose truth is not decidable by ZFC. But the enormous surprise was that there are undecidable sentences which are not specifically constructed as a Gödel sentence; in particular, there is one as simply stated and well known as CH. There are many mathematical and philosophical issues connected to this outcome. The one which interests us here concerns the consequences it has for ZFC's models: it entails that if ZFC is consistent at all, then it admits a huge variety of different models, where CH and GCH are either true or false and, more generally, the power set class-function (namely F : Reg → Reg; F(κ) = 2^κ, where Reg is the class of regular cardinals) behaves in almost arbitrary ways (see below on the results of William Easton).
This means that ZFC's axioms leave the von Neumann universe of sets V - which is recursively defined by appealing to the power set operation (V = ⋃_α V_α, with α an ordinal and V_α = ⋃_{β<α} P(V_β)) - hugely indeterminate: they are compatible, for example, both with the identification of V with Gödel's constructible universe L (which is what the axiom of constructibility 'V = L' asserts, by, then, deciding GCH in the positive), and with the admission that in V the values of 2^κ are as large as desired, which makes V hugely greater than L. The question is whether this indetermination of the size of V_α versus the size of L_α can be somehow limited for some sort of cardinals, i.e. for some values of α. The results we mention below show that this is so for singular cardinals, and even, as we said above, that V is asymptotically determined at singular cardinals by its features at the smaller regular cardinals. To explain this better, we begin with a result by Easton ([16]), who, shortly after Cohen's result and building on earlier results of Robert Solovay ([45]), proved that for regular cardinals the indetermination of the values of the power set function is even stronger than Cohen's result suggests: for any non-decreasing class-function F : Reg → Reg defined in an arbitrary model of ZFC so that cf(F(κ)) > κ for all κ, there is an extension to another model that preserves both cardinals and cofinalities and in which 2^κ = F(κ), for any regular cardinal κ. This implies that in ZFC no statement about the power set (class-)function on the regular cardinals other than 'κ ≤ λ =⇒ 2^κ ≤ 2^λ' and 'cf(κ) < cf(2^κ)' can be proved. It is important to notice that singular cardinals are excluded from Easton's result. Just after the result was obtained, it was felt that this restriction was due to a technical problem which could be overcome in the future. But what became clear later is that this restriction is due to deep differences between regular and singular cardinals. Indeed, many results attesting to this soon followed. In particular, what these results eventually showed is that the power set class-function behaves much better at singular cardinals than it does at regular ones. While the above quoted results by Gödel, Cohen and Easton imply that the value of the power set function can be decided in ZFC for neither regular nor singular cardinals, as not even 2^{ℵ_0} has an upper bound there, it turns out that one can do the next-best thing and show in ZFC that the value of 2^κ for any singular κ is conditioned on the values of 2^λ for the regular λ less than κ. This entails that the size of V_{κ+1} is, in turn, conditioned by that of V_λ for λ ≤ κ. Already by 1965 and 1973 respectively, Lev Bukovský ([START_REF] Bukovský | The continuum problem and powers of alephs[END_REF]) and Stephen H. Hechler ([START_REF] Stephen | Powers of singular cardinals and a strong form of the negation of the generalized continuum hypothesis[END_REF]) had proved, for example, that in ZFC if κ is singular and 2^λ is eventually constant for λ < κ, then 2^κ is equal to this constant. Therefore the value of 2^κ is entirely determined by the values of the power set function below κ. An infinite cardinal λ is said to be strong limit if for any θ < λ we have 2^θ < λ (in particular, it follows that such a cardinal is limit). Note that strong limit cardinals, and in particular, strong limit singular cardinals, exist in any universe of set theory: an example is given by ℶ_ω.
Solovay ([START_REF] Solovay | Strongly compact cardinals and the GCH[END_REF]) proved that for any κ which is larger or equal to a strongly compact cardinal (a large cardinal λ characterised by having a certain algebraic property that is not essential to explain here, namely that any λ-complete filter can be extended to a λ-complete ultrafilter), we have 2^κ = κ^+. In other words, GCH holds above a strongly compact cardinal. This result, of course, is only interesting if there exists a strongly compact cardinal. In fact this result was obtained as part of an investigation started earlier by Dana Scott [START_REF] Scott | Measurable cardinals and constructible sets[END_REF], who investigated the question of what kind of cardinal can be the first cardinal failing GCH, that is, what properties a cardinal κ must have in order that 2^κ > κ^+, but 2^θ = θ^+ for all infinite cardinals θ < κ. What Solovay's result shows is that such a cardinal cannot be strongly compact. This result led Solovay to advance a new hypothesis, according to which, for singular cardinals, his own result does not depend on the existence of a strongly compact cardinal. The heart of it is the following implication, called the 'Singular Cardinal Hypothesis': 2^{cf(κ)} < κ =⇒ κ^{cf(κ)} = κ^+, (SCH) for any cardinal κ. Indeed, by definition, the antecedent implies that κ is a singular cardinal, so that SCH states that κ^{cf(κ)} = κ^+ for any singular cardinal κ for which this is not already ruled out by 2^{cf(κ)} being too big. On the other hand, if κ is a strong limit cardinal, then it follows from the elementary results mentioned in the previous section that κ^{cf(κ)} = 2^κ (see [START_REF] Jech | Set Theory[END_REF], pg. 55), so that the consequent reduces to '2^κ = κ^+'. Hence, SCH implies that the power set operation is entirely determined on the singular strong limit cardinals, since GCH holds for any such cardinal. In a famous paper appearing in 1975 ([START_REF] Silver | On the singular cardinals problem[END_REF]), Jack Silver proved that if κ is a singular cardinal of uncountable cofinality, then κ cannot be the first cardinal to fail GCH. A celebrated and unexpected counterpart of this result was proved by Menachem Magidor shortly afterwards ([START_REF] Magidor | On the singular cardinals problem[END_REF]). It asserts that in the presence of some rather large cardinals, it is consistent with ZFC to assume that ℵ_ω is the first cardinal that fails GCH. This, of course, implies that the condition that κ has uncountable cofinality is a necessary condition for Silver's result to hold. But it also implies that SCH fails and that the power set function at the strong limit singular cardinals does not always behave in the easiest possible way. Another celebrated theorem proved shortly after the work of Silver is Jensen's Covering Lemma ([START_REF] Devlin | Marginalia to a theorem of Silver[END_REF]), from which it follows that if there are no sufficiently large cardinals in the universe, then SCH holds. To be precise, this lemma implies that SCH holds if 0^♯ does not exist. (It is probably not necessary here to define 0^♯, but let us say that it is a large cardinal whose existence would make V be larger than L, whereas its nonexistence would make V be closely approximated by L.)
Further history of the problem up to the late 1980s is quite complex and involves notions that are out of the scope of ZFC and, a fortiori, out of the scope of our paper. Details can be found, for example, in the historical introduction to [START_REF] Shelah | Cardinal Arithmetic, volume 29 of Oxford Logic Guides[END_REF]. Insofar as our interest here is to focus on the results that can be proved in ZFC, we confine ourselves to mentioning a surprising result proved by Fred Galvin and András Hajnal in 1975 ([17]). By moving the emphasis from GCH to the power set function as such, they were the first to identify a bound in ZFC for a value of this function, namely for the value it would take on a strong limit singular cardinal with uncountable cofinality. Let κ be such a cardinal; then what Galvin and Hajnal proved is that 2^κ < ℵ_γ, where γ = (2^{|α|})^+ for that α for which κ = ℵ_α. As the comparison with the two results of Silver and Magidor mentioned above makes clear, singular cardinals with countable and uncountable cofinality behave quite differently. There were no reasons in principle, then, to think that Galvin and Hajnal's result would extend to singular cardinals with countable cofinality, and the state of matters stood still for many years. Fast forward, and we arrive at a crowning moment in our story, namely the proof, by Saharon Shelah in the late 1980s, of the following unexpected theorem, put forward in [START_REF] Shelah | Cardinal Arithmetic, volume 29 of Oxford Logic Guides[END_REF]: [∀n < ω, 2^{ℵ_n} < ℵ_ω] =⇒ 2^{ℵ_ω} < ℵ_{ω_4}. (1) Shelah's theorem is, in fact, more general than the instance we quoted, which nevertheless perfectly illustrates the point. If ℵ_ω is a strong limit, then the value of the power set function on it is bounded. In every model of ZFC, Shelah's theorem extends to the countable cofinality the result of Galvin and Hajnal, obtains a bound in terms of just the ℵ-function (unlike the Galvin-Hajnal theorem, which uses the power set function), and shows that in spite of Magidor's result (which shows that SCH can fail at singular strong limit cardinals of countable cofinality), even at such cardinals a weak form of SCH holds, namely the value of the power set function is bounded. Shelah's theorem is proved by discovering totally new operations on cardinals, called 'pcf' and 'pp', which are meaningful for singular cardinals and whose values are very difficult to change by forcing. In many instances it is not even known if they are changeable to any significant extent. It would be much too complex for us to describe these operations here, but the point made is that even though ZFC axioms are quite indecisive about the power set operation in general, they are quite decisive about it at the singular cardinals, and this is because they prove deep combinatorial facts about the operations pcf and pp. The field of research concerned with the operations pcf and pp is called the 'pcf theory'. Some contemporary results The foregoing results have been known to mathematicians for a while but they do not seem to have influenced the literature in philosophy very much. The purpose of this article is to suggest that they have some interest for our philosophical views about ZFC and, more generally, set theory. Before coming to it, however, let us make a short detour into the realm of some more recent results which further illustrate the point.
These results, to which this section is devoted, deal with mathematical concepts which are rather advanced; it would distract from the point to present them in a self-contained manner. Those readers who are not at ease with these concepts can safely skip the present section, taking it on trust that contemporary research continues to prove that singular cardinals have quite peculiar features, and that the mathematical universe at such cardinals exhibits much less indetermination than at the regular cardinals. This is the view that we shall discuss in §4. Let us begin by observing that the emphasis of the recent research on singular cardinals has moved from cardinal arithmetic to more combinatorial questions. We could say that what recent research on singular cardinals is concerned with is combinatorial SCH: rather than just looking at the value of 2^κ for a certain cardinal κ, one considers the "combinatorics" of κ, namely the interplay of various appropriate properties ϕ(κ) of it. An example of such a property might be the existence of a certain object of size κ, such as a graph (see below on graphs) on κ with certain properties, or the existence of a topological or a measure-theoretic object of size κ, in the more complex cases. One may think of κ as a parameter here. Then the relevant instance of combinatorial SCH would say that the property ϕ(κ) depends only on the fact that ϕ(θ) holds at all θ < κ. One can ask, more generally, what can be proved in ZFC about the relevant property at κ, knowing that the property holds at all θ < κ. Concerning the former aspect of such a question, that concerned with what can be proved in ZFC, a celebrated singular compactness theorem has been proved by Shelah in [START_REF] Shelah | A compactness theorem for singular cardinals, free algebras, Whitehead problem and transversals[END_REF]. Shelah's book [START_REF] Shelah | Cardinal Arithmetic, volume 29 of Oxford Logic Guides[END_REF] presents, moreover, many applications of pcf theory to deal with this aspect of the question. The latter aspect of the question-namely the forcing counterparts of the former-appeared only later, due to the enormous difficulty of doing even the simplest forcing at a singular cardinal and the necessity (by the Covering Lemma) of using large cardinals for performing this task. One of the early examples is [START_REF] Džamonja | Universal graphs at the successor of a singular cardinal[END_REF]. To illustrate this sort of research, let us concentrate on one sample combinatorial problem, which has to do with one of the simplest but most useful notions in mathematics, that of a graph. A graph is a convenient way to represent a binary relation. Namely, a graph (V, E) consists of a set V of vertices and a set E ⊆ V × V of edges. Both finite and infinite graphs are frequently studied in mathematics and they are also used in everyday life, for example to represent communication networks. Of particular interest in the theory of graphs is the situation when one graph G is subsumed by another one H, in the sense that one can find a copy of G inside of H. This is expressed by saying that there is an embedding from G to H. Mathematically speaking, this is defined as follows. Definition 1 Suppose that G = (V_G, E_G) and H = (V_H, E_H) are graphs and f : G → H is a function. We say that f is a graph homomorphism, or a homomorphic embedding, if f preserves the edge relation (so a E_G b implies f(a) E_H f(b) for all a, b ∈ V_G) but is not necessarily 1-1. If f is furthermore 1-1, we say that f is a weak embedding. If, in addition, f preserves the non-edge relation (so a E_G b holds iff f(a) E_H f(b) holds), we say that f is a strong embedding.
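Definition 1 is easy to test mechanically on finite graphs. The following Python sketch classifies a vertex map between two small graphs as a homomorphism, a weak embedding or a strong embedding, and searches by brute force for the strongest kind available; the two example graphs at the end are arbitrary illustrations and do not come from the cited papers.

```python
from itertools import product

def is_homomorphism(f, G_edges, H_edges):
    """f maps vertices of G to vertices of H; every edge of G must go to an edge of H."""
    return all((f[a], f[b]) in H_edges or (f[b], f[a]) in H_edges for (a, b) in G_edges)

def embedding_type(f, G_vertices, G_edges, H_edges):
    """Return 'homomorphism', 'weak' (also 1-1) or 'strong' (also reflects non-edges), or None."""
    if not is_homomorphism(f, G_edges, H_edges):
        return None
    if len(set(f.values())) < len(G_vertices):
        return "homomorphism"
    for a, b in ((a, b) for a in G_vertices for b in G_vertices if a < b):
        g_edge = (a, b) in G_edges or (b, a) in G_edges
        h_edge = (f[a], f[b]) in H_edges or (f[b], f[a]) in H_edges
        if g_edge != h_edge:
            return "weak"
    return "strong"

def best_embedding(G_vertices, G_edges, H_vertices, H_edges):
    """Search all maps G -> H and report the strongest kind of embedding found."""
    rank = {"homomorphism": 0, "weak": 1, "strong": 2}
    best = None
    for images in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, images))
        kind = embedding_type(f, G_vertices, G_edges, H_edges)
        if kind and (best is None or rank[kind] > rank[best]):
            best = kind
    return best

if __name__ == "__main__":
    # G: a path on 3 vertices; H: a 4-cycle (illustrative only)
    G_v, G_e = [0, 1, 2], {(0, 1), (1, 2)}
    H_v, H_e = [0, 1, 2, 3], {(0, 1), (1, 2), (2, 3), (3, 0)}
    print(best_embedding(G_v, G_e, H_v, H_e))  # expected: 'strong'
```

The three answers the search can return mirror exactly the three notions of Definition 1, which is what makes the universality questions below sensitive to which kind of embedding is required.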
Graph homomorphisms are of large interest in the theory of graphs and theoretical computer science (see for example [START_REF] Hell | Graphs and homomorphisms[END_REF] for a recent state-of-the-art book on graph homomorphisms). The decision problem associated to graph homomorphism, that is, deciding if there is a graph homomorphism from one finite graph into another, is NP-complete (see Chapter 5 of [START_REF] Hell | Graphs and homomorphisms[END_REF]), which makes the notion also interesting in computer science. Of particular interest in applications is the existence of a universal graph. If we are given a class G of graphs, we say that a certain graph G* is universal for G if every graph from G admits a homomorphic embedding into G*. Of course, variants of this relation can be obtained by replacing homomorphic embedding with weak or strong embedding, as defined in Definition 1. The combinatorial question that we shall survey is that of the existence of universal graphs of a fixed size κ in various contexts. To begin with, ZFC proves that there is a graph G* of size ℵ_0, unique up to isomorphism with the following property: for every finite graph G and every vertex v of G, every strong embedding of G\{v} into G* can be extended to a strong embedding of G into G*. This graph is known as the Rado graph (or, also, the random or Erdös-Rényi graph). As a consequence, G* strongly embeds all countable graphs. This graph was discovered independently in several contexts, starting from the work of Ackermann in [START_REF] Ackermann | Die Widerspruchsfreiheit der allgemeinen Mengenlehre[END_REF], but its universality properties were proved by Rado in [START_REF] Rado | Universal graphs and universal functions[END_REF]. Under the assumption of GCH, from the existence of saturated and special models in first-order model theory (see [START_REF] Chung | Model Theory[END_REF]), it follows that a universal graph exists at every infinite cardinal κ. In particular, the assumption that λ < κ =⇒ κ^λ = κ entails that there is a saturated, and consequently universal, graph of size κ. When we move away from GCH, the existence of universal graphs becomes a rather difficult problem. Shelah mentioned in [START_REF] Shelah | On universal graphs without instances of CH[END_REF] a result of his (for the proof see [START_REF] Kojman | Nonexistence of universal orders in many cardinals[END_REF] or [START_REF] Džamonja | The singular world of singular cardinals[END_REF]), namely that adding ℵ_2 Cohen reals to a model of CH destroys any hope of having a universal graph of size ℵ_1. This does not only mean that there is no universal graph in this model, but also that, by defining the universality number of a family G of graphs as the smallest size of a subfamily F of G such that every element of G embeds into a member of F, we have that in the above model the universality number of the family of graphs of size ℵ_1 is the largest possible, namely 2^{ℵ_1}. More generally, one can state the following theorem: Theorem 2 [Shelah, see [START_REF] Kojman | Nonexistence of universal orders in many cardinals[END_REF] or [START_REF] Džamonja | The singular world of singular cardinals[END_REF]] Suppose that θ < κ =⇒ κ^θ = κ and let P be the forcing to add λ many Cohen subsets to κ (with cf(λ) ≥ κ^{++} and λ ≥ 2^{κ^+}).
Then the universality number for graphs on κ^+ in the extension by P is λ. Using a standard argument about Easton forcing, we can see that it is equally easy to get negative universality results for graphs at a class of regular cardinals: Theorem 3 Suppose that the ground model V satisfies GCH and C is a class of regular cardinals in V, while F is a non-decreasing function on C satisfying that for each κ ∈ C we have cf(F(κ)) ≥ κ^{++}. Let P be Easton's forcing to add F(κ) Cohen subsets to κ for each κ ∈ C. Then for each κ ∈ C the universality number for graphs on κ^+ in the extension by P is F(κ). The proofs of these results are quite easy. In [START_REF] Shelah | On universal graphs without instances of CH[END_REF], Shelah emphasizes this by claiming that "The consistency of the non-existence of a universal graph of power ℵ_1 is trivial, since, it is enough to add ℵ_2 generic Cohen reals". He focuses, indeed, on a much more complex proof, that of the consistency of the existence of a universal graph at ℵ_1 with the negation of CH. He obtained such a proof in [START_REF] Shelah | Universal graphs without instances of CH: revisited[END_REF], while Mekler obtained a different proof of the same fact in [START_REF] Mekler | Universal structures in power ℵ 1[END_REF]. Insofar as ℵ_0 is regular, ℵ_1 is the successor of a regular cardinal. Other successors of regular cardinals behave in a similar way, although neither Mekler's nor Shelah's proof seems to carry over from ℵ_1 to larger successors of regulars. A quite different proof, applicable to larger successors of regulars but proving a somewhat weaker statement, was obtained by Džamonja and Shelah in [START_REF] Džamonja | On the existence of universal models[END_REF]: they proved that it is relatively consistent with ZFC that the universality number of graphs on κ^+ for an arbitrary regular κ is equal to κ^{++} while 2^κ is as large as desired. All these results only concern regular cardinals and their successors, and leave open the question for singular cardinals and their successors. Positive results analogous to the one just mentioned by Džamonja and Shelah were obtained by Džamonja and Shelah, again, in [START_REF] Džamonja | Universal graphs at the successor of a singular cardinal[END_REF], for the case where κ is a singular cardinal of countable cofinality, and by Cummings, Džamonja, Magidor, Morgan and Shelah in [START_REF] Cummings | A framework for forcing constructions at successors of singular cardinals[END_REF], for the case where κ is a singular cardinal of arbitrary cofinality. The most general of their results can be stated as follows: Theorem 4 [Cummings et al. [START_REF] Cummings | A framework for forcing constructions at successors of singular cardinals[END_REF]] If κ is a supercompact cardinal, λ < κ is a regular cardinal and Θ is a cardinal with cf(Θ) ≥ κ^{++} and κ^{+3} ≤ Θ, then there is a cardinal-preserving forcing extension in which cf(κ) = λ, 2^κ = 2^{κ^+} = Θ and in which there is a universal family of graphs on κ^+ of size κ^{++}. Further recent results of Shelah (private communication) indicate that the universality number in the above model should be exactly κ^{++}. These results concern successors of singular cardinals, which themselves are, of course, regular. The situation for singular cardinals themselves is different; in particular, no forcing notion can operate on them.
We do not have any general results about graphs on such cardinals, but here is a result showing that in specific classes of graphs, the existence of a universal element at singulars is simply ruled out by the axioms of ZF (not even the full ZFC is needed): Theorem 5 [Džamonja [START_REF] Džamonja | ZFC combinatorics at singular cardinals[END_REF]] (ZF) Suppose that κ is a cardinal of cofinality ω. Then, for any λ ≥ κ, there is no universal element in the class of graphs of size λ that omit a clique of size κ, under graph homomorphisms, or under the weak or the strong embeddings. This survey of the graph universality problem shows in a specific example the phenomenon of the change in the combinatorial behaviour between the three kinds of cardinals: successors of regulars, successors of singulars and, finally, singular cardinals. At successors of regulars, combinatorics is very independent of ZFC, so that simple forcing, without use of large cardinals, allows us to move into universes of set theory which have very distinct behaviours. At the successor of a singular cardinal, we can move away from L-like universes only if we use large cardinals (as we know by Jensen's Covering Lemma, mentioned above), and this shows up in combinatorics in the necessity to use both large cardinals and forcing to obtain independence results. This independence is in fact limited (as in the example of Shelah's pcf theorem quoted above). Finally, at singular cardinals, combinatorics tends to be completely determined by ZFC, or even by ZF, as in the example of Theorem 5. In connection with this theorem, it is interesting to note that in the absence of the Axiom of Choice, it is possible that every uncountable cardinal is singular of countable cofinality. To be exact, Gitik proved in [START_REF] Gitik | All uncountable cardinals can be singular[END_REF] that from the consistency of ZFC and arbitrarily large strongly compact cardinals, it is possible to construct a model of ZF in which all cardinals have countable cofinality. Therefore, if one is happy to work with ZF only, then one has the choice to move to a model in which only singular cardinals exist and they only have countable cofinality. In such a model, combinatorics becomes easy and determined by the axioms, at least in the context of the questions that have been studied, such as the graph universality problem. Philosophical Remarks Mathematical platonism is often presented as the thesis that mathematical objects exist independently of any sort of human (cognitive, and/or epistemic) activity, and it is taken to work harmoniously with a realistic semantic view, according to which all we can say in mathematics (i.e. by using a mathematical language) is either true or false, to the effect that all that has been (unquestionably) proved is true, but not all that is true has been (unquestionably) proved or can be proved (because of various forms of incompleteness of most mathematical theories).
Both claims are, however, quite difficult to support and are, in fact, very often supported only by the convenience of their consequences, or, better, by the convenient simplicity of the account of mathematics they suggest, and because they provide a simple explanation of the feeling most mathematicians (possibly all) have that something external to them resists their intuitions, ideas, programs, and conjectures, to the effect that all that they can frame by their thoughts or their imagination must have, as it were, an external, independent approval, before having its place among mathematical achievements. Hence, an interesting philosophical question is whether there can be weaker claims that have similarly convenient consequences and that can be more easily positively supported, either by evidence coming from mathematical practice, or by more satisfactory metaphysical assumptions, or, better, by both. It is our opinion that such claims can reasonably be formulated. In short, they are the following: i ) there are ways for us to have epistemic de re access to mathematical objects; ii ) we are able to prove truths about them, though others are still not proved or are unprovable within our most convenient theories (which are supposed to deal with these objects). Claim (i ) means that there are ways for us to fix intellectual contents which are suitably conceived as individuals that mathematics is dealing with, in such a way that we can afterwards (that is, after having fixed them) ascribe properties and relations to these individuals. Claim (ii ) means that some of our ascriptions of property and relations to these individuals result in truths, in the sense that they somehow comply with the content we have afterwards fixed, and, among them, some can be, and in many cases have been, provably established, though others are still not so or cannot be so within the relevant theories. The phrase 'de re' in claim (i ) belongs to the philosophical lexicon. It is currently used in opposition to 'de dicto' to point out a distinction concerning propositional attitudes, typically belief (or knowledge). Believing that something is P can mean believing either that there is at least one thing that is P or that some specific thing is P . In the former case the belief is de dicto; in the latter de re. If the relevant thing is t, a suitable way to unambiguously describe the second belief is saying that of t, it is believed that it is P . This makes clear that the subject of a de re propositional attitude is to be identified independently from ascribing to it what the relevant proposition ascribes to it. Hence, its being P cannot be part of what makes it what it is. This is not enough, however, since for the attitude to be de re, the identification has to be stable under its possible variations. If Mirna believes that t is the only thing that is Q, her believing of t that it is P is the same as her believing it of the Q. But Marco can believe that the only thing that is Q is s (distinct from t). So his believing of the Q that it is P is quite distinct from Mirna's belief that it is so. Hence neither beliefs are de re. This makes clear that the identification of the subject of a de re attitude is to be independent of the attitude itself or, even, of any sort of attitude (since different attitudes can compose each other's). This is why the most straightforward examples of de re attitudes concern empirical objects ostensively, or pre-conceptually identified in one way or another. 
This has not prevented philosophers from appealing to the de re vs. de dicto distinction in relation to mathematics. In particular, a rich discussion has concerned the possibility of using appropriate sorts of numerals for directly referring to natural numbers while having a de re attitude towards them. Diana Ackerman has considered that "the existence of [natural] numbers is a necessary condition for anyone's having de re propositional attitudes toward them" ([1], p. 145). Granted their existence, Tyler Burge has wondered whether we can have "a striking relation to [. . . ][a natural number] that goes beyond merely conceiving of it or forming a concept that represents it", and answered that this is so for small such numbers, since "the capacity to represent [. . . ][them] is associated with a perceptual capacity for immediate perceptual application in counting" ( [START_REF] Burge | Belief De Re[END_REF], pp. 70-71). Saul Kripke has gone far beyond this, by suggesting a way to conceive natural numbers that makes decimal numerals apt to "reveal[. . . ] their structure" ([39], p. 164; [START_REF] Kripke | Umpupished transcription of Kripke's lectures[END_REF]). For him, natural numbers smaller than 10 are the classes of all n-uples (n = 0, 1, . . . , 9), while those greater than 9 are nothing but finite sequences of those smaller than 10. This makes decimal numerals, or, at least, short enough ones, work as "buckstoppers" (i.e. they are such that it would be nonsensical asking which number is that denoted by one of them, in opposition to terms like 'the smallest perfect number', denoting the natural number whose buckstopper is 'six'), and so allow direct reference to them. By dismissing such a compositional conception of natural numbers, Jan Heylen ( [START_REF] Heylen | The epistemic significance of numerals Synthese[END_REF] and Stewart Shapiro ( [START_REF] Shapiro | Computing with numbers and other non-syntactic things: De re knowledge of abstract[END_REF]) have respectively submitted that Peano numerals (the numerals of the form '0 ... ', written using only the primitive symbols for zero and the successor relation in the language of Peano Arithmetic) and unary numerals (mere sequence of n strokes used to denote the positive natural number n) provide canonical notations allowing de re knowledge of natural numbers. Finally, Jody Azzouni ( [START_REF] Azzouni | Empty de re attitudes about numbers[END_REF]) has argued that the existence of natural numbers is not required for having "de re thought" about them, since such a thought can be "empty". Our use of 'de re' in claim (i ) differs from all these uses in that the de re vs. de dicto distinction has a much more fundamental application in our account of mathematics. Far from merely concerning our way of denoting natural numbers so to identify them in such a way to make de re propositional attitudes towards them possible, granted their existence, or our de re thought about them empty, granted their nonexistence, it concerns our way of fixing mathematical objects so as to confer existence to them. In our view these objects are, indeed, nothing but contents of (intentional) though, whose existence just depends on the way they are fixed. Here is how we see the matter. There are many ways of fixing intellectual contents, which, in appropriate contexts, are (or can be) suitably conceived as individuals. A liberal jargon can refer to these contents as abstract objects. 
If this jargon is adopted, the claim that mathematics deals with abstract objects becomes quite trivial, and can neither be taken as distinctive of a platonist attitude, nor can provide any characterisation of mathematics among other intellectual enterprises. In a much more restrictive jargon, for something (i.e. the putative reference of a term or description) to count as an object, it has to exist. Under this jargon, the claim that mathematics deals with abstract objects becomes much more demanding, above all if it is either required that these objects are self-standing or mind-independent, or if it is supposed that nothing can acquire existence because of any sort of intellectual (intentional) act. The problem, then, with this claim is that it becomes quite difficult to understand what 'to exist' can mean if referred to abstract contents. What we suggest is reserving the term 'abstract object' to intellectual contents suitably conceived as individuals and so fixed, in an appropriate context, so as to admit de re epistemic access, this being conceived, in turn, as the apprehension of them making de re attitudes towards them possible. We submit that, once this is granted, the claim that mathematics deals with abstract objects becomes both strong enough and distinctive, so as to provide the ground for an appropriate account of mathematics. Mathematics traditionally admits different modalities for fixing intellectual contents. The French philosopher Jean-Michel Salanskis ([START_REF] Salanskis | L'heméneutique formelle[END_REF], [START_REF] Salanskis | Philosphie des mathématiques[END_REF]) suggested distinguishing two basic ways of doing it: constructively and correlatively. The former way has a more limited application, but can be taken, in a sense, as more fundamental. Peano numerals can, for instance, be quite simply fixed constructively by stating that: (i) the sign '0' is a Peano numeral; (ii) if the sign 'σ' is such a numeral, then the sign 'σ′' is such a numeral, too; (iii) nothing else is such a numeral. Similarly, unary numerals can be constructively fixed by stating that: (i) the sign '|' is such a unary numeral; (ii) if the sign 'σ' is such a numeral, then the sign 'σ|' is such a numeral, too; (iii) nothing else is such a numeral. These are numerals, not numbers, however. And it is clearly unsuitable to use the same pattern to define natural numbers. Suppose it were stated that: (i) 0 is a natural number; (ii) if σ is such a number, then σ′ is such a number; (iii) nothing else is such a number. It would not yet have been established that there is no natural number n such that 0 = n′, or n = n′. To warrant that this is so, it would still be necessary to impose appropriate conditions on the successor function (−)′, which cannot be done constructively. To overcome the difficulty, one could have recourse to a trick: stating that the natural numbers are the items that Peano numerals denote, or positive such numbers the items that unary numerals denote, in such a way that distinct such numerals denote distinct such numbers. This would make Peano numerals directly display the structure of natural numbers, and unary ones that of positive natural numbers, so providing a canonical notation for these numbers allowing direct reference to them, in agreement with Heylen's and Shapiro's proposals. But this would be dependent on the informal notion of denotation.
Supposing that we have the necessary resources for handling this notion without ambiguity, this would allow us to fix natural numbers almost constructively. Once this is done, one could look at these numbers as such, and try to disclose properties they have and relations they bear to each other's. Making it in agreement with mathematical requirements of rigor asks both for further definitions and the fixation of inferential constraints or rules, typically of an appropriate codified, if not formal, language. What is relevant for illustrating our point, is, however, not this, but rather that that we can do both things in such a way to keep the reference steady to the contents previously fixed as just said: it is on them that we define the relevant properties and relations; and it is to speak of them that we establish the appropriate inferential constraints, and fashion (or adopt) the appropriate language, which allows us to say of them, or some of them, that they are so and so. This should give a rough idea of the intellectual phenomenon we want to focus on by speaking of de re epistemic access. More importantly, we could observe that once appropriate intellectual contents are fixed constructively, one can also try to capture them correlatively, that is, through an axiomatic implicit definition. This can be done somehow informally, or by immersing the definition within a formal system affording both the appropriate language and the appropriate inference rules (or, possibly, allowing to state these rules). In the case of natural numbers, we can, for instance, define them, through Peano axioms, within an appropriate system of predicate logic, and we could conceive of doing that with the purpose of characterizing correctively the same contents previously fixed constructively, so as that each of them provide the reference for a singular term appropriately introduced within the adopted language, and that they provide, when taken all together, the domain of variation and quantification of the individual variables involved in the definition. The predicate system adopted can be both first-or higher-, typically second-, order. There is, however, a well-known difference among the two cases: while Peano second-order arithmetic (or PA2, for short) is categoric (with respect to the subjacent set theory), by a modern reformulation of Dedekind's argument ( [START_REF] Dedekind | Was sind und was sollen die Zahlen? Braunschweig[END_REF]), Peano first-order arithmetic (or PA1, for short) is not, by an immediate consequence of the Löwenheim-Skolem's theorem ( [START_REF] Chung | Model Theory[END_REF]). This suggests that the verb 'to capture' is not to be understood in the same way in both cases. In the second-order case, it means that the relevant axioms determine a single structure (up to isomorphism), whose elements are intended to be the natural numbers, identified with the same objects previously fixed constructively. In the first-order case, it means that these axioms describe a class of non-isomorphic structures, all of which include individuals that behave, with respect to each other's, in the same way as the elements of this structure do, and that we can then intend, again, as the same objects previously fixed constructively. 
Both in the usual platonist tongue, and in our amended one, we could say that the limited expressive power of a first-order language makes it impossible to univocally describe the natural numbers by means of such a language: to do it, a second-order language is needed (and it suffices). Still, the verb 'to describe' should be understood differently in the two cases: while in the former case it implies that these numbers are self-standing objects that are there as such, independently of any intellectual achievement, in the latter case, it merely implies that these objects have been previously fixed. Hence, if no previous definition were admitted or considered, the verb 'to fix' should be used instead. What should, then, be said is that the limited expressive power of a first-order language makes it impossible to univocally fix the natural numbers by means of such a language. (Of course, the relativisation of the categoricity of PA2 to a given model of set-theory makes the usual platonist tongue appropriate only insofar as it is admitted that this model reflects the reality of the world of mathematical objects, which, in the presence of the strong non-categoricity of ZFC, requires a further act of faith. But on this, later.) The difference between the first- and the second-order case is not limited to this, however. Another relevant fact is that the language of PA1 is forced to include, together with the primitive constants used to designate the number zero and the successor relation, also two other primitive constants used to designate addition and multiplication. (Though versions of PA1 often adopt a language including a further primitive constant used to designate the order relation, this can be easily defined in terms of addition, thus reducing the number of axioms, albeit increasing the syntactical complexity of some proofs.) The only primitive constants which are required to be included in the language of PA2 are, instead, those used to designate the number zero and the successor relation: addition and multiplication (as well as order) can be recursively defined in terms of zero and successor. Hence, whereas Peano second-order axioms (implicitly) define a structure ⟨N, s⟩, Peano first-order axioms define uncountably many distinct structures ⟨N, s, +, ×⟩. It remains the fact, nevertheless, that the former structure is reflected within any one of the latter ones. Hence, if we admit that the axioms of PA2 capture or fix a domain of objects in an appropriate way, there is room to say that PA1 is studying these same objects by weaker logical means, by identifying them as the common elements of uncountably many possible structures ⟨N, s, +, ×⟩, though being unable to provide an univocal characterisation of them. This should clarify a little better what having epistemic de re access to mathematical objects could mean: one could argue that, once natural numbers are captured or fixed by the axioms of PA2 as the elements of ⟨N, s⟩, one can, again, look at them as such and try to disclose their properties and relations, so as to recover the same property or relation already ascribed to them, and possibly more. This can be done in different ways. By staying within PA2, one can, for example, besides proving the relevant theorems statable in its primitive language, also enrich this language by means of appropriate explicit definitions, so as to introduce appropriate constants - such as those designating addition, multiplication and order - to be used in the relevant proofs.
By leaving this theory, one can also try to describe them by using a weaker language, such as a first-order one, and be, then, forced to implicitly define addition and multiplication in them by appropriate axioms, though being unable to reach an univocal description. Other ways for studying these numbers are, of course, at hand. But, for our present purpose, we can confine ourselves to observe that in this latter case (as in many other ones), what we are doing may be appropriately accounted for by saying that, of these very numbers, we claim (by using the relevant first-order language) that they are so and so, or, better, that they form a structure N, , +, × . There is a quite natural objection one could address to these views. One could remember that, as any other second-order theory, PA2 is syntactically incomplete, to the effect that some statements that are either true or false in its unique model are neither provable nor disprovable in it, and there is, then, no way (or at least no mathematically appropriate way) for us to know whether they are true or false. Hence, one could argue, whatever a de re access to natural numbers, as defined by PA2, might be, it cannot be, properly speaking, an epistemic access, since there are not only things about these numbers that we do not know, but also things that we cannot know. We think this objection misplaced, since something analogous also occurs for genuine empirical objects. Take the chair you sit on (if any): there are many properties that we suppose (at least from a realist perspective) that it does or does not have, about which even our best theories and the information we are in place to obtain are insufficient to make a decision. This should not imply, it seems to us, that you have no knowledge of that chair. Of course, we could always change our theories or improve them if we considered that deciding some questions that we know to be undecidable within them is relevant. In the same way, if we were considering (or discovering) that there are some relevant statements about natural numbers which are provably undecidable in PA2, we could try to add axioms to the effect of provably deciding these statements. But allowing this possibility does not imply that we do not have de re epistemic access to these numbers as fixed by PA2, while working on them either within or outside it. All that is required for it is that there is a suitable sense in which we can say that on these numbers (as independently fixed) we can define some properties or relations within this theory, or of these numbers we can claim this or that outside the theory. Something similar to what happens with PA2 also happens with Frege arithmetic (or FA, for short), namely full (dyadic) second-order logic plus Hume's Principle (see Wright [START_REF] Wright | Frege's conception of numbers as objects[END_REF] or [START_REF] Boolos | Logic, logic and logic[END_REF], especially section II). The role played by natural numbers in the former case is played by the cardinal ones (understood as numbers of concepts) in the latter case. Once a particular cardinal number, typically the number of an (or the) empty concept is identified with 0, and an appropriate functional and injective relation is defined on these numbers so as to play the role of the successor relation, one can select the natural numbers among the cardinal ones, as being 0 together with all its successors. 
One can then capture or fix the natural numbers without appealing to addition and multiplication on them (nor, at least explicitly, to order). But now there is even more: these numbers can be captured or fixed by selecting them among items which are fixed, in turn, by appealing neither to a designated item like 0, nor to a certain dyadic relation, like the successor relation. Of the cardinal numbers, one could then say that some of them are the natural ones and can be studied as such with other appropriate means. It is easy to see that, as opposed to PA2, FA is not categoric (with respect to the subjacent set theory). This merely depends on the presence in some of its models of objects other than cardinal numbers, which can be absent from others. Still, FA interprets PA2 (this is generally known as Frege's theorem: see [START_REF] Richard | Frege's Theorem[END_REF], for example), and a result of relative categoricity can also be proved for FA ([47], prop. 14; [START_REF] Walsh | Relative categoricity and abstraction principles[END_REF], pp. 573-574): any two models of it restricted to the range of the number-of operator are isomorphic (with respect to the subjacent set theory). This might make one think that a form of categoricity (with respect to the subjacent set theory) is essential for allowing de re epistemic access to mathematical objects, i.e. that the only intellectual contents suitably conceived as mathematical objects that we can take to have de re epistemic access to are those fixed within a theory endowed with an appropriate form of categoricity (with respect to the subjacent set theory). This is not what we want to argue for, however. The previous example of the constructive definition of positive natural numbers should already make it clear. Another, quite simple example is the following: when we define the property of being a prime number within PA1, we do it on the natural numbers in such a way that we can say that on these numbers we define this property; if the definition is omitted, many usual theorems of PA1 can no longer be proved, of course, but this changes nothing for many other theorems still concerned with natural numbers as defined within this theory. These two examples are different from each other, and both different from that given by the access to natural numbers as defined within PA2. That provided by the definition of prime numbers within PA1 is only an example of de re epistemic access internal to a given theory, which reduces, in fact, to nothing more than the possibility of performing an explicit definition within this very theory. Claiming that we have de re epistemic access to natural numbers as defined constructively, or to these very numbers as defined correlatively within PA2, when we try to study them in a different context, is quite a different story. Still, there is something similar in the three cases, and this is just what we are interested in underlining here: it is a sort of (relative) stability of intellectual contents counting as mathematical objects, a stability that is made possible by the way these contents are fixed. We do not want to venture here into the (possibly hopeless) attempt at a classification of the forms of de re epistemic access.
Still, it seems clear to us that the phenomenon admits differences: both the stability depending on a constructive, or, more generally, informal definition, and that depending on a categorical implicit formal definition are extra-theoretic; the former is strictly intentional, as it were, the latter semantic; that depending on explicit definitions within non-categoric theories is merely syntactic (and, then, intra-theoretic) or restricted, at least, to an informally identified intended model. But the notion of independent existence of mathematical objects, which usual platonism is concerned with, is imprecise enough to make it possible to hope that all these different sorts of stability can provide an appropriate (metaphysically weak) replacement of it in many cases in which platonists use it in their accounts of mathematics. # # # But let that be as it may. The question here is different: what does all this have to do with ZFC, and the results mentioned in §§ 2 and 3, above? To be sure, it is clear not only that the categoricity of PA2 and FA is relative to the (inevitably arbitrary) choice of a model of set-theory, and, then, typically, of ZFC, but also that what has been said about PA1, PA2 and FA has a chance to be clear only if set-theory provides us with a clarification of the relevant crucial notions. This is, however, not enough for concluding that whatever philosophical position we could take on natural numbers, and other mathematical objects along the lines suggested above, is necessarily dependent on a prior account of ZFC. On the one hand, we do not need all the expressive and deductive power of ZFC, and a fortiori of whatsoever acceptable extension of it, to make the relevant notions clear. On the other hand, it is exactly the high un-categoricity of ZFC that invites us to reason with respect to finite numbers under the supposition that a model of the subjacent set-theory has been chosen, or, even, independently of the prior assumption that these numbers are sets. This suggests taking ZFC as an independent mathematical theory: one, by the way, powerful enough to be used (among other things) for studying from the outside the structures formed by the natural numbers, as well as by other mathematical objects, as objects we have de re epistemic access to independently of (the whole of) it. One could then ask whether some sort of de re epistemic access to pure sets (conceived as sui generis objects implicitly defined by ZFC) is possible or conceivable. The high un-categoricity of ZFC seems to suggest a negative answer, because it looks as if neither this theory as such, nor any suitable extension of it (with the only possible exception of ZFC + 'V = L', if this may be taken to be a suitable theory at all), can fix pure sets in any way appropriate for allowing de re (semantic) epistemic access to them. Upon further reflection, however, the case appears not to be as desperate as it seems at first glance, and the results mentioned above help us in seeing why this is so. To begin with, one might wonder whether, in analogy to what we have said concerning PA1 and PA2, ZFC could not be taken as studying pure sets as the objects previously fixed in a quasi-categorical way by ZF2, just like PA1 might be taken to do with the natural numbers as (captured or) fixed by PA2. The problem with this suggestion is that the relations between ZFC and ZF2 are not as illuminating as those between PA1 and PA2.
For example, if we fix a level of the cumulative hierarchy of sets, say V_α, then the second-order theory of V_α is simply the first-order theory of P(V_α) = V_{α+1}; hence passing to the second order does not seem to achieve much. However, it is true that by formulating ZF in full second-order logic, so as to get ZF2, one achieves what is known as quasi-categoricity. The proof is basically contained in Zermelo [START_REF] Zermelo | Über Grenzzahlen und Mengenbereiche[END_REF]. We can describe the situation in more detail, although informally, as follows. What Zermelo proved for ZF2 is that for any strongly inaccessible cardinal υ which is supposed to exist, there is a single model (up to isomorphism) of ZF2 provided by the structure ⟨V_υ, ∈⟩. It follows that all theories ZF2 + 'there are exactly n strongly inaccessible cardinals' (n = 0, 1, 2, . . .), or ZF2_n, for short, are fully categorical, so that ZF2 has, modulo isomorphism, as many (distinct) models as there are strongly inaccessible cardinals (recall that V_υ can only include strongly inaccessible cardinals smaller than υ). Of course, in any of these models any statement of the language of ZF2 is either true or false (according to Tarski's semantics). But, because of the proof-theoretical incompleteness of second-order logic, and, then, of any second-order theory, it is not necessarily decidable. As noted above, this is so also for PA2. The difference is that in these extensions of ZF2, the undecidable statements include some with a clear and unanimously recognized mathematical significance, namely CH and GCH. Now, while the problem of deciding GCH (for cardinals greater than 2^ℵ_0) can be seen as intrinsically internal to set theory (both to ZFC and ZF2), this is not so for CH. For, if we admit that there are (necessarily non-constructive) ways to fix real numbers, so as to allow us to have de re epistemic access to them (for example within PA2, as originally suggested by Hilbert and Bernays ( [START_REF] Hilbert | Grundlagen der Mathematik[END_REF], supplement IV)), the problem of deciding CH can be seen as the very natural question of how many such numbers there are, a question which should, then, be seen as having a definite answer outside set theory (both ZFC and ZF2). The difference is, then, relevant, also from the point of view we are delineating. Usually, a model V_M of ZFC is diagrammatically represented as two nested triangles over the line of ordinals, of height κ, the internal one being V_L [figure omitted], where V_L is the model of ZFC + 'V = L', and the external triangle can coincide with the internal one (which happens if 'V = L' is true in the model), but cannot become internal to it. However, insofar as nothing requires that a model of ZFC have a uniform hierarchic shape, and no significant feature of it is represented by the symmetry of the diagram, we submit that a better representation is one where the external boundary is an arbitrary curve [figure omitted], and where all that is required of the external curve, call it 'C', for short, is that it is everywhere increasing (with respect to the line of cardinals, taken as axis) and external to, or coincident with, the internal half straight-line. If this picture is adopted, a model of ZF2 could be depicted in the same way, with the specification that the external curve is univocally determined by the choice of a strongly inaccessible cardinal υ, or by the supposition that there are exactly n such cardinals, which leads to our calling it 'C_υ' or 'C_n'.
One could, then, advance that (the axioms of) ZF2 plus the choice of a strongly inaccessible cardinal, or (those of) ZF2_n, allow one to univocally fix a domain of sui generis objects (call them 'the υ-sets' or 'the n-sets'), and that ZFC is studying these very objects with weaker logical means, as elements of uncountably many possible structures, being unable to provide a univocal characterisation of them. This suggests that ZF2, plus the choice of a strongly inaccessible cardinal, or ZF2_n, provide domains of objects we can have de re access to, in the same way as this happens for PA2, that is, not only internally, so providing a sort of syntactic stability, but also externally, so as to provide a sort of semantic stability: one could argue that, once pure sets are fixed by the relevant (second-order) axioms, one can look at them as such and try to tell (using either a first- or a second-order language) the properties they have or the relations they bear to each other. Of them, we claim that they form a structure that ZF(C) and all its usual (first-order) extensions try to describe, though being unable to univocally identify it. Still, the relativisation to the choice of a strongly inaccessible cardinal, or the admission of the supplementary axiom 'there are exactly n strongly inaccessible cardinals', makes the situation much less satisfactory than the one concerned with Peano (first- and second-order) arithmetic: taken as such, ZF2 is not only proof-theoretically incomplete; it is also unable to univocally fix the relevant objects. This relativisation or admission does not, however, prevent us from ascribing to ZF2 a form of categoricity, since from Zermelo's result "it also follows that every set-theoretical question involving only sets of accessible rank is answerable in ZF2", and, then, in particular, that "all propositions of set theory about sets of reals which are independent of ZFC", among which there is CH, are either true or false in any of its models, though no proof could allow us to establish whether the former or the latter obtains ( [START_REF] Jané | Higher-order logic reconsidered[END_REF], p. 790). This might be taken as very good news. But a strong objection is possible: it is possible to argue that the truth or falsity of CH in any model of ZF2 does not depend on the very axioms of this theory, but on the consequence relation which is determined by the use of second-order logic and the standard (or full) interpretation of it, or, in other terms, that what makes CH true or false there is not what the axioms of ZF2 genuinely say about sets, but their using second-order variables, semantically interpreted as sets of n-tuples on the first-order domain. Clearly, this would make second-order logic so interpreted "inadequate for axiomatizing set theory" (see [START_REF] Jané | Higher-order logic reconsidered[END_REF], pp. 782 and 790-793, for details). We do not want to enter such a delicate question here. We merely observe that the mathematical results we have expounded above show that there is no need to go second-order to get a limited form of quasi-categoricity, since these results suggest that ZFC already has (alone, that is, without any need to appeal to any supplementary axiom) the resources for fixing some of its objects better than is usually thought.
Namely, if we are happy to work at a singular cardinal then much of the combinatorics is determined by what happens at the regular cardinals below, even to the point of fixing the cardinal arithmetic (see Shelah's theorem 1 quoted above). In some cases, we do not even need to know what happens at the regular cardinals below (see theorem 5). And if we are happy to be in a world with no Axiom of Choice, we can even imagine that all cardinals are singular, as in Gitik's model, and hence much of the cardinal combinatorics is completely determined by ZF. Let us look back to the second of the previous figures and suppose that κ is a singular cardinal. What these results suggest is this: if the values of the ordinates of C are fixed for all regular cardinals λ smaller than κ, i.e. if a single model of ZFC is chosen relative to all these regular cardinals, then the value of the ordinate of C for κ is strongly constrained, in the sense that this value can only belong to a determined set (a set, not a class) of values. In other terms, things seem to happen as if the shape of a model of ZFC for the regular cardinals smaller than κ strongly conditions the shape of the possible models at κ. These results could be understood as saying that the non-categoricity of ZFC is, in fact, not as strong as it appears. Even within first-order, the behavior of the universe of sets is fixed enough at singular cardinals to give us some sort of external and semantic de re epistemic access to them and their power sets. In particular, once we are given all sets of size < κ and all their power sets, our choices for κ are quite limited. This offers an image of the universe of sets in which a strong lack of univocality only concerns successor cardinals or uncountable regular limit cardinals, if any (remember that the existence of uncountable regular limit cardinals is unprovable in ZFC). One could say that, at singular limits, ZFC already exhibits a form of categoricity, or, better, that it does so asymptotically, since the ratio of singular cardinals over all cardinals tends to 1 as the cardinals grow. And at the price of working only in ZF we can even imagine being in the model of Gitik, in which every uncountable cardinal is a singular limit. Under a realist semantic perspective, according to which all we could say about the universe of sets is either true or false, one could say that this shows that, though ZFC is unable to prove the full truth about this universe, it provably provides an asymptotic description where the singular cardinals are the limits of the asymptotes. This also suggests, however, an alternative and more sober picture, which is what we submit: though there is no sensible way to say what is true or false about the universe of sets, unless truth and falsity are merely conceived as provable truth and falsity, ZFC provides an asymptotically univocal image of the universe of sets around the singular cardinals: the image of a universe to which we can have an external semantic de re epistemic access. According to the common abuse of notation, we call F the 'power set function', even though it is in fact a class-function. Acknowledgement 6 The first author gratefully acknowledges the help of EPSRC through the grant EP/I00498, of the Leverhulme Trust through a research Fellowship 2014-2015, and of l'Institut d'Histoire et de Philosophie des Sciences et des Techniques, Université Paris 1, where she is an Associate Member. The second author acknowledges the support of ANR through the project ObMathRe.
The authors are grateful to Walter Carnielli for his instructive comments on a preliminary version of the manuscript, and to Marianna Antonutti-Marfori, Drew Moshier and Rachael Schiel for valuable suggestions.
68,405
[ "183321", "183520" ]
[ "336687", "1342" ]
01759108
en
[ "info" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01759108/file/wowmom2015.pdf
Nicolas Montavont Alberto Blanc Renzo Navas Tanguy Kerdoncuff Handover Triggering in IEEE 802.11 Networks The current and future IEEE 802.11 deployment could potentially offer wireless Internet connectivity to mobile users. The limited AP radio coverage forces mobile devices to perform frequent handovers while current operating systems lack efficient mechanisms to manage AP transition. Thus we propose an anticipationbased handover solution that uses a Kalman filter to predict the short term evolution of the received power. This mechanism allows a mobile device to proactively start scanning and executing a handover as soon as better APs are available. We implement our mechanism in Android and we show that our solution provides a better wireless connection. I. INTRODUCTION Due to the proliferation of Wifi hot-spots and community networks, we have recently observed a great evolution of IEEE 802.11 networks especially in urban scenarios. These 802.11-based networks allow mobile users to get connected to the Internet, providing a high throughput but a limited mobility due to the short coverage area of access points (APs). In our previous work [START_REF] Castignani | Wi2Me: A Mobile Sensing Platform for Wireless Heterogeneous Networks[END_REF] we have shown that community networks appear to be highly dense in urban areas, generally providing several APs (15 in median) per scanning spot. Under this condition, a mobile user may be able to connect to community networks and compensate the low AP coverage area by transiting between APs. We call such AP transition a handover. However, two main issues currently limit mobile users from using community networks in such a mobility-aware scenario. First, operators have not deployed the necessary infrastructure to allow mobile users to perform handovers without being disconnected at the application layer, i.e., after a handover on-going application flows are interrupted. This limitation may be addressed by deploying a Mobile IP [START_REF] Perkins | IP Mobility Support for IPv4[END_REF] infrastructure, in which the application flows may be tunnelled through a Home Agent that belongs to the operator. Second, independently from the first issue, there is still a lack of mechanism to intelligently manage a layer 2 handover between two APs. In current mobile devices, when a handover occurs, we observe a degradation of on-going flows corresponding to a dramatic reduction of the TCP congestion window (CWND) and of the throughput. In this paper, we focus on this latter issue by analyzing the impact of layer 2 handovers on mobile users. We propose Kalman-filter-based HAndover Trigger algorithm (KHAT) that succeeds in intelligently triggering handovers and reducing the scanning impact on the mobile device. We propose a complete implementation of our handover mechanism in Android ICS (4.0) and show a comparative study to show that our approach outperforms the handover mechanism that is currently implemented on these devices. The paper is organized as follows. Section II presents the litterature on handover optimization and Section III analyzes the handover impact on on-going communications. Section IV introduces KHAT which is evaluated indoor and outdoor in Section V. Section VI concludes the paper. II. HANDOVER PROCESS AND RELATED WORK The IEEE 802.11 standard defines a handover as a three steps process: scanning, authentication and association. The standard proposes two different scanning algorithms namely passive and active scanning. 
In passive scanning, the mobile station (MS) simply tunes its radio on each channel and listens for periodic beacons sent by the APs. In active scanning, the MS proactively sends requests in each channel and waits for responses during a pre-defined timer. Once candidate APs have been found, the MS selects one of the APs and attempts authentication and association. If the association is successful, the MS can send and receive data through the new AP, if this new AP is on the same IP subnet as the previous AP. If the new AP belongs to another IP subnet, the MS needs additional processing to update its IP address and redirect data flows to its new point of attachment. Such Layer 3 handover may be handled by specific protocols like Mobile IP [START_REF] Perkins | IP Mobility Support for IPv4[END_REF]. Note that in this paper we do not address IP mobility and any layer 3 mobility management protocol can be use on top of our proposal if needed. In 2012, the IEEE has published new amendments for IEEE 802.11 handover optimization, aimed at reducing its duration and its impact on higher layers. The IEEE 802.11k amendment proposes mechanisms for radio resource measurement for seamless AP transition, including measurement reports of signal strength (RSSI) and load of nearby APs. Additionally, the IEEE 802.11r amendment contains a Fast Basic Service Set Transition (FT), which avoids exchanging 802.1X authentication signaling under special conditions by caching authentication data. While these features may enhance the handover performance, they heavily rely on a cooperation between APs, which might not always be a viable solution. In addition, users may access various networks operated by different providers. In that case, operators should share network information and performance among them, which is quite an unlikely scenario. In this paper, we focus on MS-based solutions, where the MS itself handles the handover without the help from the network. Several works have been proposed in the literature so far. In general, those studies cover different aspects of the handover mechanism. We may group them into three main categories: • Handover triggering: when to decide that a disconnection with the current AP will occur. • AP discovery: how to search for APs on different channels by minimizing the impact on the higher layers. • Best AP selection: with which AP to associate, among the discovered ones. The simplest mechanism to trigger a handover is to monitor the RSSI as an estimation of the link quality and start the handover process if the current RSSI is lower than a pre-established threshold (commonly set at -80 dBm). Fig. 1a shows the relationship between the RSSI measured on an MS and the TCP throughput that we have gathered during more than 600 connections to community networks in a urban area in Rennes, France [START_REF] Castignani | Wi2Me: A Mobile Sensing Platform for Wireless Heterogeneous Networks[END_REF]. We observe that the TCP throughput is extremely variable for high RSSI, but starts degrading for RSSI lower than -70 dBm, and it becomes significantly low around -80, dBm. Some works focus on the anticipation of the handover triggering in order to minimize the impact on ongoing communications. Mhatre et al. [START_REF] Mhatre | Using smart triggers for improved user performance in 802.11 wireless networks[END_REF] propose a set of handover algorithms based on continuously monitoring the wireless link, i.e., listening to beacons from the current and neighboring channels. 
These approaches give handover latencies varying between 150 and 800 ms. However, since these approaches need to listen to beacons from neighboring channels, it is necessary to modify the firmware of the wireless card, which may not always be possible. Yoo et al. [START_REF] Yoo | LMS predictive link triggering for seamless handovers in heterogeneous wireless networks[END_REF] propose a number of handover triggering mechanisms based on predicting RSSI samples at a given future time using Least Mean Square (LMS) linear estimation. In this algorithm, the device continuously monitors the RSSI and computes the LMS prediction if the RSSI is below a certain threshold (P_Pred). Then, if the predicted RSSI value is lower than a second threshold, P_Min, the MS starts a handover. Wu et al. [START_REF] Wu | Proactive scan: Fast handoff with smart triggers for 802.11 wireless LAN[END_REF] propose a handover mechanism aiming at decoupling the AP discovery phase from the AP selection and reconnection phase. The MS alternates between scanning phases and a (normal) data mode where the MS is connected to its current AP. The time interval between two scanning phases is adapted depending on the current signal level and varies between 100 and 300 ms. In each scanning phase, the sequence of channels to scan is selected based on a priority list that is built from the results of a periodic full scanning (i.e., here all channels are scanned). As far as Android devices are concerned, Silva et al. [START_REF] Silva | Enabling heterogeneous mobility in android devices[END_REF] present a mobility management solution based on IEEE 802.21. They propose a mapping of IEEE 802.21 primitives for handover initiation, preparation, execution and completion to existing Android OS methods and functions. III. HANDOVER IMPACT During an L2 handover, the MS is not able to send or receive application flows. This is because, usually, when an MS triggers a handover, the link quality does not allow exchanging frames anymore, and because the MS is often switching operating channel. In this section we evaluate the handover and scanning impact on application flows, and determine which parameters influence the scanning latency and success rate. Our testbed consists of nine Cisco Aironet 1040 APs installed on the roof of our building at the locations given in Fig. 2. All APs are connected to a dedicated wired LAN. APs broadcast a single SSID, corresponding to an open-system authentication network belonging to a single IP subnet. We also use a dedicated (fixed) server for traffic generation and tracing. iPerf is used to generate TCP downlink traffic to the MS. For each experiment, we walk from AP 1 to AP 6 and then back again to AP 1. A. Operating Systems Benchmark To illustrate how the handover is currently impacting data flows, we have performed a set of experiments to evaluate the degradation of TCP performance for different devices and Operating Systems (OS). Table I shows the number of handovers and the average TCP throughput for each device. As a baseline, we also show the maximum achieved throughput for each device remaining static and connected to a single AP. Using Windows, we observe the best result, since the MS performs up to four handovers, reaching an average throughput of 0.875 MB/s. Additionally, we observe that for Windows, the time in which no data is downloaded (i.e., the disconnected time) is relatively short compared to the other OSs.
The netbook running Ubuntu reacts slowly to changing channel conditions: in this case the MS is disconnected for more than 20 s and executes only two handovers, indicating that the MS waits until the quality of the radio link is significantly degraded. Fig. 1b shows the evolution of the downloaded data for each case. Additionally, we have observed that for the Windows device, the average round-trip time (RTT) is the lowest one (103 ms), with a low standard deviation as well. This differs from the other devices, which reach larger RTT values. B. Scanning Interactions with Data Traffic We focus on active scanning, where an MS sends Probe Requests on each channel to discover potential APs, instead of just waiting for periodic beacons (passive scanning). We chose active scanning because it allows spending less time in each channel to determine the AP availability. If the handover phases are done one after the other, all packets that arrive during the handover process will be lost. In order to reduce the impact of handovers on application flows, it is possible to introduce a gap between the scanning phase and the other handover steps, i.e., the decision to handover, the authentication and the association, as presented in [START_REF] Wu | Proactive scan: Fast handoff with smart triggers for 802.11 wireless LAN[END_REF]. An MS may use the power saving mode defined in IEEE 802.11 to request its current AP to buffer incoming packets during the time the MS scans other channels. This way, instead of losing packets during the scanning phase, an MS can receive the packets after the scanning phase, albeit with an extra delay. This behavior is illustrated in Fig. 1c, where we plot the sequence number of the received packets of a TCP flow when an MS is performing one scan of the 13 channels with an active timer set at 50 ms. We can see that the scan starts just before the time 1 s, at which point no more data packets are received from the server. Once the scan is finished, around 850 ms later, the MS comes back to its current AP, and starts receiving TCP packets again. This technique can also be used to split a scanning phase into several sub-phases where only a subset of channels are scanned. For example, to scan the 13 channels, an MS could sequentially scan three times a subset of 4 (or 5) channels each time, interleaving these sub-phases with the data mode with the current AP to retrieve data packets. The impact of the number of scanned channels, and of the timers used in each channel, is given in the next subsection. C. Scanning Parameters We analyze the scanning performance under different values of the timers used to wait for Probe Responses (from 5 ms to 100 ms) and different numbers of scanned channels during a sub-phase (between 1 and 13). In the standard IEEE 802.11 scanning algorithm, the MS is supposed to scan each channel using two timers, namely MinCT and MaxCT (see section II). However, the IEEE 802.11 Android driver uses a single timer, namely Active Timer (AT), for scanning. AT is defined as the time an MS waits for Probe Responses on a channel. We ran 60 scanning sub-phases for each AT and subset of scanned channels and measured the average number of discovered APs, the RSSI distribution of the discovered APs and the average duration of the scanning (i.e., the scanning latency). Results are presented in Table II. As a baseline, we consider that all the available APs are discovered when scanning the full channel sequence (i.e., 13 channels) using AT=100 ms. In the other cases, the MS discovers only a fraction of the APs, since it either does not wait long enough to receive all AP Probe Responses, or because only a subset of channels are scanned.
We have also observed that when using a short AT, even if the MS discovers a low number of APs, those APs have a high RSSI. On the other hand, when using higher AT values, the MS discovers more APs, but a large part of them have a low RSSI. This can be observed in Fig. 3a, where we see that for AT=5 ms the average RSSI of candidate APs is -67 dBm, while for AT=20 ms it decreases to -76 dBm. IV. KHAT: PROACTIVE HANDOVER ALGORITHM We propose a handover algorithm called Kalman Filter-based HAndover Triggering (KHAT for short) that provides link going down detection, an optimized scanning strategy, and new AP selection. An MS monitors its link quality with its current AP and, when the signal strength is degrading, it starts alternating between scan periods and data communication with the current AP. The scan periodicity and the timer values are determined according to the current link quality and whether a candidate AP has already been found. Once the candidate AP becomes better than the current AP, the handover is triggered. A. RSSI modelling One way of keeping track of the changing radio conditions is to track the RSSI on the MS. While far from being perfect, the RSSI has the advantage of being always available, whether the MS is exchanging data or not, as it is updated not only whenever the MS receives data frames but also when it receives beacon frames, which are typically sent every 100 ms by most APs. As the RSSI can fluctuate rapidly, especially when a user is moving, its instantaneous value is not necessarily representative. At the same time, its local average and trend are more useful in deciding whether the radio channel conditions are improving or not and whether they are reaching the point where communication is no longer possible. Using the well-known Kalman filter, it is possible to extract this information from the RSSI measurements. Many authors have already used Kalman filters and other time-series techniques in order to model radio channels and the received signal strength; see, for example, the works by Jiang et al. [START_REF] Jiang | Kalman filtering for power estimation in mobile communications[END_REF], by Baddour et al. [START_REF] Baddour | Autoregressive modeling for fading channel simulation[END_REF] and references therein. More formally, let X(t_i), written X_i for short, be the received signal strength at time t_i. In our case, we sample the RSSI roughly every 100 ms; but, as we rely on software timers, there are no guarantees that the t_i's will be equally spaced. Figure 3b shows the empirical distribution of ∆t_i = t_i - t_{i-1} for a subset of the traces we collected.
As we rely on the values reported by the 802.11 driver, we wondered whether these consecutive samples with the same values were caused by the driver not updating the values often enough. [Fig. 3: RSSI analysis. (b) Sampling interval CDF for the original time series (∆t in ms). (c) CDF of the length of the periods where all the received power samples are equal, for the community networks.] Figure 3c shows the distribution of the length of the periods where the signal strength was constant for several traces collected using a static MS which was not sending or receiving data (note that the distribution was the same whether the test was performed using a laptop running Linux or a smartphone running Android). The median is 305 ms and the standard deviation is 306 ms. In the case of mobile MSs and/or data traffic the median values are smaller (around 110 ms for an MS with data traffic, but the standard deviation is always larger). In order to mitigate the effect of these periods, we pre-process the RSSI samples, using a time-varying exponential average, before applying the Kalman filter. In order to further reduce the lag of the smoothed signal, we use a time-varying weight in the exponential smoothing. Let Y_i be the re-sampled RSSI time series. We construct the smoothed series Z_i as: Z_i = α_i·Y_i + (1 - α_i)·Z_{i-1}, where Z_1 = Y_1, and α_i = α_up if the RSSI is increasing, whereas whenever the RSSI starts decreasing α_i = α_1. Whenever the RSSI is constant, the value of α_i is determined by the last change before the beginning of the constant samples. If the last change was an increase, α_i = α_up; otherwise α_i is progressively reduced (α_i = 0.8 · α_{i-1}, down to the floor α_min). This corresponds to the pseudo-code in Algorithm 1. We have used α_up = 0.5, so that the smoothed time series will react quickly to upward changes, and α_1 = 0.4 and α_min = 0.01, so that it will, instead, react much more slowly to downward changes. The reason for this asymmetric behavior is that we are interested in having an accurate estimate of the level and, above all, of the trend, only when the signal is decreasing. By using a larger α_i when the signal is increasing we ensure that Z quickly reaches the value of Y, reducing the lag between Y and Z. We have verified that the received power time series of our sample are indeed non-stationary by computing their autocorrelation functions, which were all slowly decreasing (as a function of the lag).
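For illustration, the pre-processing just described can be transcribed as follows (a minimal Python sketch of the smoothing equation and of the α-update rules of Algorithm 1; the constants are the values quoted above, while the function name and the list-based structure are ours and not part of the Android implementation described in Section V):

```python
# Illustrative sketch: time-varying exponential smoothing of the re-sampled
# RSSI series Y (dBm), following the alpha-update rules of Algorithm 1.
ALPHA_UP, ALPHA_1, ALPHA_MIN = 0.5, 0.4, 0.01

def smooth_rssi(Y):
    """Return the smoothed series Z for a list of RSSI samples Y."""
    Z = [Y[0]]
    alpha = ALPHA_1
    increasing = False
    last_value = Y[0]
    for y in Y[1:]:
        if y != last_value:
            # RSSI changed: choose alpha according to the direction of the change
            increasing = y > last_value
            alpha = ALPHA_UP if increasing else ALPHA_1
            last_value = y
        else:
            # constant run: keep decaying alpha only if the last change was a decrease
            if not increasing and alpha > ALPHA_MIN:
                alpha = 0.8 * alpha
        Z.append(alpha * y + (1.0 - alpha) * Z[-1])
    return Z
```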
We have then decided to use a state-space based model to represent the evolution of the power over time. In particular we have used the local linear trend model (see, for example, Durbin and Koopman [START_REF] Durbin | Time series analysis by state space methods, ser. Oxford statistical science series[END_REF]):

$$
Z_i = \mu_i + \varepsilon_i,\ \varepsilon_i \sim N(0, \sigma^2_{\varepsilon}); \qquad
\mu_{i+1} = \mu_i + \nu_i + \xi_i,\ \xi_i \sim N(0, \sigma^2_{\xi}); \qquad
\nu_{i+1} = \nu_i + \zeta_i,\ \zeta_i \sim N(0, \sigma^2_{\zeta}) \qquad (1)
$$

where Z_i is the time series under scrutiny, µ_i is the level at time i, ν_i is the slope at time i, and ε_i, ξ_i, ζ_i are independent identically distributed (i.i.d.) Gaussian random variables with 0 mean and variance σ²_ε, σ²_ξ, σ²_ζ respectively. These variances can be obtained by Maximum Likelihood Estimation from sample realizations of Z. Once the values for the variances are specified, and given a realization of Z_i (i = 0, . . . , n), one can use the well-known Kalman filter algorithm to compute µ_i and ν_i for any value of i (again, see, for example, Durbin and Koopman [START_REF] Durbin | Time series analysis by state space methods, ser. Oxford statistical science series[END_REF]). To be more precise, one can solve the "filtering" problem, where the values of µ_i and ν_i are computed using the samples Z_0, Z_1, . . . , Z_i. At first, we have used the dlm [10] package for R [START_REF]R: A Language and Environment for Statistical Computing[END_REF] to solve the filtering problem. Note that, as the filtering problem uses only the samples between 0 and i, it can be implemented in real time as it depends only on past values of the time series. The Kalman filter can also be used to predict future values. In the case of the local linear trend model (1), the prediction algorithm is extremely simple: one can just model the future values of the time series using a straight line with slope ν_i, starting at the value µ_i at time i. We have also implemented the Kalman filter on a Samsung Nexus S smartphone, in the WiFiStateMachine module of the Android Java framework. For the sake of simplicity we have used a straightforward implementation of the Kalman recursion. The general form of the Kalman filter is:

$$
Z_i = F\,\Theta_i + v_i,\ v_i \sim N_n(0, V_i); \qquad
\Theta_i = G\,\Theta_{i-1} + w_i,\ w_i \sim N_m(0, W_i).
$$

For the local linear trend model Z is a scalar, and so is v, while:

$$
\Theta_i = \begin{pmatrix} \mu_i \\ \nu_i \end{pmatrix}, \quad
G = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad
W = \begin{pmatrix} \sigma^2_{\xi} & 0 \\ 0 & \sigma^2_{\zeta} \end{pmatrix}, \quad
F = (1 \;\; 0).
$$

We are interested in computing the 2 × 1 vector m_i = (E[µ_i], E[ν_i])^T, containing the expected values of the level (µ_i) and slope (ν_i). It is known [START_REF] Durbin | Time series analysis by state space methods, ser. Oxford statistical science series[END_REF] that one can compute these values using the following equations:

$$
m_i = a_i + R_i F_i^{T} Q_i^{-1} e_i, \quad
f_i = F_i a_i, \quad
C_i = R_i - R_i F_i^{T} Q_i^{-1} F_i R_i, \quad
Q_i = F_i R_i F_i^{T} + V_i, \quad
a_i = G_i m_{i-1}, \quad
R_i = G_i C_{i-1} G_i^{T} + W_i,
$$

where e_i = Y_i - f_i, and the following initial values are used for C and m:

$$
C_0 = \begin{pmatrix} \sigma^2_1 & 0 \\ 0 & \sigma^2_2 \end{pmatrix}, \qquad m_0 = (Z_0 \;\; 0)^{T}.
$$

The values of σ²_1 and σ²_2 have almost no influence on the computations, as the matrices R, Q and C quickly converge to steady-state values which are independent from the initial values. The local linear trend model (1) is characterized by three parameters: σ²_ε, σ²_ξ, σ²_ζ. It is possible to use Maximum Likelihood Estimation (MLE) methods to estimate the values of these parameters from sample realizations of Z.
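The recursion above can be sketched compactly as follows (an illustrative Python/NumPy transcription only; the on-phone implementation described above is a Java routine inside Android's WiFiStateMachine module, and the function and variable names here are ours):

```python
import numpy as np

def local_linear_trend_filter(z, var_eps, var_xi, var_zeta, var_init=(1e3, 1e3)):
    """Return (levels, slopes): filtered E[mu_i], E[nu_i] for the series z."""
    G = np.array([[1.0, 1.0],
                  [0.0, 1.0]])        # state transition matrix
    W = np.diag([var_xi, var_zeta])   # state noise covariance
    F = np.array([[1.0, 0.0]])        # observation matrix
    V = var_eps                       # observation noise variance
    m = np.array([z[0], 0.0])         # m_0 = (Z_0, 0)^T
    C = np.diag(var_init)             # C_0 (its exact value hardly matters)
    levels, slopes = [m[0]], [m[1]]
    for y in z[1:]:
        a = G @ m                     # a_i = G m_{i-1}
        R = G @ C @ G.T + W           # R_i
        f = float(F @ a)              # f_i
        Q = float(F @ R @ F.T) + V    # Q_i
        e = y - f                     # innovation e_i
        K = (R @ F.T).ravel() / Q     # gain R_i F^T Q_i^{-1}
        m = a + K * e                 # m_i
        C = R - np.outer(K, F @ R)    # C_i
        levels.append(m[0])
        slopes.append(m[1])
    return levels, slopes
```

As noted above, a one-step-ahead prediction is then simply a straight line from the last level with the last slope, i.e. levels[-1] + slopes[-1].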
We have used the MLE functions of the dlm package to this end, but in some cases the optimization algorithm used to compute the MLE did not converge. When it converged, its estimates for σ²_ε and σ²_ζ were not always consistent over all the samples, but the order of magnitude was fairly consistent, with σ²_ε usually smaller than σ²_ζ and often fairly close to 0. It should be stressed that, in this case, there are no guarantees about the convexity of the optimization problem solved by the MLE procedure, which can very well converge to a local minimum instead of a global one. Also, it is not uncommon to tune the model parameters in order to improve its performance. In our case we have observed that using σ²_ε = 0.5, σ²_ξ = 1 and σ²_ζ = 2.5, we obtain fairly smooth level and slope values, which can be effectively used by the KHAT algorithm. B. Algorithm design KHAT adapts the scanning strategy, the scanning period and the handover trigger by comparing an estimate of the link quality and the quality of candidate APs, as presented in Fig. 4. The main process consists in continuously monitoring the RSSI of the current link and detecting a link going down event. To achieve this, we use the Kalman filter to obtain the current value of the RSSI (µ) and the slope (ν). After analyzing a large number of RSSI time series, we have estimated that the link going down trigger can be declared if µ < -70 dBm and ν < -0.2 dBm/s. If the link going down condition is satisfied, the MS checks its candidate AP list. If there is no valid candidate AP, the MS will attempt scanning only if there has not been another scanning instance for the last T_Scan seconds. On the other hand, if, after triggering a link going down condition, the MS has a valid candidate, it will attempt a handover only if the difference between the candidate AP RSSI and the current exponentially smoothed RSSI sample (µ) is greater than ∆, where ∆ is defined as follows: ∆ = 8 if µ > -70; 5 if -75 < µ ≤ -70; 3 if -80 < µ ≤ -75; 2 if µ ≤ -80. (2) After a scan completes, an existing candidate AP would be updated with a new RSSI, or a new candidate may be selected. Additionally, in order to avoid scanning at a very high frequency, we adapt the value of T_Scan depending on the scanning results. Each time the MS triggers a link going down, if a candidate AP exists, we double the current value of T_Scan (up to 1 s), since it is not necessary to scan at a high frequency if there is at least one candidate AP. On the other hand, if no candidate exists at that time, we set T_Scan to its minimum value (250 ms). The scanning strategy itself is also adapted depending on the current link condition. Each scanning strategy consists in determining a number of channels to scan and the time to wait on each channel (AT in the Android system). Based on results presented in section III-C we fixed AT as presented in Table III: the better the current link quality is, the less time the MS will spend scanning, because it still has time to find APs before it disconnects from its current AP. When the signal quality with the current AP is low, we set aside more time for the MS to scan, in order to maximize the probability of finding an AP. In order to contain the scanning duration, we propose to use AT in {5 ms, 10 ms, 20 ms}. The reason is that for smaller scan times, we only find APs with high RSSI (as shown in section III-C) and, as we are in fairly good condition, the MS would only be interested in APs with high RSSI. V. EXPERIMENTATION A. Methodology and implementation We have implemented our solution on the Android ICS 4.0.3 system working on a Samsung Nexus S (GT-I9023) smartphone. It involves modifications in the Android Java Framework, the WPA Supplicant
The second set of experiments was performed in the city of Luxembourg, using the HOTCITY Wifi deployment (see [START_REF] Castignani | A study of urban ieee 802.11 hotspot networks: towards a community access network[END_REF] for more details). In all cases, we use iperf to generate the TCP traffic for both MSs and generate several connections for more than one hour. B. General results Fig. 5, 6, 7 and 8 show the RSSI and the received TCP data for the two considered environments, over one connection duration, while Table IV shows the average over all connections that we made. We can see that KHAT provides a better RSSI (-69 dBm on average) along the connections and allows the smartphone to have a better throughput than stock Android (222 kB/s versus 146 kB/s). We can also see that KHAT triggers the handover systematically before stock Android to avoid suffering from a poor quality with its current AP. Sometimes, as at Time=400s of the outdoor connection, KHAT manages to find an intermediate AP between those chosen by Stock Android, which significantly increases both the RSSI and the TCP download. Looking at the zoom of Fig. 8, we can see a period where the Legacy smartphone is not able to receive any data from the TCP server. On the other hand, KHAT is performing two handovers. The first handover at Time=233s is made prior to the Legacy handover, but still a stagnation in the received data is observed before and after the handover. However, the second handover at Time=275s is smooth and does not impact the data reception. Fig. 9 shows the list of selected APs from the chosen connection in the outdoor environment, captured by an MS running Wi2me [START_REF] Castignani | Wi2Me: A Mobile Sensing Platform for Wireless Heterogeneous Networks[END_REF]. The APs are shown in their order of appearance along the path. We can see that for the first six APs, the KHAT AP selection is judicious: when the signal strength of one AP is degrading, another AP becomes available. However, between scanning occurrences 140 and 200, there is no ideal choice of AP. Observing that the RSSI is low, KHAT tries to hand over to different APs during this period, oscillating between AP 7, AP 8, AP 9 and AP 10. These are the handovers we can see around Time=625s in Fig. 7. While KHAT avoids triggering a handover when it is not towards a significantly better AP, in areas where all APs offer low RSSI, KHAT may trigger several handovers. It finally finds AP 11 at Time=721s, which provides a good coverage area. During this period, we can see that the Legacy phone did not trigger any handover and was unable to receive any data for a long period of time. VI. CONCLUSION IEEE 802.11 is one of the most popular wireless standards for offering high data rate Internet connections. With the vast number of hot-spots and community networks that are deployed today, there is a potential for users to use Wifi networks in mobility scenarios. However, as the AP coverage area is usually limited to a few tens of meters, there is a strong need for optimized mobility support when users move from one AP to another. We have shown in this paper that current devices are able to transit between APs, but the handover performance is quite low.
We proposed a handover algorithm called KHAT that anticipates the signal loss from the current AP to preemptively scan for potential APs. The prediction of the link going down is achieved with a Kalman filter which estimates the slope of the RSSI to determine the link condition. If the estimate is below a given threshold (smoothed RSSI lower than -70 dBm and slope lower than -0.2 dBm/s), we launch a scan. Data packets can be buffered (and retrieved later on) by the AP during the MS scanning, by exploiting the power saving mode defined in 802.11. Depending on the scanning results, the MS will either hand over to a new (better) candidate AP that has been found, or it will loop on the link quality prediction. The scanning period and strategy are adapted depending on the current link condition. We have implemented KHAT on the Android ICS 4.0.3 system working on a Samsung Nexus S (GT-I9023). To address the tradeoff between the scanning latency and the AP discovery, the MS scans with AT=20 ms if a handover is imminent, AT=10 ms when the link quality is medium and AT=5 ms when the link quality is good. In two different environments (indoor and outdoor), we compared a Stock Android with a KHAT smartphone. We have shown that KHAT outperforms Stock Android by anticipating handovers and using more of the APs available on the path. The average RSSI is 6 dBm higher in the outdoor environment, and the TCP throughput is 0.22 MB/s compared to 0.12 MB/s for Stock Android. A perspective of this work is to apply the link quality prediction to candidate APs in order to better choose the target AP when a handover is needed.
Fig. 1: Various TCP performance. Fig. 4: Algorithm Flow Chart. Fig. 5: RSSI for Legacy and KHAT smartphones indoor. Fig. 7: RSSI for Legacy and KHAT smartphones outdoor. Fig. 9: Heatmap of the selected AP in the outdoor environment.
TABLE I: Handover performance of different OS.
TABLE II: Percentage of discovered APs for different values of AT and number of scanned channels.
Nb. of channels   AT=5 (%)   AT=10 (%)   AT=20 (%)   AT=50 (%)   AT=100 (%)
1                   3.11       5.76       10.62       22.28        25.24
3                   6.45      18.28       32.61       58.18        88.24
5                   9.28      21.02       38.83       68.94        89.31
8                  10.44      23.61       40.46       70.43        96.58
13                 11.74      28.62       45.76       79.88       100.00
RSSI (dBm)        -67.16     -70.07      -76.02      -81.28       -83.26
Algorithm 1 The algorithm used to compute α_i
1: increasing ← FALSE
2: lastValue ← Y_1
3: α_1 ← α_1
4: i ← 1
5: while i ≤ length(Y) do
6:   if Y_i ≠ lastValue then
7:     if Y_i > lastValue then
8:       increasing ← TRUE
9:       α_i ← α_up
10:    else
11:      increasing ← FALSE
12:      α_i ← α_1
13:    end if
14:  else
15:    if increasing = FALSE then
16:      if α_{i-1} > α_min then
17:        α_i ← 0.8 · α_{i-1}
18:      else
19:        α_i ← α_{i-1}
20:      end if
21:    else
22:      α_i ← α_up
23:    end if
24:  end if
25: end while
TABLE III: Scanning Strategies.
TABLE IV: Performance comparison.
VII. ACKNOWLEDGMENTS This work has received French government support under reference ANR-10-LABX-07-01 (Cominlabs).
35,958
[ "181995", "21816", "21930", "17931", "1030313" ]
[ "220137", "482801", "490899", "421667", "482801", "490899", "220137", "482801", "490899", "220137", "482801", "175629" ]
01759116
en
[ "sdv" ]
2024/03/05 22:32:10
2017
https://hal.sorbonne-universite.fr/hal-01759116/file/ARACHNIDA-15-Tityus-cisandinus%20sp.n_%20_%20sans%20marque.pdf
Wilson R Lourenço email: [email protected] Eric Ythier email: [email protected] con commenti su alcune specie correlate (Scorpiones: Buthidae) Keywords: Scorpion, Tityus, Atreus, Tityus asthenes, new species, Ecuador. Riassunto Scorpione, Tityus, Atreus, Tityus asthenes, nuova specie, Ecuador Description of Tityus (Atreus) cisandinus sp. n. Introduction The buthid scorpion Tityus asthenes was originally described by [START_REF] Pocock | Notes on the classification of scorpions, followed by some observations upon synonymy, with descriptions of new genera and species[END_REF] from Poruru in Peru, in a paper devoted to the classification of scorpions in general and including description of several new genera and species. No precision, however, about the collector of the studied specimen was supplied, situation not uncommon in the publications of Pocock (Lourenço & Ramos, 2004). The description of T. asthenes was brief, based on a single female specimen and not followed by any illustrations. The type locality of Tityus asthenes, supposedly in Peru, remains unclear since no present locality with this name, including in river systems, exist in this country. [START_REF] Francke | Escorpiones y escorpionismo en el Peru VI: Lista de especies y claves para identificar las familias y los géneros[END_REF] suggested that the correct locality is Paruro in southern Peru, however this corresponds to an arid region not compatible with many species of Tityus placed in the subgenus Atreus. Consequently, the specimen described by Pocock could have been collected in quite different regions in tropical America. Probably in relation to its general morphology, Tityus asthenes was associated to the group of Tityus americanus (= Scorpio americanus Linné, 1754) by [START_REF] Pocock | Notes on the classification of scorpions, followed by some observations upon synonymy, with descriptions of new genera and species[END_REF]. In fact, this group corresponds to scorpions defined by a large size, 80 to 110 mm in total length, presenting long and slender pedipalps, in particular in males and with an overall dark coloration. This group corresponded well to the Tityus asthenes group of species as defined by Lourenço (2002a), this until the more precise definition of subgenera within Tityus (Lourenço, 2006), which placed these large scorpions in the subgenus Atreus Gervais 1843. 2012). The subsequent discovery and description of several new species belonging to this group of scorpions in the last 30 years changed the past opinion about their models of distribution and showed that most species could have less extended and much more localised ranges of distribution (Lourenço, 1997(Lourenço, , 2002b(Lourenço, , 2011(Lourenço, , 2017)). A recent study on some scorpions from Ecuador (Ythier & Lourenço, 2017) reopened the question about the true identity of some Tityus from this country, and in particular that of Tityus asthenes. In an old paper on scorpion fauna of Ecuador, Lourenço (1988) suggested that Tityus asthenes was the most common species of the subgenus Atreus, being present in both the cisAndean and transAndean regions of the country. The type specimen of Tityus asthenes was examined by the senior author early in the 1970s, while yet a student, without any final resolution about its true identity (Figs. 13). The recent reanalysis of the female holotype of T. 
asthenes clearly demonstrates that this species does not belongs to the subgenus Atreus but rather to the subgenus Tityus and to the group of species Tityus bolivianus (Lourenço & Maury, 1985; of two new specimens collected in the Amazon region of Ecuador, close to the Peru border and associated to T. asthenes in Lourenço & Ythier, 2013, led us to consider this Tityus population as a new species. Until now the cisAndean and transAndean populations of Tityus (Atreus) found in Ecuador were considered as a single one (Lourenço, 1988;[START_REF] Brito | A checklist of the scorpions of Ecuador (Arachnida: Scorpiones), with notes on the distribution and medical significance of some species[END_REF], however we now consider these as possibly different and separated by the Andean mountain system. The material cited by Lourenço (1988) from the Napo province was not restudied, but most certainly corresponds to the new species described here. The transAndean populations of Tityus subgenus Atreus in Ecuador, in particular those from the Province of Esmeraldas, will require some further studies to have their status clearly redefined, since it may correspond to a not described species. The Amazon region of Ecuador, known as the Oriente, is exceptionally rich in biodiversity of both flora and fauna. Much of the Oriente is tropical rainforest (Fig. 6.), starting from the east slopes of the Andean mountains (upland rainforest) and descending into the Amazon basin (lowland rainforest). It is crossed by many rivers rising in the Andean mountains and flowing east towards the Amazon River. The lowlands in the Oriente, where the new species was found (type localities going from 250 m (Kapawi) to 310 m (Yaupi) above sea level), have a warm and humid climate year round, and typically receives more than 2000 mm (average 3500 mm) of rain each year, April through June being the wettest period. Temperatures vary little throughout the year and averages 25° C, with variation between daytime (up to 28° C) and nighttime (about 22° C). Lowland rainforests contain the tallest trees of all types of rainforest, with the largest variety of species. The tree canopy typically sits 2040 m above the ground where vegetation is sparse and comprises mainly small trees and herbs that can experience periodical flooding during heavy rains. For several sections of the lowland rainforest such as the canopy, knowledge of the scorpion fauna is still almost nonexistent (Lourenço & Pézier 2002). Consequently, the effective number of species in the Amazon region of Ecuador may be much greater than what is presently estimated. Material and Methods Illustrations and measurements were produced using a Wild M5 stereomicroscope with a drawing tube and an ocular micrometer. Measurements follow [START_REF] Stahnke | Scorpion nomenclature and mensuration[END_REF] and are given in mm. Trichobothrial notations follow [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF], while morphological terminology mostly follows [START_REF] Vachon | Etude sur les Scorpions[END_REF] and [START_REF] Hjelle | Anatomy and morphology[END_REF]. Comparative material: • Tityus obscurus (Gervais, 1843): French Guiana, Réserve de la Trinité, XII/2010 (C. Courtial), 2 males, 1 female. • Tityus apiacas Lourenço, 2002: Brazil, Pará, Itaiatuba, Bacia do Rio Jamanxim, 5/XII/2007 (J. Zuanon, 1 male; Amazonas, BR319, km 350, trilha 1, ponto 500 (5°16'11.28" S 61°55'46.8" W), 25/VII:2001, pitfall (H. 
Guariento & L. Pierrot), 1 male. • Tityus dinizi Lourenço, 1997: Brazil, Amazonas, Anavilhanas, X/1999 (J. Adis), 2 males. Etymology. The specific name refers to the geography of the region where the new species was found, between Amazon and Oriental Andes in Ecuador. Diagnosis. A moderate species when compared with the average size of other species in the subgenus Atreus: male 72.8 mm and female 70.1 mm in total length (see Table I). General pattern of pigmentation reddishbrown to brown overall. Basal middle lamella of female pectines dilated, but less conspicuous when compared with that of several other species of the subgenus Atreus. Subaculear tooth moderately long and spinoid. Pectinal tooth count 1919 in male and 2120 in female. Fixed and movable fingers of the pedipalp with 1516 oblique rows of granules. Ventral carinae of metasomal segments II to IV parallel in configuration. Pedipalps and in particular chela fingers with a strong chetotaxy. Trichobothriotaxy Aα orthobothriotaxic. The new species may be an endemic element to the occidental region of Amazon. Description based on male holotype and female paratype. Measurements in Table I. Coloration. Basically reddishbrown to brown overall. Prosoma: carapace reddishbrown with some dark pigment on the carinae. Mesosomal tergites reddishbrown with one darker transverse stripe on the posterior edge of tergites IVI. Metasoma: segments I to V reddishbrown; IV and V darker than the others and with some blackish regions over carinae. Vesicle: dark reddishbrown; aculeus reddish at the base and dark reddish at the tip. Venter reddishyellow; sternites with dark zones on lateral and posterior edges; sternite V with a white triangular zone on posterior edge, better marked on male; pectines pale yellow to white. Chelicerae reddishyellow with a dark thread; fingers blackish with dark reddish teeth. Pedipalps: reddishbrown; fingers dark, almost blackish with the extremities yellow. Legs reddishbrown to brown. Morphology. Carapace moderately to strongly granular; anterior margin with a moderate to strong concavity. Anterior median superciliary and posterior median carinae moderate to strong. All furrows moderately to strongly deep. Median ocular tubercle distinctly anterior to the centre of carapace. Eyes separated by more than one ocular diameter. Three pairs of lateral eyes. Sternum subtriangular. Mesosoma: tergites moderately to strongly granular. Median carina moderate in all tergites. Tergite VII pentacarinate. Venter: genital operculum divided longitudinally; each half with a semioval to semitriangular shape. Pectines: pectinal tooth count 1919 in male holotype and 2120 in female paratype; basal middle lamellae of the pectines dilated in the female and inconspicuously dilated in male. Sternites with a thin granulation and elongate spiracles; VII with four carinae better marked on female. Metasomal segments with 108885 carinae, crenulated, better marked on female. Dorsal carinae on segments I to IV with one to three spinoid granules, better marked on female. Lateral inframedian carinae on segment I complete, crenulate; represented by 13 granules on II; absent from III and IV. Ventrolateral carinae moderate to strong, crenulated on female, smooth on male. Ventral submedian carinae crenulate. Intercarinal spaces weakly granular. Segment V with dorsolateral, ventrolateral and ventromedian carinae crenulated on female, inconspicuous in male. Lateral intercarinal spaces moderately granular on female, smooth on male. 
Telson granular on female, smooth on male, with a long and strongly curved aculeus on both sexes. Dorsal surface smooth in both sexes; ventral surface weakly granular in females; subaculear tooth spinoid, shorter in male. Cheliceral dentition characteristic of the family Buthidae [START_REF] Vachon | De l'utilité, en systématique, d'une nomenclature des dents des chélicères chez les Scorpions[END_REF]; movable finger with two well formed, but reduced, basal teeth; ventral aspect of both fingers and manus with long dense setae. Pedipalps: femur pentacarinate; patella with seven carinae; internal face of patella with several spinoid granules; chela with nine carinae and the internal face with an intense granulation; other faces weakly granular. Femur, patella and chela fingers with a strong chetotaxy. Fixed and movable fingers with 1516 oblique rows of granules. Trichobothriotaxy; orthobothriotaxy Aα [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF][START_REF] Vachon | Sur l'utilisation de la trichobothriotaxie du bras des pédipalpes des Scorpions (Arachnides) dans le classement des genres de la famille des Buthidae Simon[END_REF]. Legs: tarsus with numerous short fine setae ventrally. Relationships. Taking into account the fact that previous populations of Tityus (Atreus) from Ecuador and, in particular, those from the Amazon region were associated to Tityus asthenes (Lourenço, 1988, Lourenço & Ythier, 2013), it would be logical to associate the new species to this one. However, the type locality of Tityus asthenes, supposedly in Peru, remains unclear, and the specimen described by Pocock could have been collected in quite different regions in tropical America. Moreover, the reanalysis of the general morphology of the female holotype brings confirmation that T. asthenes does not belongs to the subgenus Atreus, but rather to the subgenus Tityus and to the Tityus bolivianus group of species. It has a rather small size with only 53 mm in total length and a tegument strongly smooth with weakly marked carinae and granulations; the subaculear tubercule is small and very sharp (see Lourenço & Maury, 1985). These considerations led us to rather associate the new species to other Tityus (Atreus) distributed in the Amazon basin such as Tityus obscurus, Tityus dinizi, Tityus apiacas and Tityus tucurui Lourenço 1988(Figs. 2328). Tityus cisandinus sp. n. can however be distinguished from these cited species by: I) a rather smaller global size (see Table 1) with marked different morphometric values; II) better marked carinae and granulations; III) stronger chetotaxie on pedipalps. Moreover the geographical range of distribution appears as quite different (see Lourenço, 2011Lourenço, , 2017)). The new species is a possible endemic element to the Andean/Amazon region of Ecuador and Peru. In our opinion, the material listed by Teruel (2011) from Loreto in Peru and associated to Tityus asthenes, corresponds in fact to the new species described here. No confirmation is however possible since the cited material is deposited in the private collection of this author and not accessible. Figs. 13 . 13 Figs. 13. Tityus asthenes, female holotype. 12. Habitus, dorsal and ventral aspects. 3. Detail of ventral aspect, showing coxapophysis, sternum, genital operculum and pectines. Black & white photos taken in 1972. Figs. 45 . 45 Figs. 45. Tityus asthenes, female holotype. Habitus, dorsal and ventral aspects. Recent colour photos taken in 2017. 
Labels attest that the type was examined by F. Matthiesen, while in the Muséum in Paris during 1972. (Scale bar = 1 cm). All specimens are now deposited in the collections of the Muséum national d'Histoire naturelle, Paris, France. Fig. 6 . 6 Fig. 6. The natural habitat of Tityus cisandinus sp. n. (rio Capahuari, Pastaza province), covered by rainforest. Figs. 710 . 710 Figs. 710. Tityus cisandinus sp. n., male holotype and female paratype. Habitus, dorsal and ventral aspects. Figs. 1114 . 1114 Figs. 1114. Tityus cisandinus sp. n. Male holotype (1113) and female paratype (14). 11. Chelicera, dorsal aspect. 12. Cutting edge of movable finger showing rows of granules. 1314. Metasomal segment V and telson, lateral aspect. Figs. 1520 . 1520 Figs. 1520. Tityus cisandinus sp. n. Male holotype (1519) and female paratype (20). Trichobothrial pattern. 1516. Chela, dorsoexternal and ventral aspects. 1718. Patella, dorsal and ventral aspects. 1920. Femur, dorsal aspect. Fig. 21 . 21 Fig. 21. Tityus cisandinus sp. n. Female paratype alive. Fig. 2324 . 2324 Fig. 2324. Tityus obscurus from French Guiana. Habitus of male, dorsal and ventral aspects. Fig. 2526 . 2526 Fig. 2526. Tityus dinizi from Brazil. Habitus of male, dorsal and ventral aspects. Fig. 2728 . 2728 Fig. 2728. Tityus apiacas from Brazil. Habitus of male, dorsal and ventral aspects. see taxonomic section; Figs 45). Moreover, the study Table I . I Measurements (in mm) of the male holotype and female paratype of Tityus cisandinus sp. n. and males of Tityus dinizi (Brazil), Tityus apiacas (Brazil) and Tityus obscurus (French Guiana). Tityus cisandinus sp. n. Tityus dinizi T. apiacas T. obscurus ♂ ♀ ♂ ♂ ♂ Total length: 72.8 70.1 96.1 81.8 90.5 Carapace: Length 7.6 8.1 9.1 8.4 8.1 Anterior width 5.7 5.6 6.7 6.2 6.3 Posterior width 7.9 9.0 9.5 9.1 8.7 Mesosoma length 16.9 19.2 19.8 20.2 24.4 Metasomal segment I. length 5.8 5.2 8.7 6.6 7.6 width 3.8 4.2 4.1 4.2 4.2 Metasomal segment II. Length 7.8 6.3 10.8 8.5 9.2 Width 3.6 4.1 3.8 4.3 4.0 Metasomal segment III. Length 8.7 7.1 12.2 9.6 10.1 Width 3.6 4.2 3.9 4.4 4.1 Metasomal segment IV Length 9.4 8.2 13.6 10.5 10.6 Width 3.8 4.1 4.0 4.7 4.4 Metasoma, segment V. length 9.9 8.8 13.5 10.7 10.9 width 4.0 4.1 4.1 4.8 4.6 depth 3.8 3.9 4.1 4.4 3.9 Telson length 6.7 7.2 8.4 7.3 9.6 Vesicle: width 3.2 2.9 3.3 3.2 3.2 depth 3.0 2.9 3.2 3.2 3.1 Femur: length 9.8 8.4 13.3 13.2 13.3 width 2.2 2.4 2.4 2.2 2.3 Patella: length 10.1 9.1 13.8 13.8 14.0 width 2.6 3.0 3.1 2.8 2.8 Chela: length 17.4 15.7 21.5 21.8 23.1 width 2.5 2.9 2.8 2.6 2.5 depth 2.3 2.8 2.6 2.5 2.3 Movable finger: length 11.3 10.8 13.2 13.4 13.5 Acknowledgements We are most grateful to Janet Beccaloni (NHM, London) for useful information on the type specimen of Tityus asthenes and for providing colour photos of the type. We are also grateful to EliseAnne Leguin (MNHN, Paris) for the preparation of several photos and plates. List of the Ecuadorian species of Tityus Genus Tityus C. L. Koch, 1836
17,191
[ "1011345", "1030314" ]
[ "519585" ]
01759162
en
[ "shs" ]
2024/03/05 22:32:10
1994
https://hal.science/cel-01759162/file/CRANERBC.pdf
Paul Carmignani CRANE: THE RED BADGE OF COURAGE. Master Paul Carmignani S Stephen Crane STEPHEN CRANE: THE RED BADGE OF COURAGE établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. 1 PR. PAUL CARMIGNANI Université de Perpignan-Via Domitia STEPHEN CRANE THE RED BADGE OF COURAGE A SLICE OF LIFE When Crane published The Red Badge of Courage in 1895, his second attempt at novelwriting met vith triumphant success. His first novel or rather novelette, Maggie : A Girl of the Streets, dealt with the life of the underprivileged in their ordinary setting; it was published under a pseudonym with little or no success in 1893. With his Civil War story, Crane became famous almost overnight; at the time of publication he was chiefly a picturesque figure of the world of the press, the breeding-ground of many American literary talents. Crane was born in Newark, New Jersey, on November 1st, 1871. He was the 14th child of Jonathan Townley Crane (D. D.), a clergyman who died in 1880 leaving the young Stephen under the care of his mother and elder brothers. Crane was brought up in a religious atmosphere but he did not follow in his father's footsteps and kept himself aloof from religion. However, he remained dominated by fundamental religious patterns and preceptscharity, fraternity, redemption and salvation which he kept at an earthly level. In 1892, he became a free-lance journalist in New York after attending many schools and colleges where he evinced little taste for academic life. In New York he began his apprenticeship in bohemianism and learnt the value of careful on-the-spot study and observation of what he wanted to write about. His experience with life in the slums found expression in Maggie. In this novel, Crane showed himself to be impelled by the spirit of religious and social rebellion. Two years later the RBC saw print first in serial form and then as a book in New York and London where it was warmly received. Celebrity did not divest him of his feeling for the "underdog" for in that same year he ran into trouble with the metropolitan police force on account of a prostitute unjustly accused and bullied. Crane had to leave New York and accepted a commission to report the insurrection in Cuba against Spanish rule. In 1897 he met Cora Howorth the proprietress of a house of ill fame. In spite of the scandal raised by such an association, they were to live together for the rest of his life. Crane deliberately sought experiences where he could pit himself against danger; he covered the Greco-Turkish War for the New York Journal and the Westminster Gazette and decided to stay on in England at the end of his commission. In 1898, he published "The Open Boat and Other Tales of Adventure". In England he associated with such writers as Conrad, Wells, James etc., but soon got tired of country life and itched for action again. He left England to report the Spanish-American conflict but this new experience seriously impaired his health. He was back in England one year after and turned out an impressive body of fiction to avoid bankruptcy (one volume of verse: War is kind; Active Service, etc.). Meanwhile his body and brain gradually weakened but he went on writing to the end. S. Crane died on June, 5, 1900, in Badenweiler (Germany) where Cora had taken him in the hope he would miraculously recover from tuberculosis. 
COMPOSITION AND PUBLICATION The novel was composed between April 1893 and the Fall of 1894; it relates a three day battle which took place at Chancellorsville (Va) from May 1st to May 3rd, 1863. The name is never mentioned in the story but critics and historians have been able to detect numerous allusions to the actual troop movements and setting. The Red Badge of Courage was 1st published in an abbreviated form as a newspaper serial in 1894. The text was shortened to 16 chapters out of 24. In June 1895 D. Appleton and Company agreed to publish the complete text and thus The Red Badge of Courage came out in book form on October 1 st , 1895. From the start it became a bestseller owing to the favorable reviews published in the British press. Legend has it that Crane wrote the RBC on the dare of a friend to do better than Zola's La Débacle. This is a moot point; Crane never referred to such a wager though he stated in a letter: "I deliberately started in to do a potboiler [...] something that would take the boarding-school elementyou know the kind. Well I got interested in the thing in spite of myself. [...] I had to do it my own way." Though Crane was later to declare that he had got all his knowledge of war on the football pitch as a boy, he read all the factual material he could get about the Civil War before composing the RBC. The most obvious source was the important series of military reminiscences, Battles and Leaders of the Civil War used ever since it was published by all writers on the Civil War. Crane's history teacher at Claverack, General John B. Van Petten, a veteran, is often mentioned as a possible source of information on the conflict between the States. Crane had also read Civil War periodicals and studied Winslow Homer's drawings and Matthew Brady's photographs. Crane's novel was published at a time when America was undergoing great changes. The most prominent feature on the intellectual landscape of the age was disillusion and Crane's RBC is a faithful reflection of that crisis of ideal, the failure to recover from which marks the end of an era as well as a century and the beginning of modernity. The RBC represents a deliberate departure from the conventions of the "genteel tradition" of culture (the expression was coined by the American philosopher George Santayana and has been used since the late 19 th century to refer to a group of writers who had established literary, social and moral norms insisting on respect for forms and con-ventions). Crane falls in with the conventions of his time and ruthlessly debunks all the traditional views about heroism or the popular glorification of military courage. As a Crane scholar said "Crane meant to smash icons" in writing the RBC. Emerson is often cited to define Crane's purpose in art: "Congratulate yourself if you have done something strange and extravagant and broken the monotony of a decorous age". Although essentially American in his stance, Crane was also a rebel against many things American and the RBC may be considered as a tale of deflation aiming at the reduction of idealism. STRUCTURE OF THE NOVEL The novel consists of 24 chapters relating, as the original title for the novel pointed out -Private Fleming : His Various Battles -, the war experiences of a young boy leaving his mother's farm to enlist in the Union forces. The whole story mainly deals with the protagonist's evolution from a country boy to a veteran-like soldier. 
It is a chronicle of the boy's anxieties, moods and impressions as he progresses through the series of episodes making up the novel. The formal structure of the book is rather simply Aristotelian. It has a beginning (Chapters I-IV) which gets the youth to real battle; a middle (Ch. V-XIII) which witnesses his runaway and return; and an end (Ch. XIV-XXIII) which displays his achievement of "heroism" at climax, followed by a certain understanding of it in a coda-like final chapter. The middle and end sections are replete with notations of Fleming's psychological responses to fear, stress and courage. The narrative is conducted on two levels, the physical setting, the outer world where the action takes place, as opposed to the emotional plane or the mind of Henry Fleming. As the story is told from H. Fleming's point of view, the two planes are fused into one unified impression. The action can be divided into five parts: 1. Henry before the battle: Chapters I-IV 2. The youth sees real fighting: Chapters V-VI 3. Chapters VII-XI relate his flight 4. Chapters XII-XIV relate how H. F. got wounded and his return the regiment 5. Chapters XV-XXIII depict his achievement of heroism and are followed by a coda-like final chapter XXIV where H. F. reaches a certain degree of self-awareness. The general pattern of the book is that of a journey of initiation or a spiritual journey reflecting the mental condition of one of the men in the ranks. Crane's aimas stated beforewas to carry out a detailed observation of "the nervous system under fire". A Summary: Chapt. I-II: the first two chapters are merely introductory chapters presenting the main characters in an anonymous way. We make the acquaintance of "the tall soldier" (2) later to be given his full identity, Jim Conklin (11); "the blatant soldier" (16), we do not learn his full name -Wilson until later (18) and "a youthful private" or "the youth" whose name is also revealed later on (72). The young soldier hearing that the regiment is about to be sent into battle puts the central question: "How do you think the reg'ment'll do?" (11). Both the hero and the regiment are untried and thus will have to face the same fate. Ch. III: The regiment crosses the river. Henry is "about to be measured" (23). He experiences mixed feelings: he sometimes feels trapped by the regiment ("he was in a moving box", 23); the landscape seems hostile to him but his curiosity about war is stronger than his fears. Yet after his first encounter with the "red animal" (25) and a dead soldier (24), the hero begings to realize that war is not kind and more like "a trap" (25). The loud soldier gives Henry a "little packet" of letters (29). Ch. IV: Rumors of an imminent advance run through the ranks. H and his comrades witness the retreat ("stampede", "chaos")of a first wave of Union soldiers; H. resolves "to run better than the best of them" (33) if things turn out badly. Ch. V: This is the first picture of an actual battle in the book ("We're in for it", 35) and the protagonist goes through his baptism of fire in a kind of "battle sleep" (37). H. "becomes not a man but a member" (36) and experiences a "subtle battle brotherhood" (36) though he is somewhat disappointed by "a singular absence of heroic poses" (37). H. is compared to a "pestered animal" and "a babe" (37). Indifference of Nature (40). Ch. VI: H. feels he has passed the test but when the enemy launches a 2 nd attack "self-satisfaction" (40) gives way to self-preservation. This is a turning-point in the novel; when H. 
is cut off from the community which the regiment symbolizes, he feels alone and panics. He flees from the battlefield and pities his comrades as he runs ("Methodical idiots! Machine-like fools", 45). Ch. VII: H.'s disappointment: "By heavens, they had won after all! The imbecile line had remained and become victors" (47). Feeling "self-pity" (48) and "animal-like rebellion" Henry "buries himself into a thick woods" (48) and in the chapel-like gloom of the forest there takes place the 2 nd encounter with a dead man ("the dead man and the living exchanged a long look" 50). He believes that Nature can justify his running away (episode of the suirrel, 49). Ch. VIII: H. meets "the spectral soldier" (54) who turns out to be Jim Conklin and "the tattered soldier" who plagues him with embarrassing questions ("Where yeh hit?" 56). The tattered soldier seems to be the embodiment of H's early idea of himself as a war hero. At this stage, Nature seems no longer comforting but hostile. Ch. IX: Essentially concerned with the death of J. Conklin and its effect on Henry ("letters of guilt"/"He wished that he, too, had a red badge of courage", 57). Crane's realism is at its best in this chapter which comes to a close with one of the most famous images in all American literature: "The red sun was pasted in the sky like a wafer". Ch. X: H. deserts the tattered soldier who keeps "raising the ghost of shem" (64) and comes closer to the battlefield. H. envies the dead soldiers. Ch. XI: The men eventually retreat, which Henry interprets as a vindication of "his superior powers of perception" (70) but he nonetheless feels excluded from "the procession of chosen beings" (67) and stigmatized by "the marks of his flight" (68), hence "self-hate" (69) = "he was their murderer" (71). Ch. XII: During the helter-skelter retreat of his comrades, H. gets struck with a rifle-butt and finally gets a wound not from the enemy but ironically from one of his fellow soldiers who is also fleeing in a panic. Ch. XIII: Traces H's journey back to his regiment. Appearnace of "the cheerful soldier" who takes him back into the fold of the community. H. who lies: "I got separated from th' reg'ment" ( 80) is taken care of by Wilson. Ch. XIV: H. wakes up to "an unexpected world" (85). Wilson has meanwhile undergone a most spectacular change: he is "no more a loud young soldier" (87). Ch. XV: H toys with the idea of knocking his friend "on the head with the misguided packet" of letters but refrains from doing so ("It was a generous thing", 93). He builds up "a faith in himself" (92); "he had fled with discretion and dignity" and fancies himself "telling tales to listeners" (93). Ch. XVI: H. assumes the rôle of Wilson, the former "loud soldier"; he rants and raves about the officers and the army and soons gains a measure of self-confidence. But the words of a "sarcastic man" turn him into "a modest person" again (97). Ch. XVII: The enemy adavances. H. in a state of frenzy fires his gun like an automaton; an officer has to stop him and his comrades look upon him as a "war-devil" (103). Yet his achievement is diminished by the fact that "he had not been aware of the process. He had slept and awakening, found himself a knight". Ch. XVIII: A short pause. H and his friend on an eminence overhear a conversation between two officers who call the regiment "mule-drivers" (106). Ch. XIX: Put on their mettle by this sarcasm, H and Wilson lead the charge and even take hold of the regiment's flag. H. 
eventually experiences "a temporary absence of selfishness" (110). Ch. XX: The regiment hesitates and retreats ("the retreat was a march of shame to him", 115); the enemy is as confused as the attackers (118). H. takes a stand and his comrades eventually drive back the enemy. Enthusiasm: "And they were men" (119). Ch. XXI: The veterans make fun of the "fresh fish" who "stopped about a hundred feet this side of a very pretty success" (122). H. and Wilson are congratulated for their gallantry, hence "the past held no pictures of error and disappointment" (125). Ch. XXII: Self-confidence; H. bears the colors and resolves "not to budge whatever should happen" (129). Ch. XXIII: H. and the other men charge "in a state of frenzy" (132). Wilson captures the enemy flag. Ch. XXIV: Ironic overtones / the men must go back to where they started from; all the fighting has been to no avail. H's "brain emerges" (137) after a momentray eclipse. H. reviews "the delightful images of memory" (138), "puts the sin at a distance" (139) and feels "he was a man". By the end of the novel he seems to have acquired a new awareness of his own powers and limitations though Crane's irony makes it hard to form a definitive opinion on the hero's progress Once brought down to its essentials the plot appears as a mere journey of initiation following the ternary pattern of any basic narrative process. Some sort of formalist approach could be applied to the story: According to T. Todorov quoting Alan Dundes: "le processus narratif de base consiste dans une action qui se developpe en 3 temps: état de départ, processus proprement dit, résultat" (p. 85). Claude Brémond dans son ouvrage Logique du récit substitue à cette séquence une triade assez voisine correspondant aux trois temps marquant le développement d'un processus: virtualité, passage à l'acte, achèvement. Il précise de plus que "dans cette triade, le terme postérieur implique l'antérieur: il ne peut y avoir achèvement s'il n'y a eu passage à l'acte, il ne peut y avoir passage à l'acte s'il n'y a eu virtualité. 2° le passage à l'acte de cette virtualité (par exemple le comportement qui répond à 1'incitation contenue dans la situation "ouvrante"); 3° l'aboutissement de cette action qui "clôt" le processus par un succès ou un échec. Nous pourrions proposer pour le RBC le schéma suivantmême s'il ne rend compte que d'un aspect de l'oeuvre: MOTHER- ----------------------------------------------------------------------FLAG  _Impunité  BOY/YOUTH MAN IGNORANCE/INNOCENCE----------------------------------------KNOWLEDGE/GUILT THE CHARACTERS The characters in the RBC fall into 2 categories based on the opposition between the group or community on the one hand -say the Army and Regiment -and the individual or rank and file on the other. As Henry's mother tells him: "Yer jest one little feller amongst a hull lot of others..." (6) The Army/The Regiment The image of the "monster" is the most frequently associated with the army which is first described as a "crawling reptile" (15) and compared to "one of those moving monsters wending with many feet" (15). This "composite monster" (33) is given "eyes", "feet" (1) and "joints" (42). The army is indeed an anonymous mass of men, "a vast blue demonstration" (8), a kind of arrested or ordered chaos ready to turn loose at the 1 st word of command. This accounts for the depiction of the regiment as a "mighty blue machine" (71) which of course works "mechanically" and sometimes "runs down" (116). 
This idea is also conveyed by the image of the box cropping up p. 23: "But he instantly saw that it would be impossible for him to escape from the regiment. It enclosed him. And there were iron laws of tradition and law on four sides. He was in a moving box". It is in its fold that Henry experiences a mysterious fraternity "the subtle battle brotherhood" that comes from sharing danger with other people (36): "He suddenly lost concern for himself and forgot to look at a menacing fate. He became not a man but a member. He felt that something of which he was a part -a regiment, an army, a cause or a country -was in a crisis. He was welded into a common personality which was dominated by a single desire. For some moments he could not flee no more than a little finger can commit a revolution from a hand" The annihilation of personality in the course of a trance-like submersion in the group-will is to be opposed to the image of separation or even amputation from the same group: "If he had thought the regiment was about to be annihilated perhaps he could have amputated himself from it" (36) "He felt that he was regarding a procession of chosen beings. The separation was a threat to him as if they had marched with weapons of flame and banners of sunlight. He could never be like them." In fact, the regiment sometimes appears as a sort of mother-substitute ("His [the tattered soldier's] homely face was suffused with a light of love for the army which was to him all things beautiful and powerful" 56) and Henry's return to the regiment may be likened to a return to the womb (see ch. 13 last section). THE PRIVATES The RBC focuses only on 3 soldiers: Henry Fleming, Jim Conklin and Wilson. Although the theme of the novel is the baptism of fire of a Union private the tone is psychological rather than military. Its main characters are neither fully representative nor yet particularly individual and are, most of the time designated as figures in an allegory. Crane aims at a certain depersonalization of his characterss they represent "Everyman". He tried to turn the battle he described into a type and to that effect he erased all names in the process of revision so as to give an allegorical tone to his war tale. According to M. H. Abrams's A Glossary of Literary Terms, an allegory "is a narrative in which the agents and action, and sometimes setting as well, are contrived not only to make sense in themselves but also to signify a 2 nd , correlated order of persons, things, concepts or events. There are two main types: (1) Historical and political allegory, in which the characters and the action represent or allegorize, historical personages and events. [...] (2) The allegory of ideas, in which the characters represent abstract concepts and the plot serves to communicate a doctrine or thesis. In its development the RBC bears strong resemblance to The Pilgrim's Progress from which it borrowed one of the oldest and most universal plots in literature: that of the Journey, by land or water. In the case of the RBC the pattern is that of a journey leading through various experiences to a moral or spiritual change. This explains why the personages are at first mere labels, archetypal characters slowly acquiring some sort of identity. 
Thus the novel refers to "the youth" or "youthful private" (25); "the tall soldier", (J.C 11); "the blatant soldier" (Wilson; 165; there also appears "a piratical private" (17); "a sarcastic man" (96) and lastly "the man of the cheery voice" (78) Our analysis will be limited to the first three ones. HENRY FLEMING/THE YOUTH When Henry Fleming, motivated by dreams of glory and stirred by voices of the past and visions of future deeds of bravery, enlists in the Union forces, he severs the bonds tying him to his mother and his rural life. Once in the army he is left to himself, stranded in a new environment where he must find new bearings: "Whatever he had learned of himself was here of no avail. He was an unknown quantity. He saw that he would again be obliged to experiment as he had in early youth. He must accumulate information of himself..." (9). From the outset, Henry is confronted with the question of self-knowledge; all he cares to know is whether he is the stuff that heroes are made from: "He was forced to admit that as far as war was concerned he knew nothing of himself" (9). Henry must in other words prove himself in his own eyes as well as in other people's and he sets about it accountant-fashion; he is forever trying to square accounts with himself and to prove by dint of calculations that he can't be a coward: "For days he made ceaseless calculations, but they were all wondrously unsatisfactory. He found that he could establish nothing" ( 12) "He reluctantly admitted that he could not sit still and with a mental slate and pencil derive an answer." (13) Henry is holding an eternal debate with himself and this excruciating self-examination bespeaks a Puritan conscience, beset with self-doubts and the uncertainty of Salvation: "the youth had been taught that a man becomes another thing in a battle. He saw salvation in such a change" (29). Fleming is afraid of being weighed in the balance and found wanting and ironically enough the more he compares himself with his comrades the more puzzled he is: "The youth would have liked to have discovered another who suspected himself. A sympathetic comparison of mental notes would have been a joy to him. [...] All attempts failed to bring forth any statement which looked in any way like a confession to those doubts which he privately acknowledged in himself" (13) Soon Henry comes to experience a feeling of seclusion: "He was a mental outcast" (20) and suffers from lack of understanding on the part of the others: "He would die; he would go to some place where he would be understood" (29). In fact Henry has only himself to blame; he is not kept apart, he does keep himself apart from the others. Characteristically, Henry turns the tables and rationalizes his situation by convincing himself that he is endowed with superior powers: "It was useless to expect appreciation of his profound and fine senses from such men as the lieutenant" (29) or again: "He, the enlightened man who looks afar in the dark, had fled because of his superior perceptions and knowledge". This accounts for the repeated theme of Fleming's prophetic rôle towards his comrades and the world: "How could they kill him who was the chosen of gods and doomed to greatness?" (92). 
One is often under the impression that Henry suffers from delusions of grandeur and becomes a prey to his imaginings; Henry's imagination is both an asset and a liability; it sometimes goads him into a spurious sort of heroism: "Swift pictures of himself apart, yet in himself, came to him -a blue desperate figure leading lurid charges with one knee forward and a broken blade high a blue determined figure standing before a crimson and steel assault, getting calmly killed on a high place before the eyes of all" (67-68) oron the contraryparalyses him with groundless fears: "A little panic-fear grew in his mind. As his imagination went forward to a fight, he saw hideous possibilities. He contemplated the lurking menaces of the future, and failed in an effort to see himself standing stoutly in the midst of them" (95). Henry is actually stalemated by his imagination; the only way out is for him to take the plunge i.e. to find out in action what he cannot settle in his imagination: "He finally concluded that the only way to prove himself was to go into the blaze, and then figuratively to watch his legs and discover their merits and faults" (12) After a trying period of waiting Henry sees action for the 1 st time in his life; in the furnace of battles the youth loses all concern for himself and for a brief moment dismisses all his qualms: "He became not a man but a member" [...] "He felt the subtle battle brotherhood more potent even than the cause for which they were fighting." (36). After his short brush with the enemy, the youth goes into an ecstasy of self-satisfaction; his appreciation of his own behaviour is out of proportion with the importance of the encounter: "So it was all over at last! The supreme trial had been passed. The red, formidable difficulties of war had been vanquished. [...] He had the most delightful sensations of his life. Standing as if apart from himself, he viewed the last scene. He perceived that the man who had fought thus was magnificent." His self-confidence however is short-lived; a second attack launched against his line causes his sudden panic and flight: "There was a revelation. He, too, threw down his gun and fled. There was no shame in his face. He ran like a rabbit. (41). As befits a country boy, Henry seeks refuge in Nature and even tries to enlist her sympathy: "Nature had given him a sign. The squirrel, immediately upon recognizing danger, had taken to his legs without ado. [...] The youth wended, feeling that Nature was of his mind. She re-enforced his argument with proofs that lived where the sun shone". ( 49) The dialogue he carries on with his own conscience often contains overtones of legalistic chicanery: it is a constant search for excuses to justify his cowardly conduct: "His actions had been sagacious things. They had been full of strategy. They were the work of a master's legs." (48) "He felt a great anger against his comrades. He knew it could be proved that they had been fools." Fleming even goes to the length of wishing the army were defeated; that would provide him with "a means of escape from the consequences of his fall." (71) The use of the term "fall" bears witness to the religious undertone of his experience and the protagonist's preoccupation with moral responsibility ("He denounced himself as a villain" [...] he was their [the soldiers'] murderer" 71) and personal redemption has led many criticts to compare The Red Badge of Courage with Hawthorne's The Scarlet Letter. 
The same image lies at the core of the two novels; we find the "scarlet letter" on the one hand and reference to "a red badge" or "the letters of guilt [...] burned into [the hero's] brow" (57) on the other hand. Both novels deal with the discovery through sin, of true and authentic self, the isolation which is the consequence of that discovery, and the struggle to endure which is the consequence of that isolation. Henry knows "he will be compelled to doom himself to isolation" (71) because the community represents a possible danger from which he must take refuge ("The simple questions of the tattered man had been knife thrusts to him. They asserted a society that probes pitilessly at secrets until all is apparent" 65). The price Henry has to pay to be counted again in the ranks of his comrades is that of public acknowledgement of private sin. The price is too high and the youth will go on wandering about the battlefield. The episode related in chapter 12 is one of the turning-points in Henry's progress. After being ironically awarded the red badge of courage he had been longing for Henry can return to his regiment and try to achieve salvation not through introspection or public confession but simply through action. From then on the story assumes the form of a journey of expiation. Henry is described as a daredevil fighting at the head of his unit during a charge and even catching hold of the colours of the regiment. Yet these deeds of bravery are motivated by his feeling of guilt and his spite over having been called a mule driver (106). Moreover his actions are performed in a sort of battlefrenzy which leaves little room for the expression of any conscious will and determination. War teaches Henry how to be brave but also blunts his moral sense; Henry becomes an efficient soldier but one may wonder if this is a desirable achievement for a country boy. Crane's irony also casts some doubts on the very value of Henry's heroism which appears not so much as a predictable possessionsomething one can just fight forbut as an impersonal gift thrust upon him by a capricious if not absurd Providence. Why was Henry, of all soldiers, spared and given the opportunity to "[...] rid himself of the red sickness of battle" (140) and to turn "with a lover's thirst to images of tranquil skies, fresh meadows, cool brooks -an existence of soft and eternal peace" ? There is some irony in considering that the vistas opening up before Henry are little different from what they would have been had he stayed on his mother's farm. Is that to say that after so many tribulations Henry is back to where he started from ? One may think so. Fleming went off to war entertaining delusions about its very nature and his own capacity for heroism and ends up with a naive vision of harmony with nature. Henry has undoubtedly grown more mature but he can still fool himself about the cause of his wound or the fact that he has fled from battle as much as he fooled other people. ("He had performed his mistakes in the dark, so he was still a man" 91). The fundamental question at issue is the following: does Henry Fleming develop in the course of the novel ? A quotation from p. 139 will provide us with a valuable clue: "Yet gradually he mustered force to put the sin at a distance. And at last his eyes seemed to open to some new ways. He found that he could look back upon the brass and bombast of his earlier gospels and see them truly. He was gleeful when he discovered that he now despised them. 
With this conviction came a store of assurance. He felt a quiet manhood, nonassertive but of sturdy and strong blood. He knew that he would no more quail before his guides wherever they should point. He had been to touch the great death. He was a man." The youth in his baptism of fire has acquired self-kwowledge and experience but a radical change has not taken place within him; he remains in his heroic pose at the close of the novel just as grotesque as the fearful "little man" he was at the beginning. As J. Cazemajou puts it in his study of the novel: "his itinerary led him not to eternal salvation but to a blissful impasse". Henry has become a man, true enough, but a man with a guilty conscience; "the ghost of his flight" and "the specter of reproach" born of the desertion of the "tattered man" in his sore need keep haunting Fleming. His manliness has been acquired at the expense of his humanity. The Red Badge of Courage contains the account of a half-completed redemption. It is only in a satellite story entitled "The Veteran" that Henry drinks the bitter cup to the dregs and purges himself of his former lie by confessing his lack of courage on the battlefield. The other two characters the "tall soldier" and the "blatant soldier" are mere foils for Henry Fleming. THE TALL SOLDIER This character is doubtless one of the most controversial figures in the novel. A small critical warfare has been waged over his rôle and function and many interpretations have been put forward since the book saw print. One hardly expects to see the blustering, self-opinionated, smug news- Another interpreter of the novel, Jean Cazemajou rejects Stallman's thesis and suggests a more matter-of-fact reading. The "tall soldier" he argues, does not undergo any significant change as the story unfolds. Unlike Henry, Jim Conklin is not tormented by fears and self-doubts; he adjusts to his soldier's life with much philosophy: "He accepted new environment and circumstance with great coolness, eating from his haversack at every opportunity. On the march he went along with the stride of a hunter, objecting to neither gait, nor distance." (28) Even if Henry often looks to him for advice and guidance J. Conklin is no embodiment of the Redeemer. The French critic sees in this backwoodsman, led by instinct and immune to unpleasantness, a kind of ironic version of the traditional hero, some sort of natural man plunged into the furnace of war. Jim Conklin's messianic rôle is rather played down in Cazemajou's interpretation; J. Conklin's example has little impact on Henry's behaviour. Far from confessing his cowardice after Jim's death, Henry merely tries to conceal it from his comrades and the sight of the gruesome danse macabre of the spectral soldier triggers off in the youth a certain aggressiveness which is but fear or panic in a different guise. As for the wafer image, it is just an evocation of primitive rites to a blood thirsty God reminiscent of the Aztec religion. Whatever the significance of "the tall soldier" may be, The Red Badge of Courage would lose much of its uniqueness without the haunting image of the "spectral soldier"' stalking stonily to the "rendez-vous" where death is awaiting him. THE LOUD SOLDIER : WILSON Wilson is a much less difficult character to deal with. His evolution from a bragging swashbuckler to a clear-sighted, modest combatant is almost the exact opposite of Henry's progress. 
In chapter 3 Wilson appears as a soldier spoiling for a fight: "I don't mind marching, if there's going to be fighting at the end of it. What I hate is this getting moved here and moved there, with no good coming of its as far as I can see, excepting sore feet and damned short rations." (185). He evinces great confidence in himself: "[...] I'm not going to skedaddle. The man that bets on my running will lose his money that's all." (19) Yet on hearing the first noises of battle, the loud soldier's confidence crumbles away and in a moment of weakness and self-pity he confesses his forebodings to Henry and entrusts him with a packet of letters: "It's my first and last battle, old boy. [...] I'm a gone coon this first time and-and I w-want you to take these here things-to-my-folks". ( 29) Unlike Henry, however, Wilson bears up well under the strain and passes his initiation successfully and acquires in the process of time a quiet manhood which Fleming is to attain only at the end of the battle: "He was no more a loud young soldier. There was about him now a fine reliance. He showed a quiet belief in his purpose and his abilities. And this inward confidence evidently enabled him to be indifferent to little words of other men aimed at him" (87). In the same way, the realisation of his own insignificance is sooner borne in upon him than is the case with Henry: "Apparently, the other had now climbed a peak of wisdom from which he could perceive himself as a very wee thing" 87). Such transformation is emphasized by the reference to the chasm separating Wilson's former self from his present condition ("He spoke as after a lapse of years" 88). In spite of the fact that Henry takes a kind of sadistic pleasure in his friend's embarrassment over getting back his letters, the two characters are drawn together and share the same experiences. They form one of those countless couples of males which are to be found in American Literature (Huck and Jim; George Milton and Lennie Small, etc.); their relations are always slightly tinged with homosexuality (cf. 83). Wilson's bombast and fiery spirits vanish for a time and when he is seen to make pacific motions with his arms and to assume the rôle of a peacemaker among his comrades (89) it comes as something of a surprise. But that peaceful mood does not last long and just like Henry Wilson is carried away by the battle-fury that seizes the regiment. His daring reaches a climax with the capture of the enemy's flag ("The youth's friend went over the obstruction in a tumbling heap and sprang at the flag as a panther at prey" 134). The admiration of their comrades after their courageous attitude under enemy fire kindles the same feeling of elation in both of them: "they knew that their faces were deeply flushing from thrills of pleasure. They exchanged a secret glance of joy and congratulation". (125) Although they are running in opposite directions Henry's and Wilson's itineraries lead up to the same ambiguities, for their heroism is nothing but the ordinary stock of oourage among fighting men and is therefore of uncertain value or meaning. The ease with which they forget the conditions under which they acquired their courage diminishes them: "They speedily forgot many things. The past held no pictures of error and disappointment. They were very happy, and their hearts swelled with grateful affection for the colonel and the youthful lieutenant." 
(125) As is often the case with Crane a strong ironic coloring can easily be detected here and he remains faithful to his intention never to point out any moral or lesson in his stories. THEMES OF THE RED BADGE OF COURAGE On the surface the RBC is a simple tale of warfare yet the scope of the story is greatly enlarged by the themes which run through the narrative and deepen its implications. This attempted survey of the themes underlying the novel is far from being comprehensive and will be limited to four of them, namely: the theme of War, the theme of Nature, the theme of the Sacred and the motif/ theme of Vision. 1) THE THEME OF WAR The novel describes not only the war waged by the Yankees and the Rebs but above all the "the self-combat of a youth" who must prove himself in battle. Henry the protagonist is involved in more than one battle; he fights against the enemy true enough, but as the original title for the book suggests, he also fights private battles with himself. In the author's own words: "Doubts and he were struggling." (68) Henry's fight sometimes achieves cosmic reaches when he comes to feel that "he was engaged in combating the universe" (73) or again ("Yesterday, when he had imagined the universe to be against him, he had hated it" 100). Thus we have war at every level; even Nature proves to be a vast field of battle where one has to fight for survival and this is quite in keeping with Crane's deeprooted convictionwhich lies at the heart of all his fictionthat the essence of life is warfare. The RBC is holding no brief for war; Crane wanted not only to give an account of an actual battle but also to deal with war in the abstract i.e. to debunk the concept of war that had gained currency in his time. War was then considered as a kind of social phenomenon partaking of the Sacred, a ritual game vith fixed rules and much glamour. Whether one was waging or depicting war decorum was always to be preserved. By constantly resorting to irony or the grotesque Crane exposes the seamy side of war and deglamorizes it. The soldiers don't even know why the war is fought; they are led into battle like a flock of sheep to the slaughterhouse, the leaders treat their men like animals. In short, the soldiers are just material that can be used without regard for their lives, mere cannonfodder. The main feature in this study of war is disillusionment; after two days' battle the men march back to their starting-place, all their sacrifices have been to no avail. War is meaningless even if it sometimes appears as a testing-ground of man's courage and stamina but it is not purposeless. Crane anticipated modern theories on the nature of war when he described it as a periodically organized wastage, a process meant to dispose of surplus goods and human lives; in a former version of the text Henry declared: "War, he said bitterly to the sky, was a makeshift created because ordinary processes would not furnish deaths enough" (Cady 128) Despite these strictures, the narrative sometimes lifts war to the plane of the cosmic and the mythic: the battle sometimes appears as a re-enactment of Greek or Biblical struggles. Thus Crane reactivates the myth of Cadmos through references to Homeric struggles and images of dragons or of men springing fully armed from the earth: "The sun spread disclosing rays, and, one by one, regiments burst into view like armed men just born of the earth" ( 23). 
The "red and green monster" looming up (43) bears close kinship to Appollyon the Biblical archetype described in Revelations. Yet, in spite of those references to bygone battles and heroic times, Henry's adventure ends neither in victory nor even in the sacrifice of his life; Henry's stature never exceeds that of a man. In fact, it might be argued that Henry develops into some sort of ironic anti-hero figure since the spirit of rebellion he evinces at the beginning of the story is by and by replaced by a more pliant disposition, a readiness to accept things as they are: "He knew that he would no more quail before his guides wherever they should point" (139). Linked to the theme of war are other attendant motifs such as the theme of courage; the theme of fear and last but not least the wound-motif. As these have already been dealt with, we'll just skip them here and proceed to a discussion of the other three. 2) THE THEME OF NATURE Nature images pervade the narrative and a reference to Nature ends the book as one had begun it. Throughout the story, Nature appears as a kind of "objective correlative" of Henry's emotions. All that Henry can find in Nature is a reflection of his own psychological state continually wavering betveen enthusiasm and anxiety. Henry will be slow in realising the indifference or even hostility of Nature to Man. Meanwhile he constantly appeals to her for comfort, proofs and justifications of his behaviour: "This landscape gave him assurance. A fair field holding life. It was the religion of peace. [...] He conceived Nature to be a woman with a deep aversion to tragedy." (49) Yet even if Nature "is of his mind" or "gives him a sign", there is no affinity, let alone complicity, between Man and Nature. Whatever Henry learns from Nature at one moment is contradicted in the next by the selfsame Nature (cf. the squirrel and gleaming fish episode 50). Much as he should like to become a prophet of that "religion of peace" that Nature apparently advocates, Henry is forced to admit that he's a victim of false appearances -Nature does not care about Man's fate: "As he gazed around him the youth felt in a flash of astonishment at the blue, pure sky and the sun gleamings on the trees and fields. It was surprising that Nature had gone tranquilly on with her golden process in the midst of so much devilment" (40) This astonishing revelation culminates in Henry's realization of his own littleness and insignificance: "New eyes were given to him. And the most startling thing was to learn suddenly that he was sery insignificant" (107). Nature can do without man even if the opposite isn't true; this startling discovery is most vividly expressed by one of the characters in "The Open Boat" (An Omnibus 439): "When it occurs to a man that nature does not regard him as important, and that she feels she would not maim the universe by disposing of him, he at first wishes to throw bricks at the temple, and he hates deeply the fact that there are no bricks and no temples." Man is thus made the plaything of Nature; his actions are performed under the pressure of the natural environment and the physical forces surrounding him. Nature is no veiled entity holding any revelation for man; she does not manifest any divine Presence. In its midst man is just a pawn to multiple compulsions; this aspect justifies the reference to naturalism (a point we will discuss later on) which has been made in connection with Crane's novel. 
3) THE THEME OF THE SACRED Though the protagonist never seeks comfort in religion, the RBC can't be read outside a religious tradition. The narrative abounding in Biblical references and religious symbols describes the sufferings of an unquiet soul assailed by the classic doubt of the Calvinist: "Am I really among the elect ?" That question is of course transposed in different terms but one is struck by the parallelism between Henry's situation and his Puritan forebears': "He felt that he was regarding a procession of chosen beings. The separation was as great to him as if they had marched with weapons of flame and banners of sunlight. He could never be like them. He could have wept in his longings." (67) This similarity is also reinforced by the fact that, since the Middle Ages, the spiritual life of man has often been likened to a pilgrimage or a battle ("[...] a man becomes another thing in a battle. He saw his salvation in such a change". 29 / "[...] with a long wailful cry the dilapidated regiment surged forward and began its new journey !" 113) As is seen in the last quotation the progress of the group parallels that of the solitary hero, H. Fleming. The RBC presents a new version of the Fall of man entrapped by his curiosity and vanity and a new process of Election through suffering and wounds. Henry's moral debate also points up his religious upbringing. His conscience is racked by remorse and he considers himself as a murderer because he deserted his friends and thus holds himself responsible for the death of his comrades: "His mind pictured the soldiers who would place their defiant bodies before the spear of the yelling battle fiend and as he saw their dripping corpses on an imagined field he said that he was their murderer." 71 Henry awaits retribution because if he sometimes finds some comfort in the fact that he did his misdeed in the dark ("He had performed his mistakes in the dark, so he was still a man." 91), he knows in his heart of hearts that all his actions are performed under the gaze of a dreadful God whose attention nothing escapes. This supreme Judge is symbolized by the haunting memory of Henry's mother ("I don't want yeh to ever do anything, Henry that yeh would be ashamed to let me know about. Jest think as if I was a-watching yeh." 6) and the collective consciousness of the regiment ("an encounter with the eyes of judges" 91). Henry's experiences are essentially expressed in terms of vision and one might say quoting Emerson that in the RBC the hero undergoes "a general education of the eye"/"I". The novel traces the protagonist's evolution from a certain way of world-watching to a more active stance; the visions of a naive, innocent eye will be replaced by those of a more conscious eye. Henry goes to war out of curiosity, because in the author's own words "he ha[s] longed to see it all." (4) and he is stirred by visions of himself in heroic poses. The motif of the "eye" is stressed from the outset; the opening description of the novel sets going a train of images that will run through the narrative and stresses a fundamental opposition between "seeing" and "being seen"; the active and passive poles: "From across the river the red eyes were still peering." ( 14) "The mournful current moved slowly on, and from the water, shaded black, some white bubble eyes looked at the men." ( 23) Henry is at first some sort of onlooker. 
After the short encounter with the enemy, he sees himself in a new light (41) and comes to believe he is endowed with superior powers of perception, naturally translated into visual terms: "There was but one pair of eyes in the corps" (25)/"This would demonstrate that he was indeed a seer." (70) This accounts for the prophetic rôle which he believes to be his. In the chapel-like forest the perspective changes; there is a reversal of situations: the onlooker is now looked at by somebody else: "He was being looked at by a dead man" (50). From now on Henry will have to face his comrades' probing eyes: "In imagination he felt the scrutiny of his companions as he painfully laboured through some lies" (68) / "Wherever he went in camp, he would encounter insolent and lingeringly-cruel stares." (72) Quite characteristically, Henry puts all his fears out of sight and indulges in daydreaming and gratifying pictures of himself (69) where he is seen to advantage: "Swift pictures of himself, apart, yet in himself, came to him. [...] a blue determined figure [...] getting calmly killed on a high place before the eyes of all." (68) Henry's hour of glory comes after his capture of the flag; he is then the cynosure of every eye: "[...] they seemed all to be engaged in staring with astonishment at him. They had become spectators." (102); "He lay and basked in the occasional stares of his comrades" (103). The roles are now reversed; the spectator has himself become an object of admiration; Henry is now looked upon as a hero. By going through fire and water, Henry gains a new awareness and undergoes a radical change which is of course expressed in a new angle of vision: "New eyes were given to him. And the most startling thing was to learn suddenly that he was very insignificant" (106). The youth develops a new way of observing reality; he becomes a spectator again but can see everything in the right perspective, in its true light: "From this present view point he was enabled to look upon them (his deeds, failures, achievements) in spectator fashion and to criticise them with some correctness, for his new condition had already defeated certain sympathies" (137). So the wheel has come full circle: "He found that he could look back upon the brass and bombast of his earlier gospels and see them truly." (139) Nevertheless the hero remains in the end as susceptible as ever to the lure of images, and the circular structure of the novel is brought out by the last vision Henry deludes himself with: "He turned now with a lover's thirst to images of tranquil skies, fresh meadows, cool brooks -an existence of soft and eternal peace."
IMAGES AND SYMBOLS IN THE RED BADGE OF COURAGE
One of the most original facets of Crane's war novel is its consistent use of manifold images and symbols which sometimes give this story the style of a prose poem. The overall tonality of the novel owes much of its uniqueness to the patterns of natural, religious or even mechanistic imagery created by the author. Our study will be limited to these three sets of images.
Nature and animal images
The protagonist's rural background is made manifest by the impressive number of scenes or similes referring to animals and forming a conventional bestiary by the side of a Christian demonology swarming with monsters directly borrowed from Greek or Biblical literatures. The RBC evinces a truly apocalyptic quality in the description of War, which is compared to "a red animal" (75) "gulping [thousands of men] into its infernal mouth" (45).
The two armies are, more often than not, associated with images of "monster" (14/33), "serpent" (15) and "dragons" (44); their encounter is likened to that of two panthers ("He conceived the two armies to be at each other panther fashion." 51). The panther and the eagle seem to be the only two animals suggestive of some war-like qualities; the other animals appearing in the novel are more tame (mainly farm animals) and as a general rule carry different connotations: the soldiers are led into battle like "sheep" (108-111), "they are chased around like cats" (98), to protect themselves from the bullets "they dig at the ground like terriers" (26) and after charging like "terrified buffaloes" (72) they are eventually "killed like pigs" (25). The hero's lack of courageand staturein the face of danger is suggested by comparisons referring to even smaller animals: the rabbit (43), the chicken (44) and the worm, the most despicable of all ( "He would truly be a worm if any of his comrades should see him returning thus, the marks of his flight upon him.", 68). Henry's companions are also likened to animals: Wilson to a "coon" (29) and then to a "panther" (134); Jim Conklin is once associated to the "lamb" during the death scene -a fitting symbol of peacefulness for the most philosophical of characters ("[...] the bloody and grim figure with its lamblike eyes" 55). Most of these images illustrate the fact that war brings out the most primitive impulses and instincts in man and reduces him to the level of a beast ("he [the youth] had been an animal blistered and sweating in the heat and pain of war." 140) or again, (103) "he had been a barbarian, a beast". In the midst of the fight Henry experiences "an animal like rebellion" (48) and in a regressive response to fear and shame he develops "teeth and claws" to save his skin. Death is even described as an animal lurking within the soldiers; when Jim Conklin dies "it was as if an animal was within and was kicking and tumbling furiously to be free." (61) The flag, of all things pertaining to the army, is the only one to be endowed with positive connotations; it is evocative of a "bird" (40), of a "woman" (113) and of a "craved treasure of mythology" (133). The same feminine qualities are sometimes attributed to Nature but this is not consistent throughout the novel as we've already seen. The main characteristic of Nature whether in the shape of the Sun or the Sky, the Forest or the Fields is indifference pure and simple. The only revelation in store for Henry when he yields to Nature's "beckoning signs" is that of a "charnel place" (85). Religious imagery As a P. K. (preacher's kid) brought up in a religious atmosphere, Stephen Crane commanded an impressive stock of religious and Biblical references. As J. Cazemajou points out in his short but thought-provoking study of Stephen Crane: He was deeply conscious of man's littleness and of God's overbearing power. Man's wandering on the earth were pictured by him as those of a lonely pilgrim in a pathless universe. Crane's phraseology comes directly from the Bible, the sermons, and the hymns whioh had shaped his language during his youth. The topography of his stories where hills, mountains, rivers, and meadows appear under symbolie suns or moons is, to a large extent, an abstraction fraught with religious or moral significance. [...] In Crane's best work the imagery of the journey of initiation occupies a central position and reaches a climactic stage with some experience of conversion. 
He did not accept, it is true, the traditional interpretation of the riddle of the universe offered by the Methodist church. Nevertheless he constantly used a Christian terminology, and the thoughts of sin inspired his characters with guilty fears and stirred up within them such frequent debates with a troubled conscience that it is impossible to study his achievement outside a religious tradition (37) This goes a long way toward explaining the following images: in terms of shooting (camera-work) e.g. after the panning shot setting the stage for future action (beginning of the novel) we have a close-up (Henry is picked out in the mass of soldiers p. 2: "There was a youthful private, etc.") and then a flashback when the story shifts from the present to an earlier part of the story (parting scene with Henry's mother, 4). Numerous examples are to be found throughout the narrative. The above remarks refer to the narrator's point of view but a second point of view functions in the narrative: the author's (four points of view interact in a novel: the author's, the narrator's, the character's or characters" and last but not least the reader's; their importance varies from one novel to another). The author's point of view manifests itself in the use of irony which constantly betrays an ambiguous presence. Who are the following appreciations ascribable to ?: "The music of the trampling feet, the sharp voices, the clanking arms of the column near him made him soar on the red wings of war. For a few moments he was sublime." ( 68) "He drew back his lips and whistled through his teeth when his fingers came in contact with the splashed blood and the rare wound," ( "And for this he took unto himself considerable credit. It was a generous thing," (93) As one of Crane critics rightly points out: "Irony is not an ideal but a tactic. It's a way of taking the world slantwise, on the flank" (Cady, 90) in order to better expose its weaknesses and its flaws. Crane's irony was the basis of his technique; he levelled it at all abstract conventions and pomposities; at God, then at Man and at the Nation. There are of course, several types of irony but according to M. H. Abrams (A Glossary of Literary Terms): "In most of the diverse critical uses of the term "irony", there remains the root sense of dissimulation, or of a difference between what is asserted and what is actually the case". The duplicity of meaning is the central feature of irony The RBC sometimes exhibits structural irony i.e. uses a naive hero or narrator "whose invincible simplicity leads him to persist in putting an interpretation on affairs which the knowing reader-who penetrates to and shares, the implicit point of view of the authorial presence behind the naive person-just as persistently is able to alter and correct." Cf. page 64: "Yeh look pretty peek-ed yerself," said the tattered man at last. "I bet yeh "ve got a worser one than yeh think. Ye'd better take keer of yer burt. It don't do t'let sech things go. It might be inside mostly, an" them plays thunder. Where is it located ?" The reader knows at this stage that the hero has escaped unscathed and does not deserve such eager care. Cosmic irony is naturally at work in the RBC; it involves a situation "in which God, destiny, or the universal process, is represented as though deliberately manipulating events to frustrate and mock the protagonist." p. 83). Cf. 47: The youth cringed as if discovered in a crime. By heavens, they had won after all ! 
The imbecile line had remained and become victors. He could hear the cheering [...] He turned away amazed and angry. He felt that he had been wronged. [...] It seemed that the blind ignorance and stupidity of those little pieces had betrayed him. He had been overturned and crushed by their lack of sense in holding the position, when intelligent deliberation would have convinced them that it was impossible.
Irony is inherent in the very structure of the narrative since at the end of the story, after three days' battle, the army must recross the river from where the attack was launched at the beginning. The whole tumult has resulted in no gain of ground for the Union forces: "'I bet we're goin't'git along out of this an'back over th'river,' said he." (p. 136) As is customary with Crane, man is involved in an absurd and tragic situation which highlights his insignificance and the ridiculousness of his efforts.
Impressionism
Although J. Conrad defined criticism as "very much a matter of vocabulary very consciously used", one is hard put to it to give a satisfactory definition of the terms "impressionism" and "impressionistic". It's nevertheless a fact that the epithet "impressionistic" was often applied to Crane's work in the author's lifetime. The term comes from the French impressionist painters who rebelled against the conventions and the official conservatism of their time. They determined to paint things as they appeared to the painter at the moment, not as they are commonly thought or known to be. Such a tendency was of course at variance with the tenets of realism, for realism demanded responsibility to the common view whereas impressionism demanded responsibility only to what the unique eye of the painter saw. As one critic pointed out, "the emphasis on the importance of vision is perhaps the only common denominator between this style of painting and Crane's art" (Cady). The credo of literary impressionists is, if we are to believe Conrad, "by the power of the written word to make you hear, to make you feel -it is, before all, to make you see." The best account of the so-called impressionistic technique of the author has been given by the critic Robert W. Stallman, whom I'll quote extensively: Crane's style is prose pointillism. It is composed of disconnected images, which coalesce like the blobs of colour in French impressionist paintings, every word-group having a cross-reference relationship, every seemingly disconnected detail having interrelationship to the configurated whole. The intensity of a Crane work is owing to this patterned coalescence of disconnected things, everything at once fluid and precise. A striking analogy is established between Crane's use of colors and the method employed by the impressionists and the neo-impressionists or divisionists, and it is as if he had known about their theory of contrasts and had composed his own prose paintings by the same principle. [...] Crane's perspectives, almost without exception, are fashioned by contrasts: black masses juxtaposed against brightness, colored light set against gray mists. At dawn the army glows with a purple hue, and "In the eastern sky there was a yellow patch like a rug laid for the feet of the coming sun; and against it, black and pattern-like, loomed the gigantic figure of the colonel on a gigantic horse" (239). (An Omnibus, pp. 185-86). Crane's striving after effects of light and shade and his concern for the effect of light on color are evidenced by the way he uses adjectives (or nouns used adjectivally).
Crane actually paints with words and adjectives just as artists paint with pigments and this brings about most astounding effects and associations. Thus "red", "blue", "black" and "yellow" are the dominant hues of the novel: the regiment is "a vast blue demonstration" (8); the youth must confront "the red formidable difficulties of war" (41); he labours under "the black weight of his woe" (67) or is carried away "on the red wings of war" (68) then he walks "into the purple darkness" (76) before traversing "red regions" (134) resonant with "crimson oaths" (138). Some descriptions are saturated with visual impressions: There was a great gleaming of yellow and patent leather about the saddle and bridle, The quiet man astride looked mouse-colored upon such a splendid charger. ( 46) The clouds were tinged an earthlike yellow in the sunrays and in the shadow were a sorry blue. The flag was sometimes eaten and lost in this mass of vapour, but more often it projected sun-touched, resplendent. ( 42) The reader also comes across a few examples of synaesthesia (association of sensations of different kinds; description of one kind of sensation in terms of another; color is attributed to sounds, odor to colors, sound to odors, and so on): "shells explodins redly" (31)/"a crimson roar" (51)/"red cheers" (53) Pathetic fallacy Another feature of Crane's stylea possible outgrowth of his interest in visionis the use of pathetic fallacy or ascription of human traits to inanimate nature. There's in Crane's work a strong tendency towards personification or anthropomorphism. The narrative teems with notations giving a dynamising, anthropomorphic quality to the object described. Here are a few telling examples: "The guns squatted in a row like savage chiefs. They argued with abrupt violence." (40) "The cannon with their noses poked slantingly at the ground grunted and grumbled like stout men, brave but with objections to hurry." (46) "Trees confronting him, stretched out their arms and forbade him to pass" (52) "Some lazy and ignorant smoke curled slowly." (118) The youth listens to "the courageous words of the artillery and the spiteful sentences of the musketry" (53); the waggons are compared to "fat sheep" (66) and the bugles "call to each other like brazen gamecocks." (86). Crane rarely aims at giving the reader a direct transcript of experience; reality is always pervaded by a strong subjective coloring which gives it unexpected and sometimes disquieting dimensions. Realism and naturalism There now remains for us to examine, however briefly, the realistic and naturalistic interpretations of Crane's work. Realism is perhaps of all the terms applied to Crane, the most suitable. In a letter to the editor of Leslie's Weekly Crane wrote that: "I decided that the nearer a writer gets to life the greater he becomes as an artist, and most of my prose writings have been toward the goal partially described by that misunderstood and abused word, realism." What is meant by realism ? According to M. H. Abrams it is a special literary manner aiming at giving: "the illusion that it reflects life as it seems to the common reader. [...] The realist, in other words, is deliberately selective in his material and prefers the average, the commonplace, and the everyday over the rarer aspects of the contemporary scene. 
His characters, therefore, are usually of the middle class or (less frequently) the working-class people without highly exceptional endowments, who live through ordinary experiences of childhood, adolescence, love, marriage, parenthood, infidelity, and death; who find life rather dull and often unhappy, though it may be brightened by touches of beauty and joy; but who may, under special circumstances, display something akin to heroism." The RBC is obviously not consistently realistic, yet one cannot read far into it without discovering a few distinctly realistic touches, whether in the depiction of life in the army with its constant drilling and reviewing or in the emphasis on the commonplace and unglamorous actions. Realism is also to be found in the style of the book when S. Crane reproduces the language heard at the time with its numerous elisions and colloquial or even corrupt expressions: "'What's up, Jim?' 'Th' army's goin' t'move.' 'Ah, what yeh talkin' about ? How yeh know it is ?' 'Well, yeh kin b'lieve me er not, jest as yeh like. I don't care a hang.'" (2) This is quite different from the style used for the narrative and the descriptions; this second style is characterized by its original figures of speech, its use of adjectives and its impressionistic technique (see examples in the above section). In fact, Crane's work manages a transition from traditional realism (with the extended and massive specification of detail with which the realist seeks to impose upon one an illusion of life) to an increasingly psychological realism. The emphasis on detail is characteristic of the realistic method of writing, as was demonstrated by R. Jakobson in Fundamentals of Language (77-78): "Following the path of contiguous relationships, the realistic author metonymically digresses from the plot to atmosphere and from the characters to the setting in space and time." Crane's originality, however, lies in the fact that he held realism to be a matter of seeing, a question of vision. The novelist, he wrote, "must be true to himself and to things as he sees them." What about Crane's alleged naturalism ? Naturalism is "a mode of fiction that was developed by a school of writers in accordance with a special philosophical thesis. This thesis, a product of post-Darwinian biology in the mid-nineteenth century, held that man belongs entirely in the order of nature and does not have a soul or any other connection with a religious or spiritual world beyond nature; that man is therefore merely a higher-order animal whose character and fortunes are determined by two kinds of natural forces, heredity and environment." (Abrams, 142) True, Crane makes use of animal imagery (especially to describe Henry Fleming's panic syndrome) but he ignores the last two factors mentioned in the above definition. The only naturalistic element studied by Crane is aggressive behaviour and a certain form of primitiveness, but his utilization of such material is quite different from Zola's: Crane's naturalism is more descriptive than illustrative and conjures up a moral landscape where the preternatural prevails. The least one can say at the close of this survey of the many and various responses The RBC has aroused in critics is that the novel is much more complex than one would imagine at first glance.
Such a multiplicity of widely diverging critical appreciations points to one obvious conclusion, namely, that "in art, in life, in thought, [Crane] remained an experimenter, a seeker of rare, wonderful gifts, an apprentice sorcerer." (Cady, 46).
- Q/W/E/R/T/Y, n° 4, octobre 1994 (PU Pau)
BIBLIOGRAPHY
[...] The following diagram sums up this set of options: ÉVENTUALITÉ (possibility) leads either to PASSAGE À L'ACTE (actualization), which ends in ACHÈVEMENT (achievement) or INACHÈVEMENT (non-achievement), or to NON-PASSAGE À L'ACTE (non-actualization). Thus, for any temporal segment whatsoever (an event, a relation, a behaviour, etc.) to be given in extenso in a narrative, it is necessary and sufficient that the modalities of its origin, of its development and of its completion be given. Moreover, it is an oriented process, a virtuality which becomes actualized and tends towards a certain end known in advance. The elementary sequence, which reproduces this process, will typically be articulated in three main moments, each giving rise to an alternative: 1° a situation which opens up the possibility of a behaviour or of an event (provided that this virtuality is actualized); [...] A well-known example of allegory is Bunyan's The Pilgrim's Progress (1678) which personifies such abstract entities as virtues, vices, states of mind, and types of character. This book allegorizes the doctrines of Christian salvation by telling how Christian, warned by Evangelist, flees the City of Destruction and makes his way laboriously to the Celestial City; on his way he encounters such characters as Faithful, Hopeful and the Giant Despair and passes through places like the Slough of Despond, the Valley of the Shadow of Death and Vanity Fair." monger of the introductory chapter turn into the tragic, poignant sacrificial victim of Chapters 8-9. Two critics, Robert W. Stallman and Daniel Hoffman, contend that Jim Conklin, whose initials are the same as those of Jesus Christ, is intended to represent the Saviour, whose death in a symbolic novel richly laden with Christian references somehow redeems Henry, the sinner. The key to the religious symbolism of the whole novel is, according to the former, the reference to "the red sun pasted in the sky like a wafer" (62). For Stallman, the "wafer" means the host, the round piece of bread used in the celebration of the Eucharist, and thus Jim Conklin becomes a Christ-figure. The likeness is reinforced by the fact that Jim's wounds conjure up the vision of the stigmata; he is wounded in the side ("the side looked as if it had been chewed by wolves," 62) and his hands are bloody ("His spare figure was erect; his bloody hands were quiet at his side," Ibid.). During the solemn ceremony of his death agony, Henry "partakes of the sacramental blood and body of Christ, and the process of his spiritual rebirth begins at the moment when the waferlike sun appears in the sky. It is a symbol of salvation through death" (An Omnibus, 200). It goes without saying that such a highly personal Christian-symbolist reading of the novel is far from being unanimously accepted by other Crane scholars. Edwin H. Cady, among others, regards Conklin as a mere "representation of the sacrificed soldier [...] occupying in the novel a place equivalent to that of the Unknown Soldier in the national pantheon." (140). His death simply provides Crane with a new occasion to expose the savagery of war and furnishes a dramatic counterpoint to Henry's absorption in his own personal problems.
- CADY, Edwin H. Stephen Crane. New Haven: College & University Press, 1962.
- CAZEMAJOU, Jean. Stephen Crane.
University of Minnesota Pamphlets on American Writers, n° 76, Minneapolis, 1969.
- STALLMAN, Robert W. Stephen Crane: An Omnibus. New York: Alfred A. Knopf, 1970.
- GONNAUD, M., J. M. Santraud et J. Cazemajou. Stephen Crane: Maggie, A Girl of the Streets & The Red Badge of Courage. Paris: Armand Colin, 1969.
- PROFILS AMERICAINS: Les Écrivains américains face à la Guerre de Sécession, n° 3, 1992, CERCLA, Université P. Valéry, Montpellier III.
- Nature = Church
The narrator mentions "the cathedral light of the forest" (26) then its "aisles" (27); "a mystic gloom" (14) enshrouds the landscape; the trees begin "softly to sing a hymn of twilight" (51); the hush after the battle becomes "solemn and churchlike" (127) and "the high arching boughs made a chapel [...] where there was a religious half-light" (50).
- Henry's experiences = a spiritual itinerary
Henry is "bowed down by the weight of a great problem" (14); later he is crushed by "the black weight of his woe" (67). The change that war is going to bring about in him is tantamount to salvation ("He saw his salvation in such a change" 27). The reader often comes across such telling words as: "revelation" (43); "guilt" (139); "sin" (Ibid.); "retribution" (91); "letters of guilt" (57); "shame" (64); "crime" (65); "ceremony" (61); "chosen beings" (67); "prophet" (70) and so on.
- War = A blood-swollen God
"The war god" (45) is one of the central figures in the RBC and the soldiers often appear as "devotee(s) of a mad religion, blood-sucking, muscle-wrenching, bone-crushing" (60). The list could be lengthened at will; it is however abundantly clear that the religious vein is one of the most important in the novel. It is worth noting, however, that there is no revelation in The RBC, no "epiphany" in its usual sense of a manifestation of God's presence in the world.
Mechanistic images
The RBC is a faithful, though oblique, reflection of the era in which it was written; it expresses certain doubts about the meaning of individual virtue in a world that has suddenly become cruel and mechanical. The Civil War brought about in both armies "a substitution of mechanical soldierly efficiency for an imaginary chivalric prowess" (Crews, XVIII). This evolution is made manifest by the numerous mechanistic images interspersed in the narrative; there is first the image of the "box" (23), then the "din of battle" is compared to "the roar of an oncoming train" (29); the enemy is described in terms of "machines of steel. It was very gloomy struggling against such affairs, wound up perhaps to fight until sundown" (43). One sometimes gets the impression that war is described as if it were a huge factory; this is apparent on page 127: "[...] an interminable roar developed. To those in the midst of it it became a din fitted to the universe. It was the whirring and thumping of gigantic machinery." Or again (p. 53): "The battle was like the grinding of an immense and terrible machine to him. Its complexities and powers, its grim processes, fascinated him. He must go close and see it produce corpses." Even the bullets buff into the soldiers "with serene regularity, as if controlled by a schedule" (117). In "the furnace roar of battle" men are turned into "methodical idiots ! Machine-like fools !" (45) and the regiment, after the first flush of enthusiasm, is like "a machine run down" (116). All these images contribute to a thorough debunking of war, which no longer appears kind and lovely.
STYLE AND TECHNIQUE
The RBC is notable for its bold innovations in technique and the fact that its method is all and none. Never before had a war tale been told in this way; as a consequence, Crane's art has stirred up endless debates about whether the author was a realist, a naturalist or an impressionist. All possible labels have, at one time or another, been stamped upon Crane as an artist; we'll use them as so many valuable clues in appraising Crane's style and technique.
Point of view
According to the famous critic Percy Lubbock (The Craft of Fiction), "point of view" means "the relation in which the narrator stands to the story". In the RBC, the narrator's point of view is through Fleming's eyes, i.e. the reader sees what and as Henry does, though he is never invited to identify wholly with him. In fact, the narrative in the novel is conducted in a paradoxical way because, if the voice we hear is that of some sort of third-person objective narrator, the point of view is located at almost the same place as if this were a first-person narrative: just behind the eyes of Henry Fleming. The description of the hut on page 3 is a case in point: "He lay down on a wide bunk that stretched across the end of the room. In the other end, cracker boxes were made to serve as furniture. They were grouped about the fireplace. A picture from an illustrated weekly was upon the log walls and three rifles were paralleled on pegs." The scene is described as if Henry were taking it in, whereas the panorama depicted in the opening section of the novel is attributable to a detached narrator standing in a godlike position above the landscape: "The cold passed reluctantly from the earth, and the retiring fogs revealed an army stretched out on the hills, resting. As the landscape changed from brown to green, the army awakened, and began to tremble with eagerness at the noise of rumors... etc." (1) The reader, however, is not just shown things from the outside as they impinge upon the senses of an unknown observer; he sometimes goes "behind the scenes" and even penetrates into the inner life of the protagonist: "A little panic-fear grew in his mind. As his imagination went forward to a fight, he saw hideous possibilities. He contemplated the lurking menaces of the future, and failed in an effort to see himself standing stoutly in the midst of them. He recalled his visions of broken-bladed glory, but in the shadow of the impending tumult he suspected them to be impossible pictures." (9) Here the character is interiorized; the author allows himself the liberty of knowing and revealing what the hero is thinking and feeling. This is quite in keeping with the author's intention to study the reactions of "a nervous system under fire". Another interesting and arresting aspect of Crane's utilization of point of view is his "cinematic" technique. Crane sometimes handles point of view as if it were a movie camera, thus anticipating the devices of this new art form. A certain number of particular effects can be best described
79,125
[ "17905" ]
[ "178707", "420086" ]
00175917
en
[ "shs", "sde" ]
2024/03/05 22:32:10
2006
https://shs.hal.science/halshs-00175917/file/Flachaire_Hollard_05.pdf
Controlling starting-point bias in double-bounded contingent valuation surveys
Emmanuel Flachaire, Guillaume Hollard, Jason Shogren, Stéphane Luchini
Keywords: starting point bias, contingent valuation. JEL Classification: Q26, C81
In this paper, we study starting point bias in double-bounded contingent valuation surveys. This phenomenon arises in applications that use multiple valuation questions. Indeed, the response to follow-up valuation questions may be influenced by the bid proposed in the initial valuation question. Previous research has been conducted in order to control for such an effect. However, these studies find that efficiency gains are lost when we control for undesirable response effects, relative to a single dichotomous choice question. Contrary to these results, we propose a way to control for starting point bias in double-bounded questions with gains in efficiency.
Introduction
There exist several ways to elicit individuals' willingness to pay for a given object or policy. Contingent valuation, or CV, is a survey-based method to measure nonmarket values; among the important literature, see [START_REF] Mitchell | Using Surveys to Value Public Goods : The contingent Valuation Method[END_REF], [START_REF] Hausman | Contingent valuation : A critical assessment[END_REF], [START_REF] Bateman | Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method in the US, EU, and Developing Countries[END_REF]. To elicit the individual maximum willingness to pay, participants are given a scenario that describes a policy to be implemented. They are then asked to report the amount they are ready to pay for it. In order to elicit WTPs, the use of the discrete choice format in contingent valuation surveys is strongly recommended by the work of the NOAA panel [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]. It consists of asking a bid to the respondent with a question like "if it costs $x to obtain . . . , would you be willing to pay that amount?" Indeed, one advantage of the discrete choice format is that it mimics the decision-making task that individuals face in everyday life, since the respondent accepts or refuses the bid proposed. However, one drawback of this format is that it leads to a qualitative dependent variable (the respondent answers yes or no) which reveals little about individuals' WTP. In order to gather more information on respondents' WTP, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation[END_REF] proposed to add a follow-up discrete choice question to improve the efficiency of discrete choice questionnaires. This mechanism is known as the double-bounded model. It basically consists of asking a second bid to the respondent, greater than the first bid if the respondent answers yes to the first bid and lower otherwise. A key disadvantage of the double-bounded model is that subjects' responses to the second bid may be influenced by the first bid proposed. This is the so-called starting-point bias. Several studies document that iterative question formats produce anomalies in respondent behavior. Empirical results show that inconsistent results may appear, that is, the mean WTP may differ significantly if it is implied by the first question only or by the follow-up question.
Different interpretations have been proposed -the first bid can be interpreted as an anchor, a reference point or as providing information about the cost -as well as different models to control for these anomalies (see [START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF][START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF][START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF]). However, these studies suggest that when we control for such undesirable response effects, efficiency gains can be lost relative to a single dichotomous choice question. At the moment, it is still difficult to control for such effects in an effective manner. The purpose of this paper is to address this issue. We present and compare different models previously proposed in the literature. We also develop new econometric models that combine the main features of existing models. Our empirical results provide strong evidence that we can obtain a gain in efficiency by taking into account the follow-up question. They give a better understanding of how subjects form their responses to the payment questions. The paper is organized as follows. In section 2, we review the econometric models proposed in the literature and we propose new models. In section 3, we compare these different models with an application. Conclusions are drawn in section 4.
Econometric models
In this section, we review different models proposed in the literature to control for the anchoring effect, shift effect and framing effect. Then, we propose new models that combine all these effects. Let us first consider W_{0i}, the true willingness to pay of individual i, which is defined as follows:
W_{0i} = x_i(β) + u_i,   u_i ∼ N(0, σ²)   (1)
where the unknown parameters β and σ² are respectively a k × 1 vector and a scalar, and x_i is a non-linear function depending on k independent explanatory variables. The number of observations is equal to n and the error terms u_i are normally distributed with mean zero and variance σ². The willingness to pay (WTP) of respondent i is not observed, but his answer to a bid b_i is. The subject's answers are defined as
r_i = 1 if W_{0i} > b_i and r_i = 0 if W_{0i} ≤ b_i   (2)
where r_i = 1 if respondent i answers yes to the first question and r_i = 0 if respondent i answers no to the first question. The double bounded model, proposed by [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation[END_REF], consists of asking a second bid (follow-up question) to the respondent. If respondent i answers yes to the first bid b_{1i}, the second bid b_{2i} is higher; it is lower otherwise. The standard procedure, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation[END_REF], assumes that respondents' WTPs are independent of the bids and deals with the second response in the same manner as the first discrete choice question:
W_{1i} = W_{0i} and W_{2i} = W_{0i}.   (3)
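The response structure in equations (1)-(3) is easy to see in simulation. The sketch below is illustrative only: it assumes a linear index X β for x_i(β), and the follow-up bid is obtained by scaling the first bid up or down by a fixed factor, which is an assumption of ours and not necessarily the bid design used in actual surveys.

```python
import numpy as np

def simulate_double_bounded(X, beta, sigma, bid1, scale=0.5, seed=0):
    """Simulate answers under the standard double-bounded model, eq. (1)-(3):
    a single latent WTP drives both the first and the second response."""
    rng = np.random.default_rng(seed)
    wtp = X @ beta + rng.normal(0.0, sigma, size=X.shape[0])    # eq. (1)
    r1 = (wtp > bid1).astype(int)                               # eq. (2)
    bid2 = np.where(r1 == 1, bid1 * (1 + scale), bid1 * (1 - scale))
    r2 = (wtp > bid2).astype(int)                               # eq. (3): same WTP
    return r1, r2, bid2
```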
An individual answers yes to the first bid if W_{1i} > b_{1i} and yes to the second bid if W_{2i} > b_{2i}. Thus, the double bounded model assumes that the same random utility model generates both responses to the first and the second bid. However, the introduction of follow-up questioning can generate inconsistency between answers to the second and first bids. To deal with inconsistency of responses, several models have been proposed in the literature.
Anchoring effect
The approach of Herriges and Shogren (1996) considers a model in which the follow-up question can modify the willingness to pay. According to them, respondents combine their prior WTP with the value provided by the first bid; this anchoring effect is then defined as follows:
W_{1i} = W_{0i} and W_{2i} = (1 - γ) W_{1i} + γ b_{1i}   (4)
where 0 ≤ γ ≤ 1. An individual answers yes to the first bid if W_{1i} > b_{1i} and yes to the second bid if W_{2i} > b_{2i}. From (4), it follows that
r_{1i} = 1 ⇔ W_{0i} > b_{1i} and r_{2i} = 1 ⇔ W_{2i} > b_{2i}   (5)
The economic interpretation is rather simple. Individuals are supposed to adjust their initial WTP by taking a weighted average of this WTP and the proposed amount. Thus, γ measures the importance of anchoring. It ranges from γ = 0, which means that no anchoring is at work, to γ = 1, which means that subjects ignore their prior WTP and replace it with the proposed bid. This model is thus a simple and efficient way to test the importance of anchoring. The stronger the anchoring effect, the less information is provided by the follow-up question.
A more general model
This last model assumes that only the follow-up question gives rise to anchoring effects and that only the first bid has an influence on the second answer. These last two hypotheses are quite restrictive, and we can show that the model is still valid if we consider a more general anchoring effect, that is, both bids can influence subjects' responses. Let us assume that individuals can combine their prior WTP with the values provided by the current and past bid offers. It leads us to consider the following model:
W_{1i} = (1 - γ) W_{0i} + γ b_{1i} and W_{2i} = (1 - δ) W_{1i} + δ b_{2i}   (6)
where 0 ≤ γ ≤ 1 and 0 ≤ δ ≤ 1. An individual answers yes to the first bid offer if:
r_{1i} = 1 ⇔ W_{1i} > b_{1i} ⇔ (1 - γ) W_{0i} + γ b_{1i} > b_{1i} ⇔ W_{0i} > b_{1i}   (7)
This last condition suggests that a potential anchoring effect of the first bid offer does not influence the subject's response to the initial question. An individual answers yes to the second bid offer if:
r_{2i} = 1 ⇔ W_{2i} > b_{2i} ⇔ (1 - δ) W_{1i} + δ b_{2i} > b_{2i} ⇔ W_{1i} > b_{2i}   (8)
This last condition suggests that a potential anchoring effect of the second bid offer does not influence the subject's response to the follow-up question. Moreover, we can see that the first bid offer can influence the second answer, because W_{1i} is a combination of the prior WTP and of the value provided by the first bid. Finally, these results show that the current bid offer can have an impact on the WTP but does not affect the subject's responses. Only the first bid offer can influence the answer to the follow-up question. It follows that the parameter δ cannot be estimated. This suggests the remarkable conclusion that when we use the model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] in practice, we can assume a potential anchoring effect of both bids.
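Under the anchoring model (4)-(5), the probability of a yes to the follow-up bid has a simple closed form: the second answer compares W_{2i} = (1 - γ)W_{0i} + γ b_{1i} with b_{2i}, which is equivalent to comparing W_{0i} with (b_{2i} - γ b_{1i})/(1 - γ). The helper below is a minimal illustration of that algebra; the function and argument names are ours, not taken from the paper.

```python
from scipy.stats import norm

def p_yes_followup_anchoring(x_beta, sigma, gamma, b1, b2):
    """P(r2 = 1) under the Herriges-Shogren anchoring model, eq. (4)-(5).
    x_beta is the deterministic part x_i(beta) of the prior WTP."""
    # W2 > b2  <=>  W0 > (b2 - gamma*b1) / (1 - gamma), for 0 <= gamma < 1
    threshold = (b2 - gamma * b1) / (1.0 - gamma)
    return norm.cdf((x_beta - threshold) / sigma)
```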
Shift effect
Alberini, Kanninen, and [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] assume that the proposition of two bids may cause the individual WTP amount to shift systematically between the two responses. Thus, the two answers are not based on the same WTP and this can explain inconsistency of responses. A shift in WTP is defined as follows:
W_{1i} = W_{0i} and W_{2i} = W_{1i} + δ   (9)
where the parameter δ represents the structural shift. Such a model is inspired by the following intuition. The first bid may be interpreted as providing information about the cost of the object. Thus, an individual who accepts the first bid offer may understand the second bid as a proposition to pay an additional amount for the same object. It follows that this individual may cut down their answer to take that phenomenon into account. Symmetrically, when an individual rejects the first bid offer, the follow-up question could be interpreted as a proposition for a lower quality level of the object. Again, it may lead the individual to cut down their answer. In such a case, the parameter δ is expected to be negative. A positive δ is however possible and could be interpreted as "yea-saying" behavior: an individual overestimates their WTP in order to acknowledge the interviewer's proposition. But we are not aware of data supporting this interpretation, i.e. estimated values of δ are negative. Note that a model with shift effects assumes that only the follow-up question gives rise to a shift effect and that the shift is independent of the bids proposed. These last two hypotheses are quite restrictive. Indeed, it could be difficult to believe that the respondent answers the first question truthfully, and that the behavioral response is not the same if the proposed bid is close to the individual's true WTP or if it is far from it. However, these hypotheses are required by an identification condition and we cannot relax them as we have done in the anchoring model.
Anchoring and Shift effects
[START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF] modifies the Herriges and Shogren anchoring model to allow both anchoring and shift effects:
W_{1i} = W_{0i} and W_{2i} = (1 - γ) W_{1i} + γ b_{1i} + δ   (10)
The interpretation is simply a certain combination of both the anchoring and the shift effect explanations. Indeed, we can rewrite W_{2i} = W_{1i} + γ (b_{1i} - W_{1i}) + δ, that is, an individual may update their prior WTP with a constant term (shift) and a multiplicative factor of the distance between the prior WTP and the first bid offer (anchoring). See [START_REF] Aadland | Incentive incompatibility and starting-point bias in iterative valuation questions: comment[END_REF] and [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions: reply[END_REF] for estimation details.
Framing effect
DeShazo (2002) proposes decomposing iterative questions into their ascending and descending sequences. His empirical results suggest that inconsistency of responses occurs only in ascending sequences. It leads him to recommend using in practice the double-bounded model with only decreasing follow-up questions. This last model can be written:
W_{1i} = W_{0i} and W_{2i} = W_{0i} if r_{1i} = 0   (11)
The distinction between ascending and descending sequences leads Deshazo to attribute the parameter inconsistency to framing effect rather than anchoring effect.
Indeed, using prospect theory [START_REF] Kahneman | Prospect theory: an analysis of decisions under risk[END_REF], he argues that if the subject's first response is "yes", the first bid offer is interpreted as a reference point: compared to it, the follow-up question is framed as a loss and thus individuals are more likely to answer "no" to the second offer. In contrast, if the subject's first response is "no", the first bid offer is not interpreted as a reference point. Thus, the behavioral responses to ascending versus descending iterative questions are different.
New models
Empirical results based on all the previous models show that in the presence of anchoring, shift or framing effects, the estimated mean and the estimated dispersion of WTP can be significantly biased. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] conclude that the efficiency gains from the follow-up question are lost once we control for the anchoring effect. They suggest using the single-bounded model in the presence of a significant anchoring effect and thus removing the follow-up questions. [START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF] shows that most of the biases are more likely to occur in the ascending sequence. It leads him to recommend keeping the follow-up questions from the descending sequences and removing the follow-up questions from the ascending sequences only. In order to get information from the ascending sequences back, we propose to correct biases in the follow-up questions from the ascending sequences. We consider three different models, with W_{1i} = W_{0i}:
Framing & Anchoring effects - W_{2i} = W_{1i} + γ (b_{1i} - W_{1i}) r_{1i}   (12)
If the subject's response is "no" to the first bid offer, r_{1i} = 0 and W_{2i} = W_{1i}; otherwise the WTP is updated with an anchoring effect, as defined in the model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF].
Framing & Shift effects - W_{2i} = W_{1i} + δ r_{1i}   (13)
If the subject's response is "no" to the first bid offer, r_{1i} = 0 and W_{2i} = W_{1i}; otherwise the WTP is updated with a shift effect, as defined in the model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF].
Framing & Anchoring & Shift effects - W_{2i} = W_{1i} + γ (b_{1i} - W_{1i}) r_{1i} + δ r_{1i}   (14)
If the subject's response is "no" to the first bid offer, r_{1i} = 0 and W_{2i} = W_{1i}; otherwise the WTP is updated with both anchoring and shift effects, as defined in the model proposed by [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF].
Implementation of this model can be based on a probit model, with the probability that individual i answers yes to the j-th question, j = 1, 2, equal to:
P(W_{ji} > b_{ji}) = Φ( X_i α - (1/σ) b_{ji} + θ (b_{1i} - b_{ji}) D_j r_{1i} + λ (D_j r_{1i}) )   (15)
where D_1 = 0 and D_2 = 1, α = β/σ, θ = γ/(σ - γσ) and λ = δ/(σ - γσ). Based on this equation, the parameters are interrelated according to:
β = ασ, γ = θσ/(1 + θσ) and δ = λσ(1 - γ).   (16)
The models proposed in (12) and (13) are two special cases of the model proposed in (14), respectively with δ = 0 and γ = 0. It follows that they can be implemented based on the probability (15), respectively with λ = 0 and θ = 0.
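A sketch of how the probability in (15) can be turned into a maximum-likelihood estimator is given below. It is only an illustration: the reduced-form parameterisation (estimating α, 1/σ, θ and λ directly), the function names and the numerical safeguards are our implementation choices; the structural parameters are then recovered with equation (16), and the restricted models (12) and (13) are obtained by fixing θ = 0 or λ = 0.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def negloglik(params, X, b1, b2, r1, r2):
    """Negative log-likelihood of the probit in eq. (15) over both answers.
    params = (alpha_1, ..., alpha_k, 1/sigma, theta, lambda)."""
    k = X.shape[1]
    alpha, inv_sigma, theta, lam = params[:k], params[k], params[k + 1], params[k + 2]
    xb = X @ alpha
    p1 = norm.cdf(xb - inv_sigma * b1)                    # first question: D_1 = 0
    p2 = norm.cdf(xb - inv_sigma * b2                     # second question: D_2 = 1,
                  + theta * (b1 - b2) * r1 + lam * r1)    # effects only when r_1i = 1
    eps = 1e-12                                           # guard against log(0)
    ll = (r1 * np.log(p1 + eps) + (1 - r1) * np.log(1 - p1 + eps)
          + r2 * np.log(p2 + eps) + (1 - r2) * np.log(1 - p2 + eps))
    return -ll.sum()

def recover_structural(params, k):
    """Map reduced-form estimates back to (beta, sigma, gamma, delta), eq. (16)."""
    alpha, inv_sigma, theta, lam = params[:k], params[k], params[k + 1], params[k + 2]
    sigma = 1.0 / inv_sigma
    beta = alpha * sigma
    gamma = theta * sigma / (1.0 + theta * sigma)
    delta = lam * sigma * (1.0 - gamma)
    return beta, sigma, gamma, delta

# estimation, e.g.: res = minimize(negloglik, x0, args=(X, b1, b2, r1, r2), method="BFGS")
```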
Application
To test our model empirically, we use the main results of a contingent valuation survey which was carried out within a research program that the French Ministry in charge of environmental affairs started in 1995. It is based on a contingent valuation survey which involves a sample of users of the natural reserve of Camargue. The policy proposed was to preserve the natural reserve using an entrance fee. The survey was administered to 218 recreational visitors during the spring of 1997, using face-to-face interviews. Recreational visitors were selected randomly in seven sites all around the natural reserve. The WTP question used in the questionnaire was a dichotomous choice with follow-up. There was a high response rate (92.6%). For a complete description of the contingent valuation survey, see [START_REF] Claeys-Mekdade | Quelle valeur attribuer à la camargue? une perspective interdisciplinaire économie et sociologie[END_REF]. Means of the WTPs were estimated using a linear model [START_REF] Mcfadden | Issues in the contingent valuation of environmental goods: Methodologies for data collection and analysis[END_REF]. Indeed, [START_REF] Crooker | Parametric and semi-nonparametric estimation of willingness-to-pay in the dichotomous choice contingent valuation framework[END_REF] show that the simple linear probit model is often more robust in estimating the mean WTP than other parametric and semiparametric models. Table 1 presents estimated means μ and estimated dispersions σ of the WTPs for all models, with standard errors given in parentheses. The mean of WTPs is a function of parameters: its standard error and its confidence interval cannot be obtained directly from the estimation results. Confidence intervals of μ are presented in brackets; they are obtained by simulation with the Krinsky and Robb procedure (a sketch of this simulation is given below); see Haab and McConnell (2003, pp. 106-113) for more details. From Table 1, it is clear that the standard errors (in parentheses) and the confidence intervals (in brackets) decrease considerably when one uses the usual double-bounded model (Double) instead of the single bounded model (Single). This result confirms the expected efficiency gains provided when the second bid is taken into account [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF]. However, estimates of the mean of WTPs in both models are very different (113.52 vs. 81.78). Moreover, the mean of WTPs of the single bounded model, μ = 113.52, does not belong to the 95% confidence interval of the mean of WTPs in the double bounded model, [78.2; 85.5]. Such inconsistent results lead us to consider other models, as presented in the previous section. First, we estimate a model with an anchoring effect (Anchoring), as defined in (4) by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]. From Table 1, we can see that the anchoring parameter, γ = 0.52, is significant. Indeed, a Likelihood Ratio test, equal to twice the difference of the log-likelihood estimates (LR = 5.78, P-value = 0.016), rejects the null hypothesis γ = 0. This test confirms the presence of an anchoring effect in the respondents' answers. When correcting for the anchoring effect, results are consistent, in the sense that the mean of WTPs of the single bounded model, μ = 113.52, belongs to the 95% confidence interval of the anchoring model, [98.2; 155.4]. However, standard errors and confidence intervals increase significantly, so that, even if follow-up questioning increases the precision of parameter estimates (see Double), efficiency gains are completely lost once the anchoring effect is taken into account (see Anchoring). According to this result, "the single-bounded approach may be preferred when the degree of anchoring is substantial" (Herriges and Shogren, 1996, p. 124).
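The Krinsky and Robb intervals reported above can be simulated along the following lines. This is a generic sketch: `wtp_fn` stands for whatever function maps a parameter draw to the implied mean WTP (for the linear model, the sample average of x_i(β)), and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def krinsky_robb_ci(theta_hat, vcov, wtp_fn, n_draws=5000, level=0.95, seed=0):
    """Krinsky-Robb confidence interval for the mean WTP: draw parameter vectors
    from their estimated asymptotic normal distribution, compute the mean WTP for
    each draw, and read off the empirical quantiles."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(theta_hat, vcov, size=n_draws)
    wtp_draws = np.array([wtp_fn(d) for d in draws])
    tail = (1.0 - level) / 2.0
    return np.quantile(wtp_draws, [tail, 1.0 - tail])
```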
Then, we estimate a model with a shift effect (Shift), as defined in (9) by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]. Results lead to conclusions similar to those of the double-bounded model. Indeed, we can see a large gain in efficiency: standard errors are smaller and confidence intervals are narrower. Moreover, results are inconsistent: the mean of WTPs of the single-bounded model, μ = 113.52, does not belong to the 95% confidence interval of the shift model, [85.6; 93.8]. Parameter estimates of a model with both anchoring and shift effects, as defined in (10) by [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF], are given in the line named Anchoring & Shift. Based on the criterion of maximum likelihood, this model is better than the others (ℓ = -172.82). Results are consistent, in the sense that μ = 113.52 belongs to the 95% confidence interval of the model with anchoring and shift effects, [107.3; 176.0]. However, we can see a loss of precision compared to the single-bounded model. The only model previously presented in the literature which gives results consistent with the single-bounded model together with a gain in efficiency is the model proposed by [START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF], as defined in (11). Results are presented in the line named Framing. The 95% confidence interval [90.9; 121.8] includes the mean of WTPs μ = 113.52 and is narrower than the 95% confidence interval of the single-bounded model, [87.9; 138.7]. In his conclusion, DeShazo recommends removing all the answers which could be influenced by a framing effect, that is, the answers to the second bids if the respondents answer yes to the first bids. From the previous results, it is clear that the existing models do not handle the problem of starting point bias in an effective manner. This suggests that the best we can do in practice is to remove the answers which could be subject to starting point bias. Nevertheless, the use of iterative questions should provide more information about the distribution of WTPs. Thus, better results should be expected if all the answers to iterative questions are used and if a correct model of starting-point bias is specified. To go further, we consider the three new models proposed in the last section, which use all the answers to the second bids:

• Line Fram & Anch presents estimation results of the model defined in (12), that is, a model with an anchoring bias in the ascending sequence only. We can see that the 95% confidence interval of the mean of WTPs is equal to [93.6; 119.3].
• Line Fram & Shift presents estimation results of the model defined in (13), that is, a model with a shift effect in the ascending sequence only. We can see that the 95% confidence interval of the mean of WTPs is equal to [103.9; 129.7].
• Line Fram & Anch & Shift presents estimation results of the model defined in (14), that is, a model with an anchoring bias and a shift effect in the ascending sequence only. We can see that the 95% confidence interval of the mean of WTPs is equal to [101.4; 131.1].

It is clear that for these three models, results are consistent with the single-bounded model: the mean of WTPs μ = 113.52 belongs to the three confidence intervals. Furthermore, results are more precise: the standard errors (in parentheses) are smaller and the confidence intervals (in brackets) are narrower than those of the single-bounded model. In addition, we can remark that the two models Fram & Anch and Fram & Shift are special cases of the model Fram & Anch & Shift, respectively with δ = 0 and γ = 0. From this last, more general model, we cannot reject γ = 0 (LR = 0.004, P-value = 0.99), but we can reject δ = 0 (LR = 10.31, P-value = 0.001). These results lead us to select the model Fram & Shift as the one which best fits our contingent valuation data, that is, a model with a shift effect in the ascending sequences only.
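The model selection above rests on likelihood ratio tests between nested models. A minimal sketch follows, using the rounded log-likelihood values reported in Table 1, so the statistics match the reported LR = 0.004 and LR = 10.31 only up to rounding.

```python
from scipy.stats import chi2

def lr_test(loglik_general, loglik_restricted, df=1):
    """LR = 2 (l_general - l_restricted), asymptotically chi-squared(df)."""
    lr = 2.0 * (loglik_general - loglik_restricted)
    return lr, chi2.sf(lr, df)

# General model: Fram & Anch & Shift (l = -171.8)
print(lr_test(-171.8, -171.8))  # H0: gamma = 0 (Fram & Shift), not rejected
print(lr_test(-171.8, -176.9))  # H0: delta = 0 (Fram & Anch), rejected
```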
Table 2 presents full econometric results for several models with consistent results: the single-bounded model (Single), the model of DeShazo (Framing) and our selected model (Fram & Shift). The estimates of the vector of coefficients β (rather than β/σ), the standard deviation σ and the shift parameter δ are directly presented, see equations (1), (11) and (13). It is clear from this table that the standard errors in the Fram & Shift model are nearly always significantly reduced compared to the standard errors in the other models. Indeed, only one parameter is significant in the Single model, whereas eight parameters are significant in the Fram & Shift model. In other words, efficiency gains are still present in our selected model (which takes into account all the answers) compared to the other models (which remove answers that could be influenced by the first bid offer).

Conclusion

Follow-up questions in the double-bounded model are expected to give more information on the willingness-to-pay of respondents. Accordingly, many economists have favored this model to obtain gains in efficiency over the single-bounded model. However, recent studies show that this model can be inadequate and can give inconsistent results. Many different models have been considered in the literature to correct anomalies in respondent behavior that appear in dichotomous choice contingent valuation data. However, the corrections proposed by these models show that the efficiency gains given by the iterative questions are lost once the inconsistency of responses is controlled for. The main contribution of this paper is to propose a model to control for starting-point bias in the double-bounded model and, contrary to previous research, still obtain gains in efficiency relative to a single dichotomous choice question. [START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF] shows that descending and ascending sequences lead to different behavioral responses and recommends keeping the follow-up questions only if the answers to the first bids are no. To benefit from more information, rather than discarding the answers which could be influenced by the first bid offer, we propose different models of starting-point bias in ascending iterative questions only. Our empirical results show that a model with shift effects in the ascending questions gives results consistent with the single-bounded model and provides large efficiency gains. This supports the idea that framing, anchoring and shift effects can be combined in an efficient manner.

Table 1: Mean and dispersion of WTPs in French Francs
Model                 µ [95% CI]              σ               γ              δ                ℓ
Single                113.52 [87.9; 138.7]    45.38 (23.61)   -              -                -53.3
Double                81.78  [78.2; 85.5]     42.74 (5.23)    -              -                -179.6
Anchoring             126.38 [98.2; 155.4]    82.11 (40.83)   0.52 (0.23)    -                -176.7
Shift                 89.69  [85.6; 93.8]     44.74 (5.77)    -              -8.10 (2.90)     -175.3
Anchoring & Shift     141.38 [107.3; 176.0]   85.50 (43.78)   0.52 (0.24)    -7.81 (2.91)     -172.8
Framing               106.72 [90.9; 121.8]    40.39 (11.91)   -              -                -68.8
Fram & Anch           106.71 [93.6; 119.3]    60.19 (14.77)   0.40 (0.16)    -                -176.9
Fram & Shift          116.98 [103.9; 129.7]   65.03 (14.40)   -              -30.67 (14.33)   -171.8
Fram & Anch & Shift   116.39 [101.4; 131.1]   64.63 (16.34)   -0.02 (0.42)   -31.60 (21.77)   -171.8

Table 2: Parameter estimates, standard errors in parentheses (⋆: significant at 95%)

Footnotes:
1. [START_REF] Kahneman | Reference points, anchor norms, and mixed feelings[END_REF] proposes clear definitions of anchoring and framing effects and emphasizes the difference in the underlying mental processes.
2. As long as the model (11), proposed by DeShazo (2002), provides results consistent with the single-bounded model, biases occur in ascending sequences only. Thus, there is no need to consider more complicated models where biases occur in both ascending and descending sequences.
3. The Camargue is a wetland in the south of France covering 75 000 hectares. The Camargue is a major wetland in France and is host to many fragile ecosystems. The exceptional biological diversity is the result of water and salt in an "amphibious" area inhabited by numerous species. The Camargue is the result of an endless struggle between the river, the sea and man. During the last century, while the construction of dikes and embankments salvaged more land for farming to meet economic needs, it cut off the Camargue region from its environment, depriving it of regular supplies of fresh water and silt previously provided by flooding. Because of this problem and to preserve the wildlife, the water resources are now managed strictly. There are pumping, irrigation and draining stations and a dense network of channels throughout the river delta. However, the costs of such installations are quite large.
4. The first bid b1i was drawn randomly from {5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100}. The second bid b2i was drawn randomly from the same set of values with b2i < b1i and with the additional amount 3 (resp. b2i > b1i and 120) if the answer to the first bid was no (resp. yes). The numbers of answers (no,no), (no,yes), (yes,no) and (yes,yes) are respectively equal to 20, 12, 44 and 121.
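For readers who want to mimic this design in a simulation, the bid mechanism of footnote 4 can be sketched as follows; the function and variable names are ours, not part of the original study.

```python
import numpy as np

rng = np.random.default_rng(0)
first_bids = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100])

def follow_up_bid(b1, r1):
    """Follow-up bid as in footnote 4: a smaller amount (down to 3) after a
    'no' answer, a larger amount (up to 120) after a 'yes' answer."""
    if r1 == 0:
        candidates = np.append(first_bids[first_bids < b1], 3)
    else:
        candidates = np.append(first_bids[first_bids > b1], 120)
    return rng.choice(candidates)

# Example: a respondent who refuses a first bid of 40
print(follow_up_bid(40, r1=0))
```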
30,549
[ "843051", "1331865" ]
[ "15080", "45168" ]
00175925
en
[ "shs", "sde" ]
2024/03/05 22:32:10
2007
https://shs.hal.science/halshs-00175925/file/Flachaire_Hollard_07b.pdf
Keywords: starting point bias, preference uncertainty, contingent valuation JEL Classification: Q26, C81 Introduction The NOAA panel recommends the use of a dichotomous choice format in contingent valuation (CV) surveys. This format has several advantages: it is incentive-compatible, simple and cognitively manageable. Furthermore, respondents face a familiar task, similar to real referenda. The use of a single valuation question, however, presents the inconvenience of providing the researcher with only limited information. To gather more information, [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF] proposed adding a follow-up question. This is the double-bounded model. This format, however, has been proved to be sensitive to starting point bias, that is, respondents anchor their willingness-to-pay (WTP) to the bids. It implies that WTP estimates may vary as a function of the bids. Many authors propose some specific models to handle this problem [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF][START_REF] Cooper | One and one-half bids for contingent valuation[END_REF][START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF]. The behavioral assumption behind these models is that respondents hold a unique and precise willingness-to-pay prior to the survey. Observed biases are interpreted as a distortion of this initial willingness-to-pay during the survey. Independently, several studies document the fact that individuals are rather unsure of their own willingness-to-pay [START_REF] Li | Discrete choice under preference uncertainty: an improved structural model for contingent valuation[END_REF][START_REF] Ready | Contingent valuation when respondents are ambivalent[END_REF], 2001[START_REF] Welsh | Elicitation effects in contingent valuation: comparisons to a multiple bounded discrete choice approach[END_REF][START_REF] Van Kooten | Preference uncertainty in non-market valuation: a fuzzy approach[END_REF][START_REF] Hanley | What's it worth? Exploring value uncertainty using interval questions in contingent valuation[END_REF][START_REF] Alberini | Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty[END_REF]. To account for such uncertainty, these studies allow respondents to use additional answers to valuation questions. Rather than the usual "yes", "no" and "don't know" alternatives, intermediate responses, such as "probably yes" or "probably no", are allowed. Alternatively, an additional question asks respondents how certain they are of their answers and provides a graduated scale. In contingent valuation, starting-point bias and respondent's uncertainty have been handled in separate studies. In this article we develop a dichotomous choice modelhereafter called the Range model -in which individuals hold a range of acceptable values, rather than a precisely defined value of their willingness-to-pay. The range model is drawn from the principle of coherent arbitrariness, suggested by Ariely et al. (2003b). 
Prior to the survey, the true willingness to pay is assumed to be uncertain in an interval with upper and lower bounds. Confronted with the first valuation question, respondents select a value and then act on the basis of that selected value. Because of this initial uncertainty, the initial choice is subject to starting point bias. In contrast, the subsequent choices are no longer sensitive to the bid offers. A clear-cut prediction follows: biases occur within a given range and affect the first answer only. The Range model thus provides an alternative interpretation of the starting point bias in the dichotomous choice valuation surveys. An empirical study is presented to compare various models, using the well-known Exxon Valdez contingent valuation survey. Results show that a special case of the proposed Range model, in which a "yes" response is given when the bid value falls within the range of acceptable values, is supported by the data, i.e. when uncertain, individuals tend to say "yes". The article is organized as follows. The following section presents the Range model and the respondent's decision process. The subsequent sections provide estimation details, give further interpretation and present an application. Conclusions appear in the final section. The Range model The Range model derives from the principle of "coherent arbitrariness" (Ariely et al. 2003b). These authors conducted a series of valuation experiments (i.e. experiments in which the subjects have to set values for objects they are not familiar with). They observed that "preferences are initially malleable but become imprinted (i.e. precisely defined and largely invariant) after the individual is called upon to make an initial decision". But, prior to imprinting, preferences are "arbitrary, meaning that they are highly responsive to both positive and normative influences". In a double-bounded CV survey, two questions are presented to respondents. The first question is "Would you agree to pay x$?". The second, or follow-up, question is similar but asks for a higher bid offer if the initial answer is yes and a lower bid offer otherwise. Confronted with these iterative questions, with two successive bids proposed, the principle of coherent arbitrariness leads us to consider a three-step decision process: 1. Prior to a valuation question, the respondent holds a range of acceptable values 2. Confronted with a first valuation question, the respondent selects a value inside that range 3. The respondent answers the questions according to the selected value. The following subsections detail each step. A range of acceptable values At first, let us assume that a respondent i does not hold a precise willingness-to-pay but rather an interval of acceptable values: wtp i ∈ W i , W i with W i -W i = δ. (1) The lower bound and the upper bound are different for each respondent, but we assume the width of the range δ to be constant across individuals.1 Several psychological and economic applications support this idea. For instance, [START_REF] Tversky | Judgment under uncertainty: Heuristics and biases[END_REF] and Ariely et al. (2003bAriely et al. ( , 2003a) ) suggest the existence of such an interval. 
In addition, several studies in contingent valuation explore response formats that allow for the expression of uncertainty, among others see [START_REF] Li | Discrete choice under preference uncertainty: an improved structural model for contingent valuation[END_REF], [START_REF] Ready | Contingent valuation when respondents are ambivalent[END_REF], Welsh andPoe (1998), van Kooten et al. (2001), [START_REF] Hanley | What's it worth? Exploring value uncertainty using interval questions in contingent valuation[END_REF] and [START_REF] Alberini | Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty[END_REF]. These studies also conclude that there is a range of values for which respondents are uncertain. Selection of a particular value Confronted with a first bid offer b 1i a respondent i selects a specific value inside his range of acceptable values W i , W i . The selection rule can take different forms. We propose a selection rule in which the respondent selects a value so as to minimize the distance between his range of willingness-to-pay and the proposed bid: W i = Min wtp i |wtp i -b 1i | with wtp i ∈ W i , W i . (2) This selection rule has attractive features. It is very simple and tractable. It is also in accordance with the literature on anchoring, which states that the proposed bid induces subject to revise their willingness to pay as if the proposed bid conveyed some information about the "right" value [START_REF] Chapman | Anchoring, activation, and the construction of values[END_REF]. At a more general level, the literature on cognitive dissonance suggests that subjects act so as to minimize the gap between their own opinion and the one conveyed by new information. In this range model the first bid plays the role of an anchor: it attracts the willingnessto-pay. A different b 1i results in the selection of a different value W i . Thus, this selection rule should exhibit a sensitivity of the first answer to the first bid, that is, an anchoring effect. Consequently, it is expected to produce anomalies such as starting point bias. Answers to questions The last step of the decision process deals with the respondent's answer to questions. It is straightforward that a respondent will answer yes if the bid is less than the lower bound of his range of acceptable value W i . And he will answer no if the bid is higher than the upper bound of his range W i . However, it is less clear what is happening when the bid belongs to the interval of acceptable values. -Answers to the first question - A respondent i will agree to pay any amount below W i and refuse to pay any amount that exceeds W i . When the first bid belongs to his interval of acceptable values, he may accept or refuse the bid offer. Here, we do not impose a precise rule: respondents can answer yes or no with any probability when the bid offer belongs to the interval. If the bid belongs to the range of acceptable values, respondents answer yes to the first question with a probability ξ and no with a probability 1 -ξ. Thus, the probability that a respondent i will answer yes to the first question is equal to:2 P (yes) = P (b 1i < W i ) + ξ P (W i < b 1i < W i ) with ξ ∈ [0, 1]. (3) In other words, a respondent's first answer is yes with a probability 1 if the bid is below his range of acceptable values and with a probability ξ if the bid belongs to his range. A ξ close enough to 1 (resp. 0) means that the respondent tends to answer yes (resp. 
no) when the bid belongs to the range of acceptable values. Estimation of the model will provide an estimate of ξ. -Answers to follow-up questions - The uncertainty that arises in the first answer disappears in the follow-up answers. A respondent answers yes to the follow-up question if the bid b2i is below his willingness-to-pay, Wi > b2i, and no if the bid is above his willingness-to-pay, Wi < b2i (by definition, the follow-up bid is higher or smaller than the first bid, that is, b2i ≠ b1i).

Estimation

In this section, we present in detail how to estimate the Range model. It is assumed that if the first bid b1i belongs to the interval of acceptable values of respondent i, [W̲i; W̄i] (where W̲i and W̄i denote the lower and upper bounds of the range), he will answer yes with a probability ξ and no with a probability 1 - ξ. We can write these two probabilities as follows:
ξ = P(W̲i < b1i < Wξi) / P(W̲i < b1i < W̄i) and 1 - ξ = P(Wξi < b1i < W̄i) / P(W̲i < b1i < W̄i), (4)
with Wξi ∈ [W̲i; W̄i]. Note that, when ξ = 0 we have Wξi = W̲i, and when ξ = 1 we have Wξi = W̄i. From (4) and (3), the respondent i answers yes or no to the first question with the following probabilities:
P(yes) = P(Wξi > b1i) and P(no) = P(Wξi ≤ b1i). (5)
It is worth noting that these probabilities are similar to the probabilities derived from a single-bounded model with Wξi assumed to be the willingness-to-pay of respondent i. It follows that the mean value of WTPs obtained with a single-bounded model would correspond to the mean of the Wξi in our model, for i = 1, . . . , n. The use of follow-up questions will lead us to identify and estimate ξ and to provide a range of values rather than a single mean of WTPs. If the initial bid belongs to his range of acceptable values, respondent i selects the value Wi = b1i, see (2). If his first answer is yes, a follow-up higher bid b2i^h > b1i is proposed and his second answer is necessarily no, because Wi < b2i^h. Conversely, if his first answer is no, a follow-up lower bid b2i^l < b1i is proposed and his second answer is necessarily yes, because Wi > b2i^l. It follows that, if the first and the second answers are similar, the first bid is necessarily outside the interval [W̲i; W̄i] and the probabilities of answering no-no and yes-yes are respectively equal to
P(no, no) = P(W̄i < b2i^l) and P(yes, yes) = P(W̲i > b2i^h). (6)
If the answers to the initial and the follow-up questions are respectively yes and no, two cases are possible: either the first bid is below the range of acceptable values and the second bid is higher than the selected value Wi = W̲i, or the first bid belongs to the range of values. We have
P(yes, no) = P(b1i < W̲i < b2i^h) + ξ P(W̲i < b1i < W̄i) (7)
= P(b1i < W̲i < b2i^h) + P(W̲i < b1i < Wξi) (8)
= P(W̲i < b2i^h) - P(Wξi < b1i). (9)
Similarly, the probability that respondent i will answer successively no and yes is:
P(no, yes) = P(b2i^l < W̄i < b1i) + (1 - ξ) P(W̲i < b1i < W̄i) (10)
= P(Wξi < b1i) - P(W̄i < b2i^l). (11)
To make the estimation possible, a solution is to rewrite all the probabilities in terms of Wξi. In our model, we assume that the range of acceptable values has a width which is the same for all respondents. This allows us to define two parameters:
δ1 = W̲i - Wξi and δ2 = W̄i - Wξi. (12)
Note that δ1 ≤ 0 and δ2 ≥ 0 because Wξi ∈ [W̲i; W̄i].
Using (12) in (6), (9) and (11), we have
P(no, no) = P(Wξi < b2i^l - δ2), P(no, yes) = P(b2i^l - δ2 < Wξi < b1i), (13)
P(yes, yes) = P(Wξi > b2i^h - δ1), P(yes, no) = P(b1i < Wξi < b2i^h - δ1). (14)
Let us consider that the willingness-to-pay is defined as
Wξi = α + Xiβ + ui, ui ∼ N(0, σ²), (15)
where the unknown parameters β, α and σ² are respectively a k × 1 vector and two scalars, and Xi is a 1 × k vector of explanatory variables. The number of observations is equal to n and the error term ui is Normally distributed with a mean of zero and a variance of σ². This model can easily be estimated by maximum likelihood, using the log-likelihood function
ℓ(y, β) = Σ_{i=1}^{n} [ r1i r2i log P(yes, yes) + r1i (1 - r2i) log P(yes, no) + (1 - r1i) r2i log P(no, yes) + (1 - r1i)(1 - r2i) log P(no, no) ], (16)
where r1 (resp. r2) is a dummy variable which is equal to 1 if the answer to the first bid (resp. to the second) is yes, and is equal to 0 if the answer is no. To estimate our model, we can derive from (13) and (14) the probabilities that should be used:
P(no, no) = Φ[(b2i^l - δ2 - α - Xiβ)/σ], (17)
P(no, yes) = Φ[(b1i - α - Xiβ)/σ] - Φ[(b2i^l - δ2 - α - Xiβ)/σ], (18)
P(yes, no) = Φ[(b2i^h - δ1 - α - Xiβ)/σ] - Φ[(b1i - α - Xiβ)/σ], (19)
P(yes, yes) = 1 - Φ[(b2i^h - δ1 - α - Xiβ)/σ]. (20)
Non-negativity of the probabilities (18) and (19) requires respectively b1i > b2i^l - δ2 and b2i^h - δ1 > b1i. We have defined δ1 ≤ 0 and δ2 ≥ 0, see (12): in such cases the probabilities (18) and (19) are necessarily positive. However, the restrictions δ1 ≤ 0 and δ2 ≥ 0 are not automatically satisfied in the estimation. To overcome this problem, we can consider a more general model, for which our Range model becomes a special case.

Interrelation with the Shift model

It is worth noting that the probabilities (13) and (14) are quite similar to the probabilities derived from a Shift model [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], but in which we consider two different shifts. Indeed, in a Shift model, respondents are supposed to answer the first question with a prior willingness-to-pay Wi and the second question with an updated willingness-to-pay defined as:
W′i = Wi + δ. (21)
The probability of answering successively yes and no is:
P(yes, no) = P(b1i < Wi ∩ W′i < b2i^h) = P(b1i < Wi < b2i^h - δ), (22)
which is equal to the corresponding probability in (14) with δ = δ1. Similar calculations can be made for the other probabilities, to show that the Range model can be estimated as a model with two different shifts in ascending/descending sequences. The underlying decision process is nevertheless very different from the one developed in the Range model. In the Shift model, respondents answer questions according to two different values of WTP, Wi and W′i. The first bid offer is interpreted as providing information about the cost or the quality of the object. Indeed, a respondent can interpret a higher bid offer as paying more for the same object and a lower bid offer as paying less for a lower quality object. Alternatively, a higher bid can make no sense to the individual, if delivery was promised at the lower bid.
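As an illustration, the maximum-likelihood estimation described in equations (16)-(20) can be sketched as follows. This is only a sketch under our own naming and data-layout assumptions; the sign restrictions δ1 ≤ 0 and δ2 ≥ 0 are imposed here through an exponential reparameterisation, whereas the paper instead relaxes them through the more general model presented next.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def range_negloglik(params, X, b1, b2, r1, r2):
    """Negative log-likelihood of the Range model, equations (16)-(20).
    X is assumed to contain a column of ones (so alpha is part of beta);
    b2 is the follow-up bid actually offered (higher after a yes, lower
    after a no); r1, r2 are 1 for a yes answer and 0 for a no."""
    k = X.shape[1]
    beta = params[:k]
    sigma = np.exp(params[k])          # sigma > 0
    delta1 = -np.exp(params[k + 1])    # delta1 <= 0, as required by (12)
    delta2 = np.exp(params[k + 2])     # delta2 >= 0
    mu = X @ beta
    z1 = (b1 - mu) / sigma
    z_low = (b2 - delta2 - mu) / sigma     # relevant after a first 'no'
    z_high = (b2 - delta1 - mu) / sigma    # relevant after a first 'yes'
    p_nn = norm.cdf(z_low)                           # equation (17)
    p_ny = norm.cdf(z1) - norm.cdf(z_low)            # equation (18)
    p_yn = norm.cdf(z_high) - norm.cdf(z1)           # equation (19)
    p_yy = 1.0 - norm.cdf(z_high)                    # equation (20)
    p = (r1 * r2 * p_yy + r1 * (1 - r2) * p_yn
         + (1 - r1) * r2 * p_ny + (1 - r1) * (1 - r2) * p_nn)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# res = minimize(range_negloglik, start_values, args=(X, b1, b2, r1, r2),
#                method="BFGS")   # start_values must be chosen by the user
```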
[START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF] propose taking into account the dynamic aspect of followup questions: they suggest specification allowing the initial and follow-up answers to be based on two different WTP values. The WTP is broken down in two parts, a fixed component and a varying component over repeated questions. The random effect model can be written: Random-effect model W 1i = W ⋆ i + ε 1i W 2i = W ⋆ i + ε 2i where W ⋆ i = α + X i β + ν i . ( 23 ) The difference between the two WTP values is due to the random shocks ε 1i and ε 2i , assumed to be independent. The fixed component W ⋆ i can be split into two parts. X i β represent the part of the willingness-to-pay due to observed individual specific characteristics. ν i varies with the individual, but remains fixed over the indivual's responses: it relates unobserved individual heterogeneity and introduces a correlation between W 1i and W 2i . The correlation is high (resp. low) if the variance of the fixed component is large (resp. small) relative to the variance of the varying component, see [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] for more details. At the limit, if the two WTP values are identical, W 1i = W 2i , the correlation coefficient is equal to one, ρ = 1. [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] have modified this random-effect model to the case of the Shift model. Since the Range model can be estimated as a model with two different shifts in ascending/descending sequences (see above), the use of a random-effect model in the case of the Range model is straightforward. From equations ( 17), ( 18), ( 19) and ( 20), we can write the probability that the individual i answers yes to the j th question, j = 1, 2: P (W ji > b ji ) = Φ [(α + X i β -b ji + δ 1 D j r 1i + δ 2 D j (1 -r 1i )) /σ] , (24) where D 1 = 0, D 2 = 1, and r 1i equals 1 if the answer to the first question is yes and 0 otherwise. Consequently, the Range model can be estimated from the following bivariate probit model: P (yes, yes) = Φ [α 1 + X i θ + γ b 1i ; α 2 + X i θ + γ b 2i + λ r 1i ; ρ ] . (25) The parameters are interrelated according to: α = -α 1 /γ, β = -θ/γ, σ = -1/γ, δ 1 = -λ/γ and δ 2 = (α 1 -α 2 )/γ. ( 26 ) Estimation with a bivariate probit model based on equation ( 25) does not impose any restriction on the parameters. The Range model is obtained if δ 1 ≤ 0 and δ 2 ≥ 0; the Shift model is obtained if δ 1 = δ 2 . It is clear that the Range model and the Shift model are non-nested; they can be tested through (25). Interpretation We have seen above that the estimation of the Range model derives from a general model, that also encompasses the Shift model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], see ( 25) and ( 26). Estimation of model ( 15), based on equation ( 25), provides estimates of α, β, σ, δ 1 and δ 2 , from which we can estimate a mean µ ξ and a dispersion σ of the willingness-to-pay - [START_REF] Hanemann | The statistical analysis of discrete response CV data[END_REF] by µ ξ = n -1 n i=1 W ξ i = n -1 n i=1 (α + X i β). ( 27 ) This last mean of WTPs would be similar to the mean of WTPs estimated using the first questions only, that is, based on the single-bounded model. 
Additional information can be obtained from the use of follow-up questions: estimates of δ 1 and δ 2 allows us to estimate a range of means of WTPs. The mean value of WTPs estimated from our model µ ξ is the mean of the estimates of W ξ i for all the respondents, i = 1, . . . , n. From ( 12), we can derive the lower bounds of the range of acceptable values for all respondents and a mean of WTPs associated with it: µ 0 = n -1 n i=1 W i = n -1 n i=1 (W ξ i + δ 1 ) = µ ξ + δ 1 , δ 1 ≤ 0. ( 28 ) It would be the mean of WTPs when respondents always answer no if the bid belongs to their range of acceptable value. Similarly, we can derive the upper bounds of their range, µ 1 = n -1 n i=1 W i = n -1 n i=1 (W ξ i + δ 2 ) = µ ξ + δ 2 , δ 2 ≥ 0. (29) It follows that we can provide a range of means of WTPs [µ 0 ; µ 1 ] = [µ ξ + δ 1 ; µ ξ + δ 2 ] with δ 1 ≤ 0, and δ 2 ≥ 0. This range can be estimated with μξ , δ1 and δ2 . The lower bound µ 0 corresponds to the case where respondents always answer no if the bid belongs to the range of acceptable values (ξ = 0). Conversely, the upper bound µ 1 corresponds to the case where respondents always answer yes if the bid belongs to the range of acceptable values (ξ = 1). How respondents answer the question when the bid belongs to the range of acceptable values can be tested as follows: • respondents always answer no corresponds to the null hypothesis H 0 : δ 1 = 0, • respondents always answer yes corresponds to the null hypothesis H 0 : δ 2 = 0. Finally, an estimation of the probability ξ would be useful. For instance, we could conclude that when the first bid belongs to the range of acceptable values, respondents answer yes in (100 ξ) % of cases. If the first bids are drawn randomly from a probability distribution, ξ can be rewritten ξ = P (µ 0 < b 1i < µ ξ ) P (µ 0 < b 1i < µ 1 ) . (31) In addition, if the set of first bids are drawn from a uniform distribution by the surveyors, it can be estimated by ξ = δ1 /( δ1 -δ2 ). Application Since independent variables other than the bid are not needed to estimate the Range model, we can use data from previously published papers on this topic. In this application, we use data from the well-known Exxon Valdez contingent valuation survey. 3 The willingness-to-pay question asked how the respondent would vote on a plan to prevent another oil spill similar in magnitude to the Exxon Valdez spill. Details about the Exxon Valdez oil spill and the contingent valuation survey can be found in [START_REF] Carson | Contingent valuation and lost passive use: Damages from the Exxon Valdez oil spill[END_REF] Results With the assumption that the distribution of WTP is lognormal, results in Alberini et al. show evidence of a downward shift. Here, we consider the more general model given in ( 25) from which the Double-bounded, Shift and Range models are special cases. Estimation results are given in Table 1. We use the same model as in Alberini et al.: there are no covariates and the distribution of the WTP is assumed lognormal (θ = 0 and b ij are replaced by log b ij in ( 25)). The mean of log WTP is given by α = -α 1 γ and the median of WTP is given by exp(α). Estimation results of the Single-bounded model are obtained from a probit model. Estimation results obtained from a bivariate probit model with no restrictions in (25) are presented in column M ; the Double-bounded model is obtained with δ 1 = δ 2 = 0 ; the Shift model is obtained with δ 1 = δ 2 and the Range yes model is obtained with δ 2 = 0. 
From Table 1, we can see that the estimates of the mean of log WTP in the Single-bounded and Double-bounded models are very different (3.727 vs. 3.080). Such inconsistent results lead us to consider the Shift model to control for such effects. It is clear that the estimates of the mean of log WTP in the Single-bounded model and in the Shift model are very close (3.727 vs. 3.754), and that the Double-bounded model does not fit the data as well as the Shift model. Indeed, we reject the null hypothesis δ1 = 0 from a likelihood-ratio test (LR = 84.68 and P < 0.0001).4 To go further, we consider estimation results obtained from the model defined in (25) with no restrictions (column M). On the one hand, we reject the null hypothesis δ1 = δ2 from a likelihood-ratio test (LR = 4.08 and P = 0.043). It suggests that the Shift model does not fit the data as well as model M. On the other hand, we cannot reject the null hypothesis δ2 = 0 (LR = 0.26 and P = 0.61). This leads us to select the Range yes model hereafter. The estimated values of the parameters δ1 and δ2 allow us to interpret the model as a Range model (δ1 ≤ 0, δ2 = 0). Respondents are unsure of their willingness-to-pay in an interval; they answer yes if the initial bid offer belongs to their interval of acceptable values. We compute an interval of the median WTP: [exp(α + δ1); exp(α + δ2)] = [9.45; 44.21]. (32) This interval suggests that, if the respondents answer no when the initial bid belongs to their range of acceptable values, the median WTP is equal to 9.45; if the respondents answer yes when the initial bid belongs to their range of acceptable values, the median WTP is equal to 44.21 (see Section 4).

Main findings

From these empirical results, we select the Range yes model, with an interval of the median WTP [9.45; 44.21]. Previous researchers have also found that, when uncertain, individuals tend to say yes [START_REF] Ready | How do respondents with uncertain willingness to pay answer contingent valuation questions?[END_REF]. New with the Range model is the fact that no additional question such as "how certain are you of your answer?" is required. From our results, several conclusions can be drawn: 1. From the Range yes model, we cannot reject the null hypothesis ρ = 1 (see note 5). This result has an important implication. It suggests that the underlying decision process defined in the Range model is supported by the data. Confronted with an initial bid, respondents select a value, then they answer both the first and the second questions according to the same value (see Sections 2 and 3.2). This is in sharp contrast to the existing literature that explains anomalies by the fact that respondents use two different values to answer the first and follow-up questions.6 The Range model supports the view that anomalies can be explained by a specific respondent behavior prior to the first question, rather than by a change between the first and the second questions. 2. As long as the Range yes model is selected, the Single-bounded model is expected to elicit the upper bound of the individual's range of acceptable WTP values. Indeed, in the case of Exxon Valdez, the estimated median WTP is equal to exp(α) = 41.55. This value is very close to the upper bound provided by the interval of the median WTP in the Range yes model, i.e. 44.21. The discrete choice format is then likely to overestimate means or medians compared to other survey formats.
This confirms previous research showing that, with the same underlying assumptions, the discrete choice format leads to a systematically higher estimated mean WTP than the open-ended format [START_REF] Green | Referendum contingent valuation, anchoring, and willingness to pay for public goods[END_REF] or the payment card format [START_REF] Ready | How do respondents with uncertain willingness to pay answer contingent valuation questions?[END_REF]. 3. Existing results suggest that anomalies occur in ascending sequences only (i.e. after a yes to the initial bid).7 DeShazo (2002) offers a prospect-theory explanation, interpreting the first bid as playing the role of a reference point. The Range model offers an alternative explanation: anomalies come from the fact that, when uncertain, respondents tend to answer yes. Indeed, if the bid belongs to his range of acceptable values, a respondent answers yes to the first question and necessarily no to the second question (see Section 2). This specific behavior occurs in ascending sequences only. Such asymmetry can be viewed from the estimation of the model too, since the Range model can be estimated as a model with two different shift parameters in ascending/descending sequences (see Section 3.1). All in all, based on Exxon Valdez data, the Range model: (1) confirms existing findings on the effect of respondent uncertainty; (2) offers an alternative explanation to anomalies in CV surveys.

Conclusion

In this article, we develop a model that allows us to deal with respondent uncertainty and starting-point bias in the same framework. This model is based on the principle of coherent arbitrariness, put forward by Ariely et al. (2003b). It allows for respondent uncertainty without having to rely on follow-up questions explicitly designed to measure the degree of that uncertainty (e.g., "How certain are you of your response?"). It provides an alternative interpretation of the fact that some of the responses to the second bid may be inconsistent with the responses to the first bid. This anomaly is explained by respondents' uncertainty, rather than by anomalies in respondent behavior. Using the well-known Exxon Valdez survey, our empirical results suggest that, when uncertain, respondents tend to answer yes.

Table 1: Exxon Valdez Oil Spill Survey: Random-effect models
Parameter   Single     Double           Shift        M            Range yes
                       (δ1 = δ2 = 0)    (δ1 = δ2)    (n.c.)       (δ2 = 0)
α           3.727      3.080            3.754        3.797        3.789
            (0.124)    (0.145)          (0.127)      (0.129)      (0.134)
σ           3.149      3.594            3.236        3.298        3.459
            (0.432)    (0.493)          (0.421)      (0.387)      (0.272)
δ1          -          -                -1.108       -1.424       -1.583
                                        (0.212)      (0.356)      (0.222)
δ2          -          -                -            -0.062       -
                                                     (0.114)
ρ           -          0.694            0.770        0.997        0.998
                       (0.047)          (0.045)      (0.010)      (0.014)
ℓ           -695.51    -1345.70         -1303.36     -1301.32     -1301.45
Note: standard errors in parentheses; n.c.: no constraints.
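To make the link with Section 4 concrete, the interval in equation (32) and the implied ξ can be recomputed from the estimates of Table 1 above. The snippet below uses the rounded published values and follows the sign conventions of equations (28)-(29) and (31), so it reproduces the reported interval only approximately.

```python
import numpy as np

# Rounded estimates from the selected Range yes model in Table 1
alpha, delta1, delta2 = 3.789, -1.583, 0.0

# Median WTP bounds implied by equations (28)-(29) for a lognormal WTP
median_low = np.exp(alpha + delta1)    # respondents answering 'no' when unsure
median_high = np.exp(alpha + delta2)   # respondents answering 'yes' when unsure
print(round(median_low, 2), round(median_high, 2))   # roughly [9.1; 44.2]

# Probability of a 'yes' when the bid falls inside the range, equation (31)
xi = delta1 / (delta1 - delta2)
print(xi)   # equals 1 here, consistent with the 'Range yes' interpretation
```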
A LR test is equal to twice the difference between the maximized value of the loglikelihood functions (given in the last line ℓ); it is asymptotically distributed as a Chi-squared distribution. Estimation results of the Range and of the Range yes models obtained by using the constraint ρ = 1 are not reported: they are similar to those obtained without imposing this constraint and the estimates of the loglikelihood functions are identical. [START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF][START_REF] Kanninen | Bias in discrete response contingent valuation[END_REF][START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF] [START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF][START_REF] Cooper | One and one-half bids for contingent valuation[END_REF][START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF]
32,328
[ "843051", "1331865" ]
[ "15080", "45168" ]
sic_01759274
en
[ "shs" ]
2024/03/05 22:32:10
2012
https://archivesic.ccsd.cnrs.fr/sic_01759274/file/new%20musical%20organology%20-%20videogames.pdf
Hervé Zénouda email: [email protected] NEW MUSICAL ORGANOLOGY: THE AUDIO GAMES THE QUESTION OF "A-MUSICOLOGICAL" INTERFACES Keywords: Audio-games, video games, computer generated music, gameplay, interactivity, synesthesia, sound interfaces, relationships image/sound, audiovisual music This article aims to shed light on a new and emerging creative field: "Audio Games," a crossroad between video games and computer music. Today, a plethora of tiny applications, which propose entertaining audio-visual experiences with a preponderant sound dimension, are available for game consoles, computers, and mobile phones. These experiences represent a new universe where the gameplay of video games is applied to musical composition, hence creating new links between these two fields. In proposing to manipulate what we refer to as "a-musicological" 1 representations (i.e. using symbols not normally associated with traditional musicology to manipulate and produce sound), this creative aspect raises new questions about representations of sound and musical structures and requires new instrumental gestures and approaches to composing music which will be examined. Furthermore, these objects play a role in the rise of a new amateur profile, already put forth by authors like Vilèm Flusser (Flusser, 1996), with regards to photography. After having defined the characteristics and the limits of this field and highlighting a few of the historical milestones (abstract cinema, gaming theory in music, graphic actions and scores), we will study a few examples of musical games and propose areas for further research to devise analytical tools for these new objects. 1/ Introduction In this article, we would like to shed some light on an emerging creative field : "Audio Games," a crossroad between video games and computer music. Today, a plethora of tiny applications, which propose entertaining audio-visual experiences with a preponderant sound dimension, are available not only for game consoles and computers, but mobile phones as well. These experiences represent a new universe where the notion of gameplay derived from video games can facilitate the link with musical composition. In proposing to manipulate what we refer to as "a-musicological"2 representations (i.e. using symbols not normally associated with traditional musicology to manipulate and produce sound), this creative aspect raises new questions about representations of sound and musical structures and requires new instrumental gestures and approaches to composing music which will be examined. In an original way, he thus questions the issue of usability with respect to creativity. Indeed, a new organology is emerging, profoundly renewing musical relationships (abstract synesthesiatic representations, 3D manipulations, simulation of physical events like rebounds or elasticity…). After having defined the characteristics and the limits of this field and highlighting a few of the historical milestones (abstract cinema, gaming theory in music, graphic actions and scores), we will study a few examples of musical games and propose areas for further research to devise analytical tools for these new objects. 
2/ Definition A quick search on the Internet gives us many terms describing this new field and its multiple sub-genders : audio-games, music video-games, music memory-games, rhythm-games, pitch-games, volume-games, eidetic music-games, generative music-games… To avoid contention, we prefer to accept the broadest of terms which takes into account all of the situations where games and musical production are combined with the main particularity of the field being the layering of these two universes with points of contact, but also different cognitive objectives. Thus, it is useful to set aside, from the outset, any and all ambiguity by specifying that the object of our research does not directly deal with the field of sound in video games (a field which deals primarily with interactive sound design), but rather a new organology which uses fun video activities to produce sound and extensively uses digital interactivity, creating a new relationship with images. The « audio game », with respect to the dimension of sound, is rooted in a realm between a musical instrument, a music box, and musical automatons. Moreover, the question of balance between the freedoms and constraints of players is crucial to the success of a entertaining sound-based video. Therefore, the richer and more complex the rules are, the more musically interesting the game is, even if it requires a lot of time to learn how to use. On the contrary, the more the constraints are preset, the easier it is for the user to use, managing almost immediately, but with the undeniable risk of producing either mediocre or repetitive sound. A second important characteristic of this new cultural object that we are trying to define is the development of a new relation between image and sound. Indeed, the overlapping of two universes (games and sound production) is materializing in the interface and interactive manipulation which often do not refer to musicology, but rather to rules specific to games (fighting opponents, manipulating objects…). It is above all, and in this case, the use of interfaces that we refer to as "amusicological", as does Francis Rousseau and Alain Bonardi, which appear to be the main originality of this new organology. 3/ Gameplay and Instrumental games A link exists between video games and sound in the notion of "playing", a notion which deserves our attention. In the world of video games, the terms playability and gameplay are the most often used to describe this dimension. For some (and in Canada in particular), the two terms are considered as synonyms and "jouabilité" is used as the French Translation of "game-play". For others, the two terms refer to different realities, despite being very closely related. The first is considered as a subgroup of the second. The first term refers to the pleasure of playing while the second term represents the principles of the game. Both notions being similar (and each directly depending on the other), we prefer to use the French term of "jouabilité" to describe all of the elements in the ludic video experience: the rules, interface, and maneuverability as well as player appropriation (the pleasure procured and the ease of appropriation). What does the playability of video games and instrumental games have in common? If we define playability as the characteristic of that which is playable, which can be played, we can therefore find a musical link in the notion of musicality, that which is "well composed" for a given instrument and which "sounds" good. 
We can then speak of "playability" for a piano score in reference to the pleasure procured by the pianist when playing this composition and his or her ability to incorporate, personify, express his or her interpretation. As regards the aspect of applying the rules of the game, there is no direct equivalent of the player that can be found in the traditional roles associated with music : neither a true composer, nor solely an instrumentalist, nor simple listener. Yet, one composer has remarkably shifted how these roles were assigned and enabled the development of a ludic dimension. Indeed, by the end of the 1950s, John Cage (1912-1992) made headway in his research on chance and indetermination by transforming the score into a game with rules that the musician had to incorporate and above all personalize. The score thus changed from notations of musical parameters to that of actions. For example, in "Variations I" (1958), "Variations II" (1961) or "Cartridge Music" (1960), the notes were written as relative values (the size of the points reflect the intensity and length) on transparent sheets the musician placed over other transparent sheets containing the rules of the game (geometric shapes, a clock face…). The overlapping of sheets containing information on different levels (notes and rules) produces a score which must be interpreted. As shown below, the musician must implement a plan to produce the musical information before playing a single note, a "task" normally reserved for composers themselves. This new role of instrumentalists embodies the two English terms which describe games: "game" and "play". The term "game" refers to the notion of structure (a system of rules which the player must respect to successfully carry out an action), while the term "play" denotes a ludic attitude the player adopts ("actions of the player") (Genvo, 2002). In modifying the role of the musician, John Cage considerably increased the amount of freedom and creativity of the musician, two fundamental elements of a ludic attitude. The link between composition and the theory of games can be more explicitly found in the famous performance "Reunion" (1968), featuring Marcel Duchamp playing a game of chess where each move produces musical events more or less randomly. The game theory can be found among other composers such as Iannis Xenakis ("Duel" (1959), "Stratégie" (1962) and "Linaia Agon" (1972)), Mauricio Kagel ("Match" (1964) inspired by a game of tennis) or John Zorn ("Cobra" (1984)). If these approaches involve a dialogue between two specialists (the composer and the musician), the ludic video objects are particular in nature in that they mainly address a more general amateur public. This mainstream ludic attitude embodies the work of Vilèm Flusser (Flusser, 1996), on photography, writings which particularly shed light on the particular nature of our research: Flusser in fact defines the camera as a structurally complex toy but functionally simple, enabling you to take pictures without any particular skills, making this activity accessible to a vast amateur public : "The camera is not a tool, but a toy and the photographer is not a worker, but a player: not "homo faber" but "homo ludens". 3 And furthermore: "While the camera is based on complex scientific and technical principles, it is very easy to operate. It's a structurally complex toy, but functionally simple. 
In that way, it is the contrary of a game of chess, which is structurally simple, but functionally complex: Its rules are simple, but it is very difficult to play chess well. A person who operates a camera can very well take excellent pictures without necessarily having the slightest inclination of the complexity of the processes that take place when he or she pushes the button". 4 This transfer of specialized skills (the photographer or the composer) to an amateur appears to be the founding paradigm of this new class of cultural objects. A paradigm which is part of a broader movement linked to new information technologies. Jean Louis Weissberg (Weissberg, 2001), in his article on the emerging amateur figure clearly shows new intermediary spaces which appear between production and reception, between professionals and amateurs. The author puts forth the notion of graduated skills and highlights the political dimension of these new autonomy improving postures. In transferring this capacity, the user-friendliness of the interfaces is clearly essential. If it facilitates the transfer of skills and endeavors to make complex operations more intuitive, we argue that in the field application of musical composition we are studying, the question of user-friendliness is directly linked to the key issue of the representation of sound and musical structures in its ability to make concrete and visible that which is abstract and sound based. 4/ A few historical milestones We can comprehend the recent emergence of "audio-games" by linking it to the closing gap between graphic arts and musical arts which arose toward the end of the nineteenth century and experienced rapid growth throughout the twentieth century (Bosseur, 1998). The scale of such historical dynamics exceeding the framework of this article (Zénouda, 2008), we shall very briefly introduce three : -"Audiovisual music": The expression of audiovisual music progressed in the beginning of the 1940s5 thanks to John Withney6 , who defined the characteristics by studying the question of tempo in relation to specific films (combinations of time and space as well as shape and color). Nevertheless, audiovisual music existed long before, as early as the 1920s, with directors like Oskar Fishinger7 or Len Lye8 and is a part of the circle of influence regarding questions of connections between the arts and synesthesia so dear to abstract painters like Kandinsky or Paul Klee. The will of these directors was to find a specific film language "free" of any narration and forms inspired by novels or plays. In doing so, filmmakers strayed from cinema's primary function of recording reality by favoring to use non figurative forms and searching for construction models for their work in music which is the most abstract of arts. With duration and rhythm as a common denominator, these filmmakers sought to produce "completely" synesthetic works playing with the subtle connections between different senses. In the 1970s, the composer Iannis Xenakis, also interested in the connection between image and audio, proposed a graphic interface for the composition of music dubbed UPIC 10 11 . Thanks to a graphic palette, the composer can draw waveforms and amplitude envelopes and control both the structure of the sound and the general form of the work. More recently, Golan Levin 12 , a researcher at MIT 13 , a series of instruments 14 which closely combined image, sound, and gestures. 
A developer of graphic palettes to which he adds real-time computer generated sounds, he associates movement parameters, like the direction and speed of movement or the pressure of an electronic pencil, with sound parameters, like timbre, pitch, panning, and with graphic parameters, like color, the thickness of a stroke or direction. For Golan Levin, the notions of interactivity and generativity are closely linked: Images and sound are produced in real time as the result of a user's movement, hence creating a "generative and interactive malleable audiovisual substance". 15 In France, the company Blue Yeti 16 proposes a dual screen musical drawing "Grapholine" system based on the transformation of sound samples (via a standard but also customizable sound database) by granular synthesis and offers a large range of manipulations and relations between image and sound (speed of the stroke, transparency, luminosity, color, pencil pressure…). At the UTC de Compiègne, two students 17 in their final year of computer science studies, proposed a virtual reality application (Immersive Music Painter, 2010) 18 in which the user, with his or her gestures, draws curves of different colors and thinknesses to which sound or melodies of controllable pitch, panning, and volume are associated. These three fields are a part of the aspects of representing sound and interaction with sound and concerns the specialized experimental cinema audience for the first field, and composers and musicians for the latter fields. "Audio-games" add a ludic dimension to these two aspects and in doing so, the question of the user-friendliness and the user-friendliness of the interface; thus making this type of application available to a larger amateur audience, regardless of their knowledge of music. 5/ Typology of "audio-games" We aim to develop an extremely simple and general typology that incorporates all types of "audio-games" as well as all types of music produced regardless of the style and level user expertis. In doing so, we concentrate on the question of representation (set or dynamic) of both types of sound : the basic elements (paradigmatic axis) and elements created through manipulations (playability) proposed via games (syntagmatic axis). -The vertical paradigmatic axis pertains to the representation of basic sound elements that the audiogame provides the player. The graphic representation of this basic element range from the classic sol-fa to graphical creations without any direct link to sound in using abstract representations related to synesthesia (and any type of connection with different sound parameters). Note that the distinction between an elementary sound and one that is created depends on the type of game and interaction proposed. Thus, what is considered as composed in one audio game may be considered as elementary in another. The items on this axis may therefore be modified depending on the audio-game that is studied and placed according to their distance or proximity to the traditional sol-fa. - The horizontal syntagmatic axis pertains to the representation of sound objects created through the game's playability. It regards the representations of second level sound creations. These representations may be set and close to classic musical manipulations (representations of repetitions, transposition, reversals…) or dynamic of simulations of physical phenomena like rebounding, explosions, accumulations) or describe the dynamics of "musicological" or "a-musicological" playability. 
The first can find direct musical analogies (moving around within space, mirroring…); the second has no musical analogy whatsoever and implies arbitrary and ad-hoc relations linked to the diegesis of the game (fighting games…). As for the paradigmatic axis, the items on the syntagmatic axis can be modified depending on the audio-game studied and will be positioned according to their distance or proximity to the traditional sol-fa.
6/ A brief presentation of a few "audio-games"
-"Aura" (on iPhone and iPad) 19 [2,1] 20 , very close to the aesthetics of works by Oskar Fishinger such as "Allegretto" (1936) or Jordan Belson's "Allures" (1961), allows users to create their own melodies over a computer-generated musical background. Always in harmony with the background music, the user can select the notes, timbre and volume using simple colored shapes on the screen. The audio-visual creation process is fluid and uninterrupted. The musical and visual background changes simply by shaking the iPhone. The user produces music and abstract shapes, which are presented as an interactive generator of background music (and images) available at the touch of a button. An application like "SynthPond" 21 (iPhone and iPad) [2,1] allows you to place different shapes on circles of varying diameters. Notes are played when they collide with waves generated by the user or with other nodes. Contrary to "Aura", "SynthPond" produces repetitive musical loops, making it possible to visually anticipate the output by following the movement of the nodes as they get closer to different junctions. The player can select the pitch, timbre, and tempo of the loops, which thus allows you to produce potentially complex melodies.
-"Rez" (Tetsuya Mizuguchi, 2001, Sega) 22 and "Child of Eden" (Tetsuya Mizuguchi, 2010, Xbox360) 23 [2,4] are more similar to traditional games in that they preserve a significant sound dimension. "Rez" is like a fight in which the musical elements are dynamically created by the player's acts. Each time the player's or an enemy's vessel shoots, a sound is produced which adapts in rhythm and in harmony. The game experience is therefore quite original: the ludic mission (to destroy the enemy vessels) is the source of the images and sounds that are produced. The arguments of the designer, referring to Kandinsky and synaesthesia, link this game to those previously mentioned. To further the quest for synaesthesia, multiple simultaneous sensations are brought on by the vibration of the joystick when the images and sound synchronise. In "Child of Eden", Tetsuya Mizuguchi continues to improve playability by generating sound using state-of-the-art technological innovations (HD images, 5.1 surround sound, Kinect motion sensors which enable you to play without a mouse or joystick). Motion detectors enable direct interaction with images as well as new types of interactivity (for example, clapping your hands enables you to change weapons during the game). The visual aesthetics are far more dazzling and psychedelic than in "Rez", and the ludic mission is to eradicate a virus which jeopardizes the project to recreate a human within Eden (the archive of humanity). Each level corresponds to a step in this archive of humanity, and the last stage incorporates the personal contributions of players with pictures of their happiest moments.
-"Metris" 24 (Mark Havryliv, Terumi Narushima, Java Applet) [2,3] adds a musical creation dimension to the famous game "Tetris". A synthetic bell sound is generated in real time each time a block is moved.
When the player moves or rotates blocks, the pitch changes. The way in which the blocks fit together produces different chords. All of the different possibilities enable players to produce sophisticated micro-tonal compositions without losing the interest of the original "Tetris" game.
-"Pasy02" 25 (iPhone and iPad) [2,2] is laid out as a grid that you can reshape and stretch as much as you would like and which comes back to normal with an elastic effect. These physical modifications influence the pitch, tempo and timbre of the musical loop played by a synthesizer whose waveforms (sine, triangle, square, sawtooth) can be chosen by the user to produce diverse melodies. This simple application offers an original and live production of sound.
-"Elektro-Plankton" 26 (Toshio Iwai, Nintendo) is an emblematic game which offers ten different plankton-themed interfaces with various musical situations enhanced with ludic graphics. Certain interfaces emphasize the strong link between musical gestures and pictorial gestures. For instance, with "Tracys" [3,3], the player draws lines (straight or curved) that the plankton follow while playing piano notes in rhythm with the graphic shapes created. Others display a labyrinthine and rhizomic dimension of music (using a complete range of possible notes): with "Luminaria" [3,3], several plankton move around on a grid of nodes and links, each one following the arrows. The player can change the connections between the nodes and hence change the direction the plankton take. In doing so, the player modifies the notes that are played. Others use a device which is close to what could be referred to as a sound installation. The "Hanenbrows" [3,2], for example, are projected on the screen and produce musical notes when they bounce off leaves. In changing the angle of the leaves, the player can influence the melodies that are produced. Each time a leaf is hit, it changes color and the sound it makes. A screen like "Nanocarps" [3,2] uses plankton, each with their own behaviour and direction. A sound is produced when the plankton hit a wave, and using the microphone allows the player to reorganize them. In the same way, the application "Volvoice" [3,1] uses the computer microphone to record sounds which can then be modified (pitch, speed, filter) as much as you would like by simply changing the shape of the plankton on the screen. Finally, the screen "Sun-Animalcule" [3,1] lets you plant plankton anywhere on the screen. The musical note depends on where you plant it. The light produced by a day/night cycle hatches the seeds and produces the corresponding musical note. As the plankton embryos get bigger, their musical behaviour changes.
-"Flying puppet" (Nicolas Clauss, painter) 27 [3,4]: Nicolas Clauss' website proposes numerous "interactive screens" where the visual aesthetics is clearly figurative, seeking a multi-sensorial experience without any objectives or particular missions. While sound is extremely important, the images never reflect any logical representation of the sound. The two modalities have creative autonomy and produce new sensorial experiences. For example, the triptych Legato, Cellos and Moon tribe uses dancing stick figures. Using the same graphical elements, each of the three productions makes use of a particular aspect of music: Legato uses audio mixing of melodic lines on a particular theme in a loop, Cellos aligns different preset melodies, Moon tribe allows you to synchronize rhythmic loops. The very structure of the interaction is copied or transposed onto aspects of music such as harmony, counterpoint, melodic structure, or the synchronization of rhythms. The graphics possess their own aesthetic coherence and arbitrary sounds. In the same way, each visual and audio mode possesses its own tempo. The tempo of the graphics and that of the music do not always perfectly overlap. They produce visual and audio temporal loop delays. The gesture does not merge the two modes, but coordinates them. It's the junction, the border between the two worlds.
If this relationship leads to sensory fusion, it is only elusive, unambiguous, and subject to interpretations that vary greatly from one user to another. It is not directly part of the technical process, but is rather accomplished in the course of a spontaneous gesture, as a form of user appropriation. It is impossible to say whether our gesture is guided by our interest in producing sound or pleasant visual graphics. We imperceptibly emphasize one or the other, thus combining the three dimensions: visual, audio, and gestural. These examples fit into our typology as follows: « Aura » [2,1], « SynthPond » [2,1], « Rez » [2,4], « Child of Eden » [2,4], « Metris » [2,3], « Pasy02 » [2,2], « Tracys » [3,3], « Luminaria » [3,3], « Hanenbrows » [3,2], « Nanocarps » [3,2], « Volvoice » [3,1], « Sun-Animalcule » [3,1], « Flying puppet » [3,4].
7/ Towards an analysis grid for "audio-games"
In addition to a classification table of these new objects, we propose some ideas for developing an analysis grid for "audio-games":
-The distinction between relations of image and sound: Three modes interact and slide from one to another within the same ludic audiovisual production. Sound for the overall benefit of images (derived from traditional sound and audiovisual illustration), images for the overall benefit of sound (derived from computer-based music where images aim to represent and manipulate sound), and images and sound of equal importance (more specific to "audio-games" and hypermedia), producing perceptible fusion effects between the two modes.
-The distinction between sound-producing modes: Some sounds are produced by direct intervention of the user, for example when the user moves the mouse, clicks on or goes over an icon. Other sounds are automatically generated by the system without any user intervention, such as automatic background sounds. Yet others are generated automatically by the system but linked to user intervention, like a particular path in the application or an amount of time spent on an interaction. These different means of producing sound have a tendency to interfere with each other and lead to a certain degree of confusion. Thus, studying precisely how the author manages the untimely audio mixing of the different layers of sound is essential for a detailed analysis of the relationships between images and sound in an interactive situation.
-Taking the graphic/audio/gestural triptych into account: this is expressed with the notion of mapping (transcribing information received in one register, the movement or graphic manipulation, into another register, here the musical one). Several types of mapping can generally be distinguished: the relationship in which one parameter of a field corresponds to one parameter of another (one-to-one), the situation in which one parameter of a field is associated with several parameters of the other (one-to-many), and finally the relationship in which several parameters of one field are associated with one parameter of the other (many-to-one). In these multisensory associations, the choice of which parameters of each modality are associated, and the manner in which they are associated, is essential. Indeed, we note which audio and visual dimensions are affected by the interaction and what perceptible effects they produce.
Regarding the sound: the note (pitch, length, intensity), the timbre (envelope, frequency…), the rules for manipulating the musical structure, the audio mixing of several tracks, the general parameters like tempo… and to which graphic parameters these are assigned (color, shape, opacity, sharpness, frame, level of iconicity…). What sensory effects are produced by the multiple combinations of image/sound/gesture?
-The analysis of different cognitive objectives: "Audio-games" present specific situations where a user's gesture controls and produces images and sound at the same time while taking part in another logic, the game itself (for example, destroying space vessels in "Rez" or placing obstacles to manage how balls rebound in "Elektroplankton"). We have shown that this specific situation produces complex perceptible effects, where the effects of synchronisation and fusion of images and sound are enhanced by gestures and different cognitive stakes. The main difficulty is associating these two levels (a game and musical production) in a meaningful way (neither a confrontational relationship, nor one that is too distant or anecdotal). Ideally, the ludic layer would give rise to new meaningful instrumental gestures in sound production and ultimately bring innovation to music. To obtain optimal results, the game's rules and missions must therefore clarify the musical structures when played, while keeping their impact and coherence as a game. We can see here the difficulty of achieving this desired balance.
-These new objects thus stress the necessity of developing multimodal semiotic approaches to analysis which simultaneously take into account what can be seen and heard as well as gestures. A few tools might help us to make headway:
-In 1961, Abraham Moles [START_REF] Moles | Théorie de l'information et perception esthétique[END_REF] proposed a scale of iconicity with thirteen levels for ranking images according to how closely they resemble the real object they represent. This progressive axis went from the most concrete and figurative representations to the most abstract representations, like natural languages or artificial languages (mathematics, etc.). This scale of iconicity can be applied to sound by developing two axes: one that goes from concrete to musical, and one that goes from recorded to simulated (computer-generated sound).
-Inspired by Peirce's sign theory, the composer François Bayle [START_REF] Bayle | Musique acousmatique : propositions … positions[END_REF] defines three properties of sound linked to the attention of the listener: the "icon" (isomorphic image or im-sound), where the object is indicated by all of its characteristics; the "index" (indexed image or di-sound), where certain graphic traits denote the object; and the "symbol" (a metaphor or me-sound), where the image represents the object through associative properties. These three kinds of signs give rise to three levels of hearing: one in which sounds are heard as corresponding to directly identifiable referents of reality (quality: quali-sign); one in which the relationship is more abstract and the sound becomes a signifying element of something, a specialized listening in which sounds are heard as having been transformed (filtering, transposition, insert…), as indications of musical composition (singularity: syn-sign); and finally, one in which the sign is governed by a known law which is independent from the sign itself (rebounds, oscillation…), a listening oriented towards a sense of organisation and formal law (stability: legi-sign).
-Conceived by a team of musical researchers in Marseilles, an Unité Sémiotique Temporelle (UST) is "a musical segment which, even out of context, possesses a precise temporal signification thanks to its morphological structure" 30 . Nineteen USTs were identified and labelled with a word or a literary expression which most directly describes the manner in which the energy of sound is deployed over time [START_REF] Chute | Par vagues, Qui avance, Qui tourne, Qui veut démarrer, Sans direction par divergence d'information[END_REF] , most often with the help of a "morphological appellation", a qualifier which often refers to something extramusical. 32 This extramusical reference is a first step towards a generalization of these labels to other modalities. In this way, we can emphasize that all of these labels refer to an energetic or spatial movement, which naturally connects them to gestures and images.
8/ To conclude: Towards what new instrumental and compositional gestures?
It is currently difficult to foresee new instrumental and compositional gestures resulting from these types of interfaces. Nevertheless, we can note that they are part of a general movement which is creating completely new situations of musical interaction: real-time musical devices are transforming musical gestures by increasing or reshaping them, which results in separating gestures from the sound produced. Professional computer-generated sound software 33 more and more frequently adds network representations, physical simulations (rebounds, sound clouds…), shapes moving through labyrinths of notes, and ludic approaches to its traditional representations (treble and bass clef scores, MIDI note grids, player piano rolls, displayed waveforms…) (Vinet, 1999). « Audio-games » play a role in renewing musical gestures as well as the representation of sounds and their musical structures. They make playing and composing music easier for a broad audience, regardless of their knowledge of music. They make the musical gestures of musicians on stage more visible and comprehensible. Furthermore, they are likely to make the relationship between audiences and composers more explicit thanks to interactivity.
RECOMMENDED LITERATURE:
-Cage J. (1976), Pour les oiseaux, Belfond.
-Bonardi A., Rousseau F. (2003), « Music-ripping » : des pratiques qui provoquent la musicologie, ICHIM 03.
-Bosseur J. Y. (1998), Musique et arts graphiques, Interactions au XXième siècle, Minerve, Paris.
-Bosseur J. Y. (2005), Du son au signe, Alternatives, Paris.
-Collectif (2004), Jouable. Art, jeu et interactivité, Workshop/Colloque, Haute école d'arts appliqués HES, Ecole Nationale des Arts Décoratifs, Ciren, Université Paris 8 -Genève, Centre pour l'image contemporaine.
-Flusser V. (1996), Pour une philosophie de la photographie, Circé, Paris.
-Genvo S. (2002), « Le game design de jeux vidéo : quels types de narration ? » in « Transmédialité de la narration vidéoludique : quels outils d'analyse ? », Comparaison, Peter Lang, 2002, p.103-112.
-Havryliv M. (2005), « Playing with Audio : The Relationship between music and games », Master of Arts, University of Wollongong.
-Kandinsky V. (1969), Du spirituel dans l'art, et dans la peinture en particulier, Denoël-Gonthier, Paris.
-Manovitch L. (2001), The language of new media, MIT Press, Cambridge.
-Natkin S. (2004), Jeux vidéo et médias du XXIe siècle : quels modèles pour les nouveaux loisirs numériques, Paris, Vuibert.
-Stranska L. (2001), Les partitions graphiques dans le contexte de la création musicale Tchèque et Slovaque de la seconde moitié du vingtième siècle, Thèse de Musicologie, Paris IV.
-Vinet H. (1999), Concepts d'interfaces graphiques pour la production musicale et sonore, in Interfaces homme-machine et création musicale, Hermes, Paris.
-Weissberg J.L. (2001), L'amateur, l'émergence d'une nouvelle figure politique, http://multitudes.samizdat.net/L-amateuremergence-d-une-figure
-Zénouda H. (2008), Les images et les sons dans les hypermédias artistiques contemporains : de la correspondance à la fusion, L'Harmattan, Paris.
RECOMMENDED WEBSITES:
-http://www.centerforvisualmusic.org/
-Allures : http://www.mediafire.com/?fy920bhvu6q6b1v
Figure 1: John Cage (1960), Cartridge Music
Figure 2: John Cage (1968), Reunion (featuring Marcel Duchamp in Toronto)
Figure 3: Jordan Belson (1961), Allures (http://www.mediafire.com/?fy920bhvu6q6b1v)
Figure 4: Cornelius Cardew (1963-1967), Treatise
Figure 5: Typology of « Audio-games »
Figure 6: « Aura »
Figure 7: « Synthpond »
Figure 8: « Rez »
Figure 9: « Child of Eden »
Figure 10: « Pasy02 »
Figure 11: « ElectroPlankton »
Figure 12: « Moon Tribe »
Figure 13: Typology of « Audio-games »
10 Unité Polyagogique Informatique du CEMAMu (Centre d'Etudes de Mathématiques et Automatique Musicales)
11 http://www.youtube.com/watch?v=yztoaNakKok
12 http://acg.media.mit.edu/people/golan/
13 Massachusetts Institute of Technology (USA)
14 « Aurora » (1999), « Floo » (1999), « Yellowtail » (1999), « Loom » (1999), « Warbo » (2000)
15 « an inexhaustible audiovisual substance which is created and manipulated through gestural mark-making », Golan Levin, Painterly Interfaces for Audiovisual Performance, B.S. Art and Design, [LEVIN 1994], p.19.
16 http://www.blueyeti.fr/Grapholine.html
17 Camille Barot and Kevin Carpentier.
18 http://www.utc.fr/imp/
-Golan Levin : http://acg.media.mit.edu/people/golan/
-Blue Yeti : http://www.blueyeti.fr/Grapholine.html
-Aura : http://www.youtube.com/watch?v=rb-9AWP9RXw&feature=related
-Synthpond : http://www.youtube.com/watch?v=mN4Rig_A8lc&feature=related
-REZ : http://www.youtube.com/watch?v=2a1qsp9hXMw
-Child Of Eden : http://www.youtube.com/watch?v=xuYWLYjOa_0&feature=fvst
-Trope : http://www.youtube.com/watch?v=dlgV0X_GMPw
-Pasy02 : http://www.youtube.com/watch?v=JmqdvxLpj6g&feature=related
-Sonic Wire : http://www.youtube.com/watch?v=ji4VHWTk8TQ&feature=related
-Electroplankton : http://www.youtube.com/watch?v=aPkPGcANAIg
-Audio table : http://www.youtube.com/watch?v=vHvH-nWH3QM
-Nicolas Clauss : http://www.flyingpuppet.com
-Cubase : http://www.steinberg.fr/fr/produits/cubase/start.html
-Nodal : http://www.csse.monash.edu.au/~cema/nodal/
We borrow the term « a-musicological » from Francis Rousseau and Alain Bonardi (Bonardi, Rousseau, 2003).
Referenced above (p.35)
Referenced above (p.78)
Five film exercices (1943-1944) (http://www.my-os.net/blog/index.php?2006/06/20/330-john-whitney)
Withney J. (1980), Digital harmony on the complementary of music and visual arts, Bytes Books, New Hampshire.
« Studie Nr 7. Poème visuel » (1929-1930), « Studie Nr » (1931) …
8 « A Colour Box » (1935) …
John Cage but also Earle Brown, Pierre Boulez, André Boucourechliev among others….
http://www.youtube.com/watch?v=rb-9AWP9RXw&feature=related
[2,1] = 2 on the paradigmatic axis and 1 on the syntagmatic axis
http://www.youtube.com/watch?v=mN4Rig_A8lc&feature=related
http://www.youtube.com/watch?v=2a1qsp9hXMw
http://www.youtube.com/watch?v=xuYWLYjOa_0&feature=fvst
Havryliv Mark, Narushima Terumi, « Metris: a game environment for music performance », http://ro.uow.edu.au/era/313/
http://www.youtube.com/watch?v=JmqdvxLpj6g&feature=related
http://www.youtube.com/watch?v=aPkPGcANAIg
http://www.flyingpuppet.com/
43,612
[ "18681" ]
[ "6198" ]
01759358
en
[ "shs" ]
2024/03/05 22:32:10
2018
https://shs.hal.science/halshs-01759358/file/CivilServantsPrivateSector.pdf
Anne Boring, Claudine Desrieux, Romain Espinosa
Aspiring top civil servants' distrust in the private sector *
Keywords: Public service motivation, private sector motivation, career choices, civil servants.
Méfiance envers le secteur privé des aspirants hauts fonctionnaires
Service public, Secteur privé, choix de carrière, fonction publique
In this paper, we assess the beliefs of aspiring top civil servants towards the private sector. We use a survey conducted in a French university known for training most of the future high-ranking civil servants and politicians, as well as students who will work in the private sector. Our results show that students aspiring to work in the public sector are more likely to distrust the private sector, to believe that conducting business is easy, and are less likely to see the benefits of public-private partnerships. They are also more likely to believe that private sector workers are self-interested. These results have strong implications for the level of regulation in France and for the cooperation between the public and private sectors.
Introduction
Government regulation of the economy is strongly and negatively correlated with trust [START_REF] Aghion | Regulation and distrust[END_REF]. More distrustful citizens tend to elect politicians who promote higher levels of regulation of the private sector. However, elected officials are not the only ones deciding on levels of regulation. Civil servants, especially high-ranking officials, also design and enforce regulations that directly impact the private sector. 1 Civil servants are different from elected officials, because they tend to remain in office when there are political changes. In many countries, and in France in particular, civil servants tend to spend their entire professional careers working in the public sector. The beliefs that these non-elected government officials have regarding the private sector are therefore likely to influence the regulations that apply to the private sector. If high-ranking civil servants have more negative beliefs regarding the private sector, they might promote higher levels of regulation than what the population would vote for through a democratic process. In this paper, we aim to document French civil servants' beliefs regarding the private sector. To do so, we analyze how students aspiring to high-ranking positions in the public sector differ in their trust in the private sector compared to other students. Analyzing the beliefs of these aspiring civil servants is important given their future influence on the regulation of the private sector. We more specifically study whether these potential top regulators show greater distrust towards the private sector. Our analysis relies on an original dataset that includes information collected from a survey addressed to students enrolled at Sciences Po, one of the most prestigious universities in France. Sciences Po is known to be the best educational program leading to the highest positions in the French higher administration. Since 2005, between 70% and 88% of the students admitted to the French National School of Administration (Ecole nationale d'administration, or ENA) are former Sciences Po students. While a large share of high-ranking civil servants graduated from this university, a majority of Sciences Po students choose careers in the private sector. We can therefore compare the beliefs of students who will be the future public sector leaders with the beliefs of students who will work in the private sector.
To measure students' beliefs, the survey includes questions that assess students' level of distrust in the private sector, how they perceive people who choose to work in the private sector, and their views regarding the private provision of public goods. We also collect information on their motivations to aim for a career in the public sector, and their trust in public institutions. To conduct our analysis, we rely on standard statistical methods (group comparison tests), ordered response models (ordered probit and logit), and principal component analyses. Using ideal point estimation techniques, we also compare students' trust toward the private sector, and their views on how easy they think it is for entrepreneurs or firms to conduct business. We find that students who aspire to work in the public sector: (i) tend to show more distrust in the private sector, (ii) believe that conducting business is relatively easy, (iii) are less likely to see benefits in public-private partnerships, and (iv) tend to trust public institutions more than the other students. Our results suggest that students who aspire to work in the public sector have a stronger taste for public regulation of economic activities. These results provide some evidence of a selection bias in career choices: higher administration workers may have more negative beliefs regarding the private sector. This difference may worsen over time, given that civil servants, once in office, have limited experience of the private sector and share their offices with civil servants who hold similar beliefs. These beliefs could also complicate collaborations between the public and the private sector, for instance in the provision of public services. The paper is structured as follows. Section 2 relates our work to the existing economics literature. Section 3 describes the survey and students' responses. Section 4 presents our data analyses. We conclude in section 5. Literature Our paper is related to three strands of the economics literature: on the interaction between trust and regulation, on civil servants' characteristics, and on intrinsic versus extrinsic motivation of workers. In this section, we explain how our analysis connects to each of these topics. First, our paper is inspired by the literature on the relationship between trust and regulation, as developed by [START_REF] Aghion | Regulation and distrust[END_REF], [START_REF] Cahuc | Civic Virtue and Labor Market Institutions[END_REF] and [START_REF] Carlin | Public trust, the law, and financial investment[END_REF]. The seminal work by [START_REF] Aghion | Regulation and distrust[END_REF] documents how government regulation is negatively correlated with trust: distrust creates public demand for regulation, and regulation discourages the formation of trust because it leads to more government ineffectiveness and corruption. They therefore show the existence of multiple equilibria, as beliefs shape institutions and institutions shape beliefs. Most of this literature relies on general measures of trust 2 . We adopt a different approach. Indeed, we precisely measure the level of trust towards the private sector of people aspiring to work in the public sector. Our measure is therefore more accurate to address our goal, which is to determine whether civil servants have more or less trust in the private sector than people working in the private sector. 
As discussed in the introduction, a higher level of civil servants' distrust in the private sector could lead to a risk of over-regulation of the private sector. Second, our paper is related to the literature on the preferences of civil servants. This literature shows that public sector workers are often more pro-social than private sector workers. For instance, [START_REF] Gregg | How important is prosocial behaviour in the delivery of public services[END_REF] use data from the British Household Panel Survey to show that individuals in the non-profit sector are more likely to donate their labor (measured by unpaid overtime), compared to those in the for-profit sector. Using the American General Social Surveys, [START_REF] Houston | Walking the Walk" of Public Service Motivation: Public Employees and Charitable Gifts of Time, Blood, and Money[END_REF] finds that government employees are more likely to volunteer for charity work and to donate blood than for-profit employees. However, he finds no difference between public service and private employees in terms of individual philanthropy. Analyzing data from the American National Election Study, [START_REF] Brewer | Building social capital: Civic attitudes and behavior of public servants[END_REF] shows that civil servants report higher participation in civic affairs. Using survey data from the German Socio-Economic Panel Study, [START_REF] Dur | Intrinsic Motivations of Public Sector Employees: Evidence for Germany[END_REF] also find that public sector employees are significantly more altruistic than observationally equivalent private sector employees, but that they are also lazier. Finally, using revealed preferences, [START_REF] Buurman | Public sector employees: Risk averse and altruistic?[END_REF] show that public sector employees have a stronger inclination to serve others, compared to employees from the private sector. 3 All these papers have post-employment choice settings. In our paper, we study beliefs regarding the private and public sectors when students are choosing their professional careers. Within the literature focusing on students, [START_REF] Carpenter | Public service motivation as a predictor of attraction to the public sector[END_REF] provide evidence showing that students with a strong public service orientation (evaluated by surveys addressed to American students) are more attracted to government jobs. Vandenabeele (2008) uses data on students enrolled in Flemish universities to show that students with high pro-social orientation have stronger preferences for prospective public employers.
2 In surveys, trust is most often measured with the "generalized trust" question. This question runs as follows: "Generally speaking, would you say that most people can be trusted, or that you can't be too careful when dealing with others?" Possible answers are either "Most people can be trusted" or "Need to be very careful." The same question is used in the European Social Survey, the General Social Survey, the World Values Survey, Latinobarómetro, and the Australian Community Survey. See [START_REF] Algan | Trust, Well-Being and Growth: New Evidence and Policy Implications[END_REF]
3 However, when tenure increases, this difference in pro-social inclinations disappears and even reverses later on. Their results also suggest that quite a few public sector employees do not contribute to charity because they feel that they have already been contributing enough to society through work for too small a paycheck.
Through experiments conducted on students selected to work for the private and public sectors in Indonesia, [START_REF] Banuri | Pro-social motivation, effort and the call to public service[END_REF] show that prospective entrants into the Indonesian Ministry of Finance exhibit higher levels of pro-social motivation than other students. Our paper adds to this literature by focusing on the beliefs students have regarding the private sector, instead of measuring pro-social behavior to explain students' selection of careers in the public or private sector. Our different approach is important, because individuals interested in working in the public sector may put more value on public services or exhibit more pro-social values, while showing no particular distrust towards the private sector. Alternatively, they can simultaneously show more interest in the public sector and more distrust in the private sector. Our analysis enables us to distinguish between these two possibilities, using a unique French dataset. Very few papers have investigated the possibility of self-selection of civil servants as a consequence of negative beliefs towards the private sector. Papers by [START_REF] Saint-Paul | Le Rôle des Croyances et des Idéologies dans l'Économie Politique des Réformes[END_REF][START_REF] Saint-Paul | Endogenous Indoctrination: Occupational Choices, the Evolution of Beliefs and the Political Economy of Reforms[END_REF] are an exception. They adopt a theoretical approach to explain why individuals who are negatively biased against market economies are more likely to work in the public sector. Our approach is complementary and provides empirical evidence supporting this claim. Finally, our paper is related to the literature on intrinsic and extrinsic motivation, as defined by Bénabou and Tirole (2003, 2006). Extrinsic motivation refers to contingent monetary rewards, while intrinsic motivation corresponds to an individual's desire to perform a task for its own sake or for the image the action conveys. Many papers have explored the consequences of intrinsic motivation in different contexts, such as wages [START_REF] Leete | Wage equity and employee motivation in nonprofit and for-profit organizations[END_REF], knowledge transfers [START_REF] Osterloh | Motivation, knowledge transfer, and organizational forms[END_REF], cooperation [START_REF] Kakinaka | An interplay between intrinsic and extrinsic motivations on voluntary contributions to a public good in a large economy[END_REF], training (DeVaro et al. (2017)), and law enforcement [START_REF] Benabou | Laws and Norms[END_REF]. However, there is a lack of studies analyzing the extent to which values and beliefs regarding professional sectors matter when individuals choose their jobs. Our paper aims to fill this gap by exploring the different arguments that students use to explain their career choices, distinguishing between extrinsic and intrinsic motivations. In addition, we explore the beliefs of students aspiring to work in the public sector regarding the reasons why other students choose to work in the private sector.
Setting and survey
In this section, we describe the institutional context of our study (subsection 3.1). We also provide information about data collection and respondents (subsection 3.2).
Careers of graduates
Sciences Po is a prestigious French university that specializes in social sciences.
In the cohort of students who graduated in 2013 and who entered the labor market in the year following their graduation, 69% worked in the private sector, 23.5% worked in the public sector, and 7.5% worked in an international organization or a European institution. The university is especially well-known for educating France's high-ranking civil servants: a large share of the top positions in the French administration are held by Sciences Po alumni. Sciences Po is the university students attend when they aspire to be admitted to the ENA or other schools leading to high-ranking civil servant positions, which are only open through competitive exams (concours). In the cohort of students who passed these competitive exams in 2016-17, 82% of students admitted to the ENA were Sciences Po graduates. Sciences Po graduates also represented 67% of those admitted to top administrative positions in the National Assembly, 32% of future hospital directors, and 57% of future assistant directors of the Banque de France. 4 A large majority of top diplomats are also Sciences Po graduates. Sciences Po graduates therefore have a large influence on policy-making and regulation in France. Students who graduate from this university tend to hold high-ranking positions, whether in the public or private sector. Differences in wage ambitions may partly explain students' preferences for the private sector over the public sector. While wages tend to be high for most of the public sector positions held by Sciences Po graduates, young graduates in private sector areas such as law and banking tend to earn substantially higher wages right after graduation. On the other hand, public sector jobs provide more employment security. Beliefs regarding other characteristics of each sector are likely to have a large influence on students' career choices. The goal of our survey is therefore to get a better understanding of how these beliefs may guide students' choices for one sector over another.
The survey
In order to investigate differences in students' beliefs regarding the private sector, we designed an online survey that was only accessible to students. The survey included questions on students' beliefs regarding (i) the public sector and the private sector; (ii) their classmates' views of both sectors; (iii) social relations at work, more specifically on unions and labor laws; (iv) entrepreneurship and economic regulation; and (v) a case study on public-private partnerships. The survey also included a question on students' choices for future jobs and careers. The questionnaire was sent by the administration in mid-September 2014, two weeks after the beginning of classes, to the undergraduate and graduate students from the main campus (in Paris) and one of the satellite campuses (in Le Havre), representing a cohort of approximately 10,000 students. A total of 1,430 students completed at least part of the survey (including seven students who were not directly targeted by the survey), with approximately half of these students answering all of the questions (see Table 1 in Appendix A for a description of the sample sizes by year of study). The survey took approximately 15 minutes to complete from start to finish. Answers were recorded as students made progress through the questions, such that we are able to analyze answers to the first parts of the survey for students who did not complete it. Among the students who completed at least part of the survey, only a few (5%) are from the satellite undergraduate campus.
Overall, 62% of respondents are Master's degree students, and 38% are undergraduates. There are also three PhD students and one student preparing administrative admissions' exams who answered the survey. The share of female and male students who answered the survey is representative of the gender ratio in the overall student population (40% of respondents are male students, whereas the overall Sciences Po male student population was 41% in 2014). The share of respondents is similar across Master's degrees (Table 2). For instance, 19.5% of all Master's students were in public affairs in 2014, compared to 20.6% of respondents. A large share (88%) of respondents are French, leading to an overrepresentation of French students. We therefore decided to drop the non-French students from our original sample. Several reasons motivated our approach. First, including non-French students in the analysis would increase the heterogeneity of respondents. Indeed, the students' answers to the questions are likely to be highly culture-dependent. The size of our dataset and the strong selection effects of foreign students prevent us from any inference from this subsample. Second, our objective is to measure the level of distrust regarding the private sector in France for prospective French civil servants. In this regard, French citizens are more likely to work in France after their graduation. Focusing on these students increases the external validity of our study. Third, some questions deal specifically with French institutions, further justifying keeping only the answers submitted by French students. While foreign students may be familiar with some of these institutions (such as the Constitutional Court or the National Assembly), other institutions (such as the Conseil de Prud'hommes i.e. French labor courts), are very unlikely to be known by 20-year old foreign students. Finally, the share of students who went to high school in France and who were admitted as undergraduates (the "undergraduate national" admissions program in Table 2) are overrepresented among respondents. Indeed, they represent 41.7% of all students, but 62.3% of respondents. This overrepresentation mostly disappears when only French students are included. Indeed, 65.1% of all French students were admitted through the undergraduate national admissions procedure, whereas 69.9% of French respondents were admitted through this procedure. The final dataset therefore includes the answers given by the 1,255 French students who completed at least part of the survey5 . A total of 740 students answered the question on their professional goals: 41% of these students were considering working in the public sector, and 59% in the private sector. The data suggest that a selection bias of respondents as a function of students' study or admissions program type is unlikely (comparing columns ( 4) and ( 5) of Table 2). 6The dataset does not provide any specific information on socioeconomic background. However, 9.8% of the French students who are included in the final dataset were admitted as undergraduates through a special admissions program designed for high school students from underprivileged education zones. These students represent 9.2% of the share of the French students who responded to the survey, suggesting no significant selection bias of respondents according to this criterion. 
Finally, we also checked whether our sample was representative of the overall Sciences Po student population by checking for differences in the two other observable characteristics of students, namely age and grades. Regarding age, French students who answered the survey were slightly younger than the overall French Sciences Po student population (20.5 vs. 21.1). The difference is statistically significant for both undergraduate and graduate students, but remains small in size. We checked whether this difference could be explained by the fact that the respondents were better students, and might therefore be younger. While the dataset does not provide any information on high school grades, we use students' first-year undergraduate grades to compare French respondents with the overall French student population. Indeed, in the first year of undergraduate studies, the core curriculum is mandatory for all students and similar across campuses, enabling us to compare students. We know the grades of students who were Master's degree students in 2014-15 and who were first-year undergraduates in 2011-12. The difference in average grades between the 227 French respondents (13.4) and the 774 French non-respondents (13.2) is weakly significant (t-test p-value=0.09). 7 A two-sample Kolmogorov-Smirnov test comparing the distribution of grades of students who answered the survey with that of the other students yields an exact p-value of 0.063. Comparing the distribution of grades of students who completed the question on professional aspirations with that of the other French students yields an exact p-value of 0.215. The difference in average first-year grades between the 253 French Master's students who were first-year students in 2010-11 and the other 769 French students is not statistically significant (13.2 for respondents, compared to 13.1 for the other French students, with a p-value of 0.40). 8 The two-sample Kolmogorov-Smirnov tests are not statistically significant when comparing the grades of all French students with those of respondents (p-value=0.225), nor with those of the smaller sample of students who completed the question on aspirations (p-value=0.198), suggesting no significant difference in the distribution of grades between respondents and non-respondents. Although we do not have the grades of the other respondents, the statistical evidence presented suggests that the respondents are likely to be representative of the overall French Sciences Po student population.
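The representativeness checks just described (a comparison of mean grades and a two-sample Kolmogorov-Smirnov test on the grade distributions) can be illustrated with the minimal sketch below. The grade vectors are simulated, since the underlying administrative data are not public; only the group sizes and means are borrowed from the text, and the dispersion is an arbitrary assumption made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated first-year grades (0-20 French scale) for respondents and
# non-respondents; sizes and means loosely follow the figures quoted above.
respondents = rng.normal(loc=13.4, scale=1.8, size=227)
non_respondents = rng.normal(loc=13.2, scale=1.8, size=774)

# Two-sample t-test on mean grades.
t_stat, t_pval = stats.ttest_ind(respondents, non_respondents)

# Two-sample Kolmogorov-Smirnov test on the full grade distributions.
ks_stat, ks_pval = stats.ks_2samp(respondents, non_respondents)

print(f"t-test:  t = {t_stat:.2f}, p = {t_pval:.3f}")
print(f"KS test: D = {ks_stat:.3f}, p = {ks_pval:.3f}")
```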
Results
Beliefs regarding the private sector
Our survey enables us to investigate aspiring civil servants' perception of the private sector. The survey was addressed to all students, i.e. those aspiring to work in the public sector and those wishing to work in the private sector. This approach allows us to evaluate whether the answers of students aspiring to work in the public sector are different from the answers of the other students. We use three series of questions to evaluate students' beliefs regarding the private sector.
Reasons to work in the private sector
First, the survey asked students to report their perception of the factors that determine their classmates' motivations for careers in the private sector. The suggested motivations were the following: to work with more competent or more motivated teams (Competence and Motivation), to benefit from more work flexibility and a stronger sense of entrepreneurial spirit (Flexibility and Entrepreneurship), and to have the opportunity to earn higher wages (Wage). For each of these items, respondents could answer: Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, or Strongly Agree. We ordered these answers and assigned them numerical values from 1 (Strongly Disagree) to 5 (Strongly Agree). Figure 1 shows the respondents' average perceptions for each of the factors driving other students to work in the private sector, according to the respondents' prospective careers (note on Figure 1: the panels cover Competence, Motivation, Flexibility, Entrepreneurship, Wage and the PCA score; bars correspond to the average scores of each of the two groups of students for each question; answers take values from 1 (Strongly Disagree) to 5 (Strongly Agree); the distribution of answers is presented in Table 3; full descriptions of the questions are given in the Online Appendix). Table 3 displays the distribution of answers by students' career choices. Table 4 further shows the results of ordered probit estimations and p-values of two-group mean-comparison tests. First, we find that students tend to have diverging beliefs regarding the reasons that drive people to work in the private sector (Table 3). Only 28% of the students who want to take a public sector exam (strongly or weakly) agree with the fact that students who plan to work in the private sector are attracted by more competent teams, compared to 42% of students who want to work in the private sector. We observe similar differences for more motivated teams (35% vs. 49%) and for entrepreneurial spirit (69% vs. 88%). However, both types of students seem to have similar beliefs regarding the attractiveness of the private sector in terms of the work flexibility (50% vs. 51%) and the higher wages (97% vs. 94%) it offers. These results are confirmed by univariate analyses (two-group mean comparison tests and ordered probit estimates in Table 4). 9 Students who aspire to become civil servants are indeed less likely to say that other students aspire to work in the private sector because of (i) greater entrepreneurial spirit (p-values<0.001), (ii) more competent teams (p-values<0.001), and (iii) more motivated teams (p-values<0.001). In other words, these students have relatively more negative beliefs regarding the private sector. To confirm the existence of latent negative beliefs regarding the reasons that drive people towards the private sector, we run a principal component analysis (PCA) on the above dimensions. The first axis of the associated PCA, which explains 38.6% of the variations, is mainly correlated with Competence, Motivation and Entrepreneurship. 10 Students who aspire to become civil servants show statistically higher scores on this first axis than those who want to work in the private sector. This result confirms that students interested in becoming civil servants have a worse underlying perception of the reasons that drive people to choose the private sector. We can interpret this result in two non-mutually exclusive ways. First, it might be that students who plan to become civil servants believe that reasons other than the ones listed in the survey motivate their classmates' choices. However, the list contains the main arguments usually mentioned to explain preferences for the private sector over the public sector. Second, it might be that students who aspire to work in the public sector are less likely to believe that the private sector allows for more entrepreneurship, more competent and/or more motivated teams.
10 More specifically, we have: ρ = 0.607 for Competence, ρ = 0.593 for Motivation, ρ = 0.236 for Flexibility, ρ = 0.449 for Entrepreneurship and ρ = 0.150 for Wage. The two dimensions that are the least correlated with the first axis (i.e., Wage and Flexibility) do not discriminate between prospective sectors of work.
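As an illustration of the kind of univariate analysis described above (coding Likert answers from 1 to 5, comparing the two groups of students, estimating an ordered probit, and extracting a first principal component), the following sketch runs the same steps on simulated data. It is not the authors' code: the variable names, effect sizes, and the use of statsmodels' OrderedModel and scikit-learn's PCA are assumptions made for the example.

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 700
items = ["competence", "motivation", "flexibility", "entrepreneurship", "wage"]

# Simulated data: public = 1 if the student aspires to the public sector,
# and five Likert items coded 1 (Strongly Disagree) to 5 (Strongly Agree).
df = pd.DataFrame({"public": rng.integers(0, 2, n)})
for j, item in enumerate(items):
    latent = -0.3 * df["public"] * (j < 4) + rng.normal(size=n)
    df[item] = pd.cut(latent, bins=[-np.inf, -1, -0.3, 0.3, 1, np.inf], labels=False) + 1

# Two-group mean-comparison test for one item.
grp1 = df.loc[df["public"] == 1, "competence"]
grp0 = df.loc[df["public"] == 0, "competence"]
print(stats.ttest_ind(grp1, grp0))

# Ordered probit of the Likert item on the public-sector dummy.
endog = pd.Series(pd.Categorical(df["competence"], categories=[1, 2, 3, 4, 5], ordered=True))
res = OrderedModel(endog, df[["public"]], distr="probit").fit(method="bfgs", disp=False)
print(res.params)

# PCA on the five items: explained variance, loadings, and group comparison
# of first-axis scores (the orientation of the axis is arbitrary).
pca = PCA()
scores = pca.fit_transform(df[items])
print("variance share, axis 1:", round(pca.explained_variance_ratio_[0], 3))
print("loadings, axis 1:", dict(zip(items, pca.components_[0].round(2))))
print(stats.ttest_ind(scores[df["public"] == 1, 0], scores[df["public"] == 0, 0]))
```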
Result 1 Students who plan to work in the public sector are less likely to see entrepreneurship, competence and motivation as factors that drive other students to choose to work in the private sector.
Preferences towards regulation of the private sector
We now investigate students' preferences towards regulation of the private sector. The survey included a series of questions about the challenges that private sector companies and employees face in France. We group these questions into two dimensions. The first set of questions measures the level of Distrust in companies. We designed the second set of questions to capture students' beliefs regarding how easy it is to conduct business in France today (Easy to do business). The questions associated with each set are presented below, together with a positive or negative sign representing how answers correlate with the associated dimension.
Distrust in companies is associated with students' beliefs on whether:
• union representatives should benefit from extra protection against being fired (+);
• employees should have a stronger role in the company's decision-making process (+);
• controls of labor law enforcement are currently sufficient in France (-);
• thresholds above which union representation becomes mandatory in the company are too high (+);
• layoffs should be banned when companies make profits (+);
• the government should legislate to limit employers' excessive remunerations (+).
Easy to do business is associated with students' beliefs on whether:
• procedures to fire an employee should be made easier for the employer (-);
• procedures to create a new business should be made easier (-);
• procedures to hire an employee should be simplified (-);
• labor costs are contributing to high unemployment in France (-);
• it is currently easy to create a company in France (+);
• it is currently easy to find funds to open a business in France (+);
• it is currently easy for a young entrepreneur in France to obtain legal advice and support to start a business (+).
Descriptive Statistics
Tables 5 and 6 in the appendix show the distribution of answers to the questions associated with the two dimensions (Distrust in Companies and Easy to Conduct Business, respectively). First, we observe that students who plan to work in the public sector have a higher tendency to distrust the private sector (Table 5). For instance, 31% of these students think that the government should legislate to limit employers' excessive remunerations, against 24% of their classmates who want to work in the private sector. Similarly, students who aspire to become civil servants are more likely to believe that controls on the enforcement of labor regulations are currently insufficient in France (52% vs. 45%). They are also more likely to strongly support higher levels of protection for union representatives in firms (18% vs. 12%). Second, students who plan to work in the public sector are more likely to believe that conducting business in France is easy (Table 6). For instance, 47% of these students weakly or strongly oppose reforms that would facilitate laying off employees, against 35% of their classmates who aspire to work in the private sector.
Similarly, they are: (i) less likely to disagree with the statement that creating a business in France is easy (55% vs. 60%), (ii) more likely to believe that procedures to hire new employees should not be facilitated (10% vs. 5%), and (iii) more likely to believe that finding funds to open a business is easy (29% vs. 24%).

Ideal points

To further investigate differences in beliefs between the two groups of students, we propose to locate students on the two dimensions (Distrust in companies and Easy to conduct business), using an augmented version of the graded response model often used in ideal point estimations. Our method departs from PCA in two ways. First, we use the above definition of the dimensions to constrain the sign of the correlation between the questions and their associated dimension. For instance, we assume that a stronger support for the protection of union representatives against being laid off cannot be negatively correlated with the level of distrust in companies on the entire sample. The correlation can be either positive or null. Second, we do not consider the answers as continuous but as ordered variables. The estimation of ideal points takes this information into account to generate the two dimensions. More specifically, we estimate the following logistic model:

y_ij = α_j θ_i + u_ij (1)

where α_j is a discrimination parameter associated with question j, θ_i is individual i's score on the estimated dimension, and u_ij is an idiosyncratic logistic random term. The parameter α_j represents the correlation between the question at stake and the dimension we aim to capture. The signs of the α_j are constrained by the above definition of the axes. The parameters θ_i represent students' opinions on the associated dimension. Higher scores on the first dimension are associated with stronger distrust in the private sector; individuals who display higher θs on the second dimension are more likely to believe that conducting business is easy. The full methodology of the estimation of the two dimensions is presented in Appendix B. For robustness purposes, we also run a PCA for each of the two dimensions, and we obtain nearly identical results. The correlation coefficient between the first axis of the PCA and our first dimension is greater than 0.99.11 It is equal to 0.975 for the second dimension.12

Figure 2 represents the average individual scores on the two dimensions (i.e., θ_i) according to the students' willingness to work in the public sector. Table 7 in Appendix A shows the results of two-group mean-comparison tests. We find that students who plan to work in the public sector display a stronger distrust in the private sector (p-value = 0.088), and are more likely to think that conducting business in France is currently relatively easy (p-value = 0.017). Considering that our questions deal with regulation issues related to the private sector, this result implies that students who aspire to work in the public sector have a stronger taste for public regulation of economic activities.

Result 2 Students who plan to work in the public sector have a higher level of distrust in the private sector, and are more likely to believe that doing business is easy. Overall, they have a stronger taste for public regulation of economic activities.
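To make the construction of these scores concrete, the sketch below illustrates the PCA-based robustness check mentioned above for the Distrust in companies dimension: the negatively worded item is reverse-coded in line with the sign constraints, the first principal component serves as a proxy for the latent score, and the two groups of students are compared with a two-group mean-comparison test. This is only an illustration, not the estimation used in the paper (which relies on the Bayesian graded response model of Appendix B); the file name, the column names, and the use of scikit-learn are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): PCA proxy for the
# "Distrust in companies" score and a two-group mean comparison.
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA

# Hypothetical item and group names; answers coded 1-5, public_exam in {0, 1}.
items = ["union_protection", "employee_voice", "controls_sufficient",
         "representation_thresholds", "ban_layoffs_if_profit", "cap_pay"]
df = pd.read_csv("survey_answers.csv")

X = df[items].astype(float).copy()
X["controls_sufficient"] = 6 - X["controls_sufficient"]  # reverse-code the (-) item
X = (X - X.mean()) / X.std()                             # standardize the items

# First principal component as a proxy for the latent distrust score.
df["distrust"] = PCA(n_components=1).fit_transform(X)[:, 0]

public = df.loc[df["public_exam"] == 1, "distrust"]
private = df.loc[df["public_exam"] == 0, "distrust"]
print(stats.ttest_ind(public, private, equal_var=False))  # Welch two-sample test
```

Because the sign of a principal component is arbitrary, such a proxy may need to be flipped once (for instance so that it correlates positively with the item on capping remunerations) before group means are interpreted.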
Perception of public-private partnerships The survey included a case study about public-private partnerships. The questions relate to students' beliefs regarding the benefits of the private provision of public goods. The questions reflect the perception of the relative advantages of the private and public sectors. The first question asked students whether they perceived delegated management of public goods as a good tool per se (Delegated Management). 13 The three following questions asked students whether delegating management is a good tool to reduce management costs, to foster innovation, and to improve the quality of the services (Cost Reduction, Innovation, Quality Improvement, respectively). The following question described a conflict between the contracting public authority and its private partner, and investigated whether students perceived the public authorities' decision to expropriate the private firm as legitimate (Legitimate Expropriation). Students were then asked about the extent to which the State should compensate the firm for the expropriation (Damages). The final set of questions analyzed the answers to a case about arbitration aimed at solving the conflict (instead of litigation by national courts). Students answered questions about the extent to which the arbitration decision should take into account the following arguments: the state must stick to its contractual commitments towards the firm (Commitments), the state must be allowed to nationalize sectors it considers as essential for economic growth (Nationalization), the fact that water is a vital good justifies that the state can override the contractual agreements (Necessary Good ), and devaluation is a legitimate motive for the firm to increase prices (Devaluation). Finally, we run a PCA, and explain the scores on the first dimension, which represents an overall positive perception of private provision of public goods14 . Figure 3 shows the average scores for the two groups of students. Table 8 shows the distribution of answers for the questions about the relative advantages of the private provision of public goods. Table 9 in Appendix A presents the associated estimates of regression estimations and p-values of two-group mean-comparison tests for all items. First, we observe a general trend: students who plan to work in the public sector are less enthusiastic about the use of public-private partnerships than their classmates who plan to work in the private sector. For instance, they are only 21% to strongly agree with the fact that publicprivate partnerships can foster innovation, against 30% of their classmates. Similarly, only 16% strongly believe that public-private partnerships can improve the quality of the provision (vs. 25% for students who want to work in the private sector). They are also slightly less likely to strongly believe that delegated management is a good thing (12% vs. 15%) and that it helps reduce costs (30% vs. 33%). The statistical analysis confirms these findings. Students who want to become civil servants are statistically less likely to see delegated management as improving the quality of services (p-value=0.012 ) or as fostering innovation (p-value=0.010 ). Moreover, they are more likely to consider expropriation as legitimate (p-value=0.020 ). Third, students who plan to work in the public sector are also more likely to consider that the state must be allowed to nationalize key sectors (p-value=0.002 ). Finally, we run a PCA on all items associated to the public-private partnerships. 
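For readers who wish to reproduce estimates of this kind, a minimal sketch of an ordered probit of one public-private partnership item on the intention to take a public sector exam, in the spirit of the specifications reported in Table 9, is given below. The variable names and the data file are hypothetical, the sketch requires a statsmodels release that ships OrderedModel, and it is not the authors' code.

```python
# Illustrative ordered probit: one PPP item regressed on the public-exam dummy.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey_answers.csv")      # hypothetical file
y = df["innovation_ppp"].astype(int)        # item coded 1 (never) .. 4 (always)
X = df[["public_exam"]].astype(float)       # 1 if the student plans a public exam

model = OrderedModel(y, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())                     # sign/size of the public_exam effect
```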
Results show that students who aspire to become civil servants have more negative beliefs regarding the overall benefits of the private provision of public goods. The first axis of the PCA, which can be viewed as a pro-business preference for the provision of public goods,15 and which explains 30% of the variations, is indeed significantly higher for students who plan to work in the private sector.

Result 3 Students who aspire to become civil servants are less likely to see benefits in the private provision of public goods, and are thus more likely to support the government in the case of public-private partnerships.

Beliefs regarding the public sector

The above results suggest that students aspiring to work in the public sector tend to distrust the private sector to a greater extent. This greater distrust can either result from a general distrust in society or can be specifically targeted against the private sector. If future civil servants are more distrustful in general, they will not necessarily push for more regulation by the state, because distrust in both sectors would offset each other. However, if they trust the public sector more than the other students, we would expect higher levels of government regulation of the private sector. Our survey therefore included questions designed to evaluate students' beliefs regarding the public sector. These questions enable us to investigate whether the relative distrust of students who aspire to become civil servants is generalized or targeted against the private sector only. We therefore investigate how students perceive the public sector, including by asking them which factors explain the choice to work as a civil servant.

Trust in institutions

Students were asked to report their level of trust on an 11-point scale (from 0, no trust, to 10, total trust) for a list of seven public institutions: the Upper Chamber (Senate), the Lower Chamber (National Assembly), the police (Police), the legal system in general (Legal System), judges in general (Judges), the French Constitutional Court (Constit. Council), and the French Administrative Supreme Court (Conseil d'Etat). Figure 4 graphs the average level of trust for both types of students (i.e., those aspiring to work in the public sector, and those preferring the private sector). The two columns on the right-hand side of the graph show the average scores for the first dimension of a PCA, which represents the generalized level of trust in public institutions.16 Table 10 in Appendix A displays the summary statistics associated with these questions, together with the p-value of the two-group mean-comparison test for each variable.

First, we find that judicial institutions benefit from the highest levels of trust (Legal System, Judges, Constitutional Council and Conseil d'Etat). Political institutions and the police benefit from significantly lower levels of trust. Although this dichotomy holds for both kinds of students under scrutiny, we observe systematically higher scores for students who plan to become civil servants than for those who aspire to careers in the private sector. Students who want to work as civil servants have higher levels of trust in the Senate (5.87 vs. 5.49), the National Assembly (6.25 vs.
5.77), the Police (6.08 vs 5.88), the Legal System in general (6.93 vs. 6.73), Judges (7.25 vs. 6.84), the Constitutional Council (7.36 vs. 7.08), and the Administrative Supreme Court (7.39 vs. 6.83). The differences are statistically significant for the Lower Chamber (p-value=0.002 ), the Upper Chamber (p-value=0.022 ), Judges (p-value=0.005 ), the Constitutional Council (p-value=0.078 ), and the Administrative Supreme Court (p-value<0.001 ). The first dimension of the PCA, which represents the generalized level of trust in institutions, is also significantly higher for prospective civil servants (p-value=0.001 ). Although beliefs regarding the legal system in general and the police are not statistically different across students, the students aspiring to become civil servants still display a higher average level of trust. These results show that students who plan to work in the public sector do not have a generalized distrust towards society, but show distrust targeted against the private sector. Result 4 Students who plan to become civil servants display a higher level of trust in public institutions. Reasons to become a civil servant To complete our analysis, we asked students about their beliefs regarding the factors that explain why individuals aspire to work in the public sector. More precisely, students were asked to report their beliefs regarding the factors that determine their classmates' choices to become civil servants. We included a list of potential benefits of being a civil servant, related to both extrinsic and intrinsic motivation. Among the extrinsic motivation factors, we suggested a lower workload (Lower Workload ), a more convenient family life (Easy Family), and greater job security (Greater Security). For intrinsic motivation, we suggested the following factors: a source of social gratification (Social Gratification), more opportunities to change society (Change Society), and personal satisfaction of being involved in public affairs (Satisfaction). Figure 5 shows the average scores for each group of students, ranking the answers from 1 (Strongly Disagree) to 5 (Strongly Agree). Table 11 shows the distribution of answers according to career aspirations. Table 12 presents the results of ordered probit estimations and the p-values of two-group mean-comparison tests. Students who aspire to become civil servants are more likely to believe that pro-social reasons are driving their classmates' choices for the public sector. The estimations indicate that students who aspire to work in the public sector are more likely to believe that their classmates choose to become civil servants (i) for the satisfaction of being involved in public affairs (84% mildly or strongly agree vs. 73% for students who want to work in the private sector, p-values < 0.001 ), and (ii) for the opportunities they have to change society (74% vs. 63%, p-values < 0.001 ). On the contrary, students who do not plan to become civil servants are more likely to believe that their classmates are interested in working in the public sector for self-concerned reasons, i.e. the lower workload (19% vs. 13%, p-values < 0.001 ), and the convenience to organize family life (only 33% disagree vs. 43% of future civil servants disagree, p-value=0.105 for the ordered probit estimation, and p-value=0.085 for the two-group mean-comparison test). Finally, we run a PCA on these six dimensions. 
The first axis, which explains 35% of the variations, is positively correlated with the pro-social motivations to choose the public sector (i.e. Satisfaction (ρ = 0.392), Social Gratification (ρ = 0.36), Change Society (ρ = 0.386)), depicted in blue in Figure 5, and negatively with the self-concerned motivations (i.e. Greater Job Security (ρ = -0.247), Lower Workload (ρ = -0.516), More Convenient Family Life (ρ = -0.49)), depicted in red in Figure 5. The comparison of the PCA scores of the two types of students shows that, on average, students aspiring to a career in the public sector are more likely than their classmates to think that students aspire to become civil servants for pro-social reasons (p-value<0.001 ). Result 5 Both types of students recognize that people aspiring to work in the public sector generally do so for pro-social reasons (i.e. the satisfaction of being involved in public affairs, and the opportunity to change society). However, this result is stronger for students who want to become public servants. Students who plan to work in the private sector are more likely to believe that their classmates aspire to careers in the public sector for self-concerned reasons (i.e. lower workload, more convenient family life). Robustness Checks The survey contained questions about the perception of the public and the private sectors, and students' aspiration to work in the public sector after graduation. The fact that both the dependent and the independent variables were obtained from the same survey might generate some methodological concerns, usually referred to as the Common Source Bias (CSB). In our case, we are not able to rule out the possibility that participants sought to reduce cognitive dissonance or to improve / protect self-image by aligning their answers. Nevertheless, the impact of the CSB is limited by the fact that questions were asked on successive screens, and that half of the dimensions discussed above were explored before participants were asked about their personal professional aspirations. Moreover, the aspiration to work in the public sector was obtained by asking whether participants intended to take exams to enter the public sector. Given the long preparation that these exams require, it seems unlikely that previous declarations about the attractiveness of each sector affected participants' declaration about their intention to take these exams. In order to test the robustness of our results to the CSB, we use respondents' identifier to retrieve the Master's program of graduate students. We then associate to each graduate student the average proportion of students registered in his/her graduate school who ended up working in the private sector (based on the post-graduation employment survey of students who graduated in 2015). This measure reflects the average ex-post propensity to really work in the private sector, and is not subject to the CSB. We observe that this variable is highly correlated with the individual declarations in the survey (ρ = 0.468, p < 0.001). We run all the previous estimations replacing the potentially biased self-declaration by this exogenous measure. We cluster observations at the graduate school level given the level of aggregation of information. The new results, displayed in table 13, lose in statistical significance, mostly because of the reduction in the variance in the explanatory variable and in the degrees of freedom, but confirm the above results. 
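As an illustration of this robustness specification, the following sketch regresses one outcome on the program-level share of graduates who ended up in the private sector, with standard errors clustered at the graduate school level. The file, the variable names, and the linear functional form are assumptions made for the example; the actual Table 13 estimations use either linear or ordered choice models depending on the outcome.

```python
# Illustrative sketch of the Common Source Bias robustness check: an outcome is
# regressed on the graduate program's share of alumni placed in the private
# sector, with standard errors clustered at the graduate school level.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_with_placement.csv")   # hypothetical merged data set
cols = ["trust_national_assembly", "share_private_placement", "graduate_school"]
df = df.dropna(subset=cols)                     # keep rows usable by the formula

ols = smf.ols("trust_national_assembly ~ share_private_placement", data=df)
res = ols.fit(cov_type="cluster", cov_kwds={"groups": df["graduate_school"]})
print(res.params)
print(res.pvalues)
```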
Indeed, individuals with higher chances of working in the public sector trust public institutions (National Assembly, Senate, Judges, Administrative Supreme Court) significantly more, are more likely to believe that public servants work in the public sector for noble reasons (Gratification, Change) and less likely to believe they do so for the potentially lower workload (Workload). They are also less likely to believe that students who want to work in the private sector plan to do so because of greater entrepreneurial spirit. They are also more likely to believe that unionists should be more protected against employers. Moreover, in the case of public-private partnerships, they are more likely to find that government intervention is legitimate, and they are more likely to accept nationalization. Attrition Bias The survey included five successive sections. A non-negligible proportion of respondents answered only part of it. Students who did not complete the survey may represent a specific subset of the population, such that students who answered the last set of questions may not be representative of the set of students who started the survey but quit before the end. To investigate whether attrition in the survey changed the composition of respondents over the different sections, we regress each of our dependent variables on the number of sections the respondents completed. Should an attrition bias on the dependent variable emerge, we would observe a significant coefficient associated with the number of screens. For instance, if the least trustful students stopped answering first, we would observe that the level of trust in public institutions (first screen of the survey) significantly increases with the number of screens completed. Implementing this strategy over 25 dependent variables, we obtain only three significant relationships (two at 10% and one at 1%). Assuming that there is no attribution effect (i.e., that the dependent variables are not correlated with the number of completed screens), the probability to have at least 3 out of 25 regressions in which the coefficient is significant at 10% equals 20.5%. 17 Therefore, we are not able to reject the hypothesis of no attrition effect regarding the dependent variables. 17 The probability of no significant relationship is ( 9 10 ) 25 . The probability of one significant relationship is 25! (25-1)! 1 10 ( 9 10 ) 24 . The probability of two significant relationships is 25! (25-2)! ( 1 10 ) 2 ( 9 10 ) 23 . The probability of at least three significant relationships at 10% equals 1 minus the sum of these three probabilities, which is p = 20.5%. Conclusion Our results suggest that future civil servants distrust the private sector to a greater extent than the other students. They also believe that conducting business is relatively easy, and are less likely to see benefits in public-private partnerships. These results provide some evidence of a selection effect in career choices: individuals working in the public sector hold more negative beliefs regarding the private sector. Our evidence also suggests that this distrust is more specifically targeted towards the private sector and is not generalized: future civil servants show high level of trust regarding public institutions. Civil servants' distrust towards the private sector has strong implications in terms of political economy. First, civil servants are in charge of the design and implementation of regulation. 
Their distrust of the private sector may lead to its over-regulation, therefore generating difficulties in conducting business. The 2018 Doing Business report edited by the World Bank provides evidence of these difficulties. This report provides objective measures of business regulations and their enforcement across 190 economies. It captures several important dimensions of the regulatory environment as it applies to local firms. 18 The global indicator that accounts for this regulatory environment is called "Ease of Doing Business". Each country is evaluated through its distance to frontier (DTF), which measures the distance of each economy to the "frontier" representing the best performance observed on each of the indicators across all economies in the Doing Business sample since 2005. An economy's DTF is represented on a scale from 0 to 100, where 0 represents the lowest performance and 100 represents the frontier. The ease of doing business ranking ranges from 1 to 190. France ranks 31 st , and performs lower than the average score of OECD Countries. 19 Our results suggest that a possible explanation for the large regulation of private business may come from the relatively greater distrust of public sector workers towards the private sector. This distrust may also have a negative impact on the judiciary. Most judges in French courts are civil servants. [START_REF] Cahuc | Les juges et l'économie: une défiance française[END_REF] show that judges distrust business and free market economy more than the rest of the population. Our results confirm this distrust of top civil servants against private business. They can also shed a new light on a regular debate concerning the identity of judges. In some particular courts (such as labor or commercial courts in France), judges are lay judges (i.e. they are not civil servants but representatives of employees and/or employers nominated by their peers). 20 In many others countries, labor or commercial courts are composed of both lay and professional judges, or even only professional judges. An argument supporting lay judges could be to avoid the distrust of professional judges (who are civil servants) towards the private sector. Third, the distrust of public sector workers towards the private sector could have a negative impact on cooperative projects such as public-private partnerships. These contracts aim at organizing a cooperation between public and private sector actors to build infrastructures and provide public services. Public-private partnerships combine the skills and resources of both the public and private sectors by sharing risks and responsibilities. Yet, the success of such partnerships depends on the ability of the two sectors to cooperate. For this reason, the World Bank has identified the cooperation and good governance between public and private actors as a key to successful public-private partnerships.21 As a consequence, distrust towards public-private partnerships could hurt such projects. Finally, the results presented in this paper can also be read in an optimistic manner: students who propose to work in the public sector display the highest levels of trust in the public sector. Despite the lower wages proposed by the public sector, prospective civil servants devote their career to the public affairs because they believe that they can be useful to society in doing so. 
Overall, the strong motivation of prospective civil servants and their beliefs in their mission are two factors that might contribute to the well-functioning of the State. Note: Panel A includes Master's degree students only. "Respondents" refers to students who at least started completing the survey. Some Master's level students were admitted as undergraduates, whereas others were admitted as undergraduates. "Career specified" refers to those students who completed the survey at least to the point where they indicated their intention to work in the public sector or not. Panels A and B do not include information on four French students who answered the survey: two were postgraduate students (one PhD and one preparing administrative exams), one was an undergraduate student on a campus that did not receive the survey, and one was an executive education student. Hence the total of French respondents is 1,253 students. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 435 who said the opposite. The dependent variables are the listed reasons to work in the private sector. The independent variable is a dummy variable equal to 1 if the student plans to take a public sector exam. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 435 who said the opposite. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 435 who said the opposite. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 435 who said the opposite. The dependent variables are the listed arguments in favor or against the use of Public-private partnerships (PPP). The independent variable is a dummy variable equal to 1 if the student plans to take a public exam. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 435 who said the opposite. The dependent variables are the listed arguments in favor or against the use of Public-private partnerships (PPP). The independent variable is a dummy variable equal to 1 if the student plans to take a public exam. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 436 who said the opposite. A Tables B Ideal Points Estimates The Bayesian estimation of ideal points is usually referred to as the one dimensional item response theory. Such models were originally aimed at measuring students' performance on a test, and to locate them on a unique dimension. The original objective consisted in estimating three sets of parameters: (i) an ability parameter for each student, (ii) a difficulty parameter for each question of the test, and (iii) a discrimination parameter for each question. Bayesian methods were developed to discriminate students according to their ability, by taking into account questions' difficulty level, and by estimating their "relevance" to correctly discriminate students.22 These models have since been used in the political science literature, especially in the case of Supreme Court voting [START_REF] Bafumi | Practical issues in implementing and understanding bayesian ideal point estimation[END_REF], [START_REF] Martin | Dynamic ideal point estimation via markov chain monte carlo for the u.s. 
supreme court, 1953-1999[END_REF], [START_REF] Martin | The median justice on the united states supreme court[END_REF], where researchers located Justices on a liberal-conservative dimension. Our goal consists in estimating students' preferences on two dimensions (Distrust in Companies and Easy to do Business). To do so, we use the students' answers described in subsection 4.1.2. The possible answers to these questions had the following ordering: strongly disagree, slightly disagree, indifferent, slightly agree, strongly agree. The model is defined by a logistic utility model, where the latent utility depends on both the questions' and students' parameters: y * ij = α j θ i + u ij where α j is the discrimination parameter of question j, θ i is the score of individual i on the estimated dimension, and u ij is a random component. Given that we have five possible ordered answers, the associated observed choices are given by: y ij = 1 if y * ij ≤ φ 1j y ij = 2 if y * ij > φ 1j et y * ij ≤ φ 2j . . . y ij = 5 if y * ij > φ 4,j where φ j is the vector of thresholds for the ordinal choice model. The hyperpriors are set as follows: α j ∼ N (µ α , σ 2 α ) φ j ∼ N (µ φ , σ 2 φ ) θ i ∼ N (0, 1) µ α ∼ N (0, 1) and σ α ∼ Exp(0.1) µ φ ∼ N (0, 1) and σ φ ∼ Exp(0.1) Given that we know a priori the correlation of the answers with the desired axes, we are able to reverse the order of the answers for the questions that are negatively correlated (see section 4.1.2). We use this information and overidentify the model by setting: ln(α j ) ∼ N (µ α , σ • l'Assemblée Nationale ; • le Sénat ; • le système légal ; • la police ; • les juges ; • le Conseil Constitutionnel ; • le Conseil d'État. • Une plus grande sécurité de l'emploi ; • Une plus grande satisfaction vis-à-vis de soi-même de s'occuper des affaires publiques ; • Une plus grande gratification vis-à-vis d'autrui de s'occuper des affaires publiques ; • La possibilité de changer la société ; • Une charge de travail moins importante ; • Une organisation facilitée de la vie familiale. • Un salaire plus élevé ; • Une plus grande flexibilité du travail ; • Un meilleur esprit d'entrepreneuriat ; • Des équipes plus compétentes ; • Des équipes plus motivées dans leur travail. 4. Parmi les autres étudiants inscrits dans votre master et de destinant à la fonction publique, quels sont selon vous les facteurs déterminant ce choix de carrière ? (Veuillez indiquer les trois facteurs les plus importants par ordre d'importance.) • Le salaire ; • La sécurité de l'emploi ; • La liberté d'entreprendre ; • La charge de travail ; • Le fait d'être utile à la société ; • La reconnaissance sociale ; • L'ambition politique ; • L'équilibre de la vie familiale. 3. D'un point de vue personnel, quels facteurs influencent vos choix de carrière ? (Premier facteur, deuxième facteur, troisième facteur) • Le salaire ; • La sécurité de l'emploi ; • La liberté d'entreprendre ; • La charge de travail ; • Le fait d'être utile à la société ; • La reconnaissance sociale ; • L'ambition politique ; • L'équilibre de la vie familiale. • Réduire les coûts de gestion ; • Encourager l'innovation ; • Améliorer la qualité des services. Énoncé Consignes Les questions ci-dessous vous proposent d'analyser un cas relatif à un contrat entre une partie publique et une partie privée. Dans un premier temps, le cas vous est décrit de manière succinte, vous apportant les éléments nécessaires à la compréhension du litige qui oppose les parties. 
Ensuite, vous serez amené à répondre à plusieurs questions liées au cas. La plupart des questions sont sujettes à interprétation, si bien qu'il n'existe pas de "bonne" ou de "mauvaise" réponse : n'hésitez donc pas à donner votre avis. Présentation du cas Considérons une autorité gouvernementale qui établit un contrat de concession avec une entreprise privée étrangère afin d'assurer la distribution de l'eau auprès de sa population. Cette entreprise emprunte en dollars pour réaliser les investissements nécessaires au contrat de concession. Quelques années plus tard, la monnaie du pays est dévaluée par décision du gouvernement, ce qui cause un important problème de rentabilité à l'entreprise privée : elle perçoit ses recettes en monnaie locale et a des charges en dollars, liées à son emprunt. L'entreprise demande alors à l'autorité gouvernementale une autorisation pour augmenter le prix de l'eau de 10% pour combler une partie de ses recettes manquantes. Le gouvernement refuse la réévaluation du prix. L'entreprise ne peut plus poursuivre ses investissements, et certains foyers ne parviennent pas à se faire raccorder aux réseaux de distribution d'eau. Excédée par ces problèmes de distribution d'eau à toute la population, l'autorité gouvernementale décide unilatéralement de mettre fin au contrat (avant son terme) en expropriant l'entreprise de ses investissements. • Aucune indemnité ; • Une partie de l'investissement ; • L'intégralité de l'investissement ; • L'intégralité de l'investissement et une partie des profits escomptés ; • L'intégralité de l'investissement et la totalité des profits escomptés ; • L'intégralité de l'investissement, la totalité des profits escomptés et des dommages punitifs. Énoncé Arbitrage Afin de contester les décisions de l'Etat, l'entreprise investisseuse saisit une cour d'arbitrage internationale spécialisée ainsi que le droit de l'Etat concerné l'avait prévu lors de la signature du contrat. 5. À votre avis, quelle importance convient-il d'attributer à chacun des arguments suivants pour résoudre le litige ? [0 : aucune importance ; 10 : essentiel à la résolution du litige] • L'Etat doit respecter ses engagements vis-à-vis de l'entreprise investisseuse et lui garantir la pérennité de ses actifs. • L'Etat doit pouvoir nationaliser les secteurs qu'il estime essentiels au développement économique de son pays. Figure 1 : 1 Figure 1: Average perception of factors driving students to choose the private sector according to the prospective sector of work. Figure 2 : 2 Figure 2: Average attitudes towards the private sector according to the prospective sector of work. Figure 3 : 3 Figure 3: Average perception of the private provision of public goods according to the prospective sector of work. Figure 4 : 4 Figure 4: Average level of trust in institutions according to the prospective sector of work (0 to 10 scale). Figure 5 : 5 Figure 5: Average perception of factors driving students to choose the public sector according to the prospective sector of work. 2 . 2 Parmi les arguments suivants, quels sont ceux qui, à votre avis, motivent les individus à s'engager dans une carrière publique ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 3 . 
3 Parmi les arguments suivants, quels sont ceux qui, à votre avis, motivent les individus à exercer une activité dans le secteur privé ?[1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] Partie 2 : 2 Relations sociales au travail 1. D'un point de vue personnel, vous paraît-il justifié qu'un délégué syndical bénéficie d'une protection renforcée par rapport aux autres salariés, notamment en matière licenciement ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 2. D'un point de vue personnel, vous paraît-il nécessaire aujourd'hui de renforcer les pouvoirs du salarié dans la prise de décision en entreprise (par exemple, par un renforcement de leur participation dans les conseils d'administration) ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 3. La justice prud'homale est aujourd'hui organisée autour des syndicats : les conseillers prud'homaux sont des individus élus par les salariés et par les chefs d'entreprise pour les représenter et trancher les litiges provenant de conflits individuels au travail. Ce système vous semble-t-il juste ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 4. Pensez-vous à titre personnel qu'il existe assez de contrôles de l'application du droit du travail en France (ex : inspection du travail) ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 5. Pensez-vous à titre personnel qu'il faille abaisser les seuls au-delà desquels une représentation syndicale en entreprise est obligatoire ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 6. Pensez-vous à titre personnel que les procédures de licenciement devraient être allégées ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 4 .Partie 5 : 45 Êtes-vous membre d'un parti politique ? [0 : Non ; 1 : Oui] 5. Militez-vous à Sciences Po ? [0 : Non ; 1 : Oui] 6. Êtes-vous membre d'un syndicat à Sciences Po ? [0 : Non ; 1 : Oui]7. Êtes-vous membre d'une association Sciences Po ? (Hors parti, hors syndicat) [0 : Non ; 1 : Oui] Cas pratique PPP Énoncé On parle de gestion directe d'un service public local (distribution d'eau, collecte des déchets, approvisionnement des cantines scolaires, etc...) lorsque la collectivité locale concernée assure elle-même l'exploitation et la gestion de ce service. C'est une structure publique (une régie publique) qui assure le service. On parle de gestion déléguée lorsque la collectivité confie ce service à une entreprise, généralement privée, qui opère sous son contrôle. Le choix de l'entreprise privée se réalise le plus souvent par appel d'offres ce qui implique une mise en concurrence des candidats à la gestion du service. 1. D'un point de vue personnel, diriez-vous que la gestion déléguée est une bonne chose ? [1 : Non, pratiquement jamais ; 2 : Oui, mais dans quelques cas seulement ; 3 : Oui, dans la plupart des cas ; 4 : Oui, dans la majorité des cas] 2. Recourir à un contrat avec une entreprise privée pur gérer un service local vous paraît-il un moyen de... [1 : Non, pratiquement jamais ; 2 : Oui, mais dans quelques cas seulement ; 3 : Oui, dans la plupart des cas ; 4 : Oui, dans la majorité des cas] 3 . 
3 D'un point de vue purement personnel, pensez-vous que la décision d'expropriation de l'État était justifiée ? [1 : pas du tout d'accord ; 2 : plutôt pas d'accord, 3 : indifférent, 4 : plutôt d'accord, 5 : tout à fait d'accord] 4. Si vous aviez la possibilité d'indemniser l'entreprise investisseuse, vous proposeriez d'indemniser à hauteur de (une seule réponse possible) : Table 1 : 1 Sample size of respondents vs. overall student population.Note: The table only includes students from the Le Havre and Paris campuses, and only College or Master's students. Third year students participate in a mandatory study abroad program, which explains why they did not participate in the survey. "Other students" includes Master's students who had a special status, and who were not registered in a specific year. All students Only French students Overall Respondents Overall Respondents All Career specified (1) (2) (3) (4) (5) 1st year students 933 287 794 261 155 2nd year students 2,587 243 831 228 131 3rd year students 1,046 2 940 0 0 4th year students 2,711 438 1,726 372 203 5th year students 2,755 453 2,027 392 249 Other students 10 0 5 0 0 Observations 10,042 1,423 6,323 1,253 738 Table 2 : 2 Descriptive statistics of respondents vs. overall student population, by Master's degree and admissions program. All students Only French students Overall Respondents Overall Respondents All Career specified (1) (2) (3) (4) (5) Panel A: Field of Master's degree (graduate students only) Business 12.64% 13.13% 15.49% 13.87% 13.27% Economics & finance 15.39% 15.26% 16.03% 15.05% 17.26% Environment 1.77% 2.47% 1.70% 2.49% 2.21% European affairs 4.58% 3.37% 3.84% 2.62% 3.10% History 0.81% 1.80% 1.14% 1.96% 1.55% International Affairs 20.27% 18.52% 11.82% 14.53% 11.73% Journalism 1.66% 1.80% 2.07% 1.83% 0.66% Law 8.65% 9.88% 10.28% 9.82% 10.18% Other 6.08% 0.00% 0.47% 0.00% 0.00% Political science 2.18% 3.03% 2.37% 3.40% 3.1% Public Affairs 19.47% 20.65% 26.18% 23.82% 26.11% Sociology 0.61% 1.35% 0.90% 1.44% 1.99% Urban 5.89% 8.75% 7.71% 9.16% 8.85% Total 100.00% 100.00% 100.00% 100.00% 100.00% Observations 4,696 891 2,997 764 452 Panel B: Admissions program (all students) Undergraduate national 41.73% 62.33% 65.10% 69.91% 71.54% Undergraduate international 9.28% 9.49% 5.80% 5.59% 4.47% Undergraduate priority 6.45% 8.29% 9.77% 9.18% 8.94% International exchange 18.65% 0.07% 0.60% 0.00% 0.00% Master's national 10.50% 12.02% 15.42% 13.01% 12.60% Master's international 10.58% 6.89% 2.56% 2.08% 2.17% Other 2.81% 0.91% 0.74% 0.24% 0.27% Total 100.00% 100.00% 100.00% 100.00% 100.00% Observations 10,042 1,423 6,323 1,253 738 Table 3 : 3 Descriptive statistics of reasons driving students to choose the private sector. Want to take public exams? Reason Strongly Disagree Indifferent Agree Disagree Strongly Agree Competence 10% 33% 29% 23% 5% Motivation 10% 25% 30% 28% 7% Yes Flexibility 3% 26% 20% 42% 8% Entrepreneurship 1% 10% 20% 49% 20% Wage 0% 1% 2% 40% 57% Competence 3% 28% 27% 29% 13% Motivation 3% 20% 28% 35% 14% No Flexibility 3% 31% 15% 38% 13% Entrepreneurship 1% 6% 15% 45% 33% Wage 1% 3% 3% 39% 55% Table 4 : 4 Perception of the factors driving students to choose the private sector according to the prospective sector of work. 
Reason Ordered Probit Estimated Effect t-stat p-value Mean comparison p-value Wage .111 1.247 .212 .107 Flexibility -.029 -.361 .718 .931 Entrepreneurship -.322 -3.981 <0.001 <0.001 Competence -.416 -5.268 <0.001 <0.001 Motivation -.39 -4.951 <0.001 <0.001 Table 5 : 5 Descriptive statistics of questions associated with Distrust in Companies Want to take public exams? Reason Strongly Disagree Indifferent Agree Disagree Strongly Agree Union representatives should benefit from extra protection against being fired. 14% 24% 8% 35% 18% Employees should have a stronger role in the company's decision-making process. 2% 9% 11% 44% 35% Controls of labor law enforcement are cur-rently sufficient in France. 13% 39% 21% 22% 5% Yes Thresholds above which union representa- tion becomes mandatory in the company 13% 25% 30% 25% 8% are too high. Layoffs should be banned when companies make profits. 16% 30% 12% 31% 11% The government should legislate to limit employers' excessive remunerations. 7% 20% 10% 32% 31% Union representatives should benefit from extra protection against being fired. 14% 27% 12% 35% 12% Employees should have a stronger role in the company's decision-making process. 2% 8% 11% 45% 33% Controls of labor law enforcement are cur-rently sufficient in France. 9% 36% 28% 23% 4% No Thresholds above which union representa- tion becomes mandatory in the company 10% 25% 33% 24% 8% are too high. Layoffs should be banned when companies make profits. 20% 30% 13% 28% 8% The government should legislate to limit employers' excessive remunerations. 12% 16% 15% 33% 24% Table 6 : 6 Descriptive statistics of questions associated with Easy to Conduct Business Want to take public exams? Reason Strongly Disagree Indifferent Agree Disagree Strongly Agree Procedures to fire an employee should be made easier for the employer. 16% 31% 14% 31% 9% It is currently easy to create a company in France. 12% 43% 16% 25% 4% Procedures to create a new business should be made easier. 1% 6% 17% 43% 33% Yes Procedures to hire an employee should be simplified. 1% 9% 11% 46% 33% Labor costs are contributing to high un-employment in France. 8% 27% 9% 38% 19% It is currently easy to find funds to open a business in France. 9% 42% 20% 25% 4% It is currently easy for a young en- trepreneur in France to obtain legal advice 4% 40% 21% 32% 2% and support to start a business. Procedures to fire an employee should be made easier for the employer. 9% 26% 20% 32% 12% It is currently easy to create a company in France. 16% 44% 13% 23% 5% Procedures to create a new business should be made easier. 1% 3% 15% 43% 38% No Procedures to hire an employee should be simplified. 0% 5% 15% 46% 33% Labor is too costly, which currently con-tributes to high unemployment in France. 7% 22% 10% 40% 21% It is currently easy to find funds to open a business in France. 9% 48% 19% 21% 3% It is currently easy for a young en- trepreneur in France to obtain legal advice 6% 39% 26% 26% 3% and support to start a business. Table 7 : 7 Attitudes towards the private sector according to the prospective sector of work.Means in plain text, standard errors in parentheses. The p-value corresponds to a two-group mean comparison test. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 435 who said the opposite. 
Dimension Private Sector Public Sector p-value Distrust in companies -.014 .097 .088 (.041) (.05) Easy to conduct business -.067 .084 .017 (.039) (.051) Table 8 : 8 Descriptive statistics of questions associated with Provision of public goods. Want to take public exams? Reason Never Sometimes Most often Always Delegated Management 9% 31% 48% 12 % Yes Cost Reduction Innovation 9% 14% 24% 31% 37% 34% 30% 21% Quality Improvement 18% 38% 29% 16% Delegated Management 10% 29% 46% 15% No Cost Reduction Innovation 12% 10% 22% 29% 34% 30% 33% 30% Quality Improvement 16% 30% 29% 25% Note: The sample of students consists of 277 respondents who declared their intention to become civil servants, and 385 who said the opposite. Table 9 : 9 Perception of private provision of public goods according to the prospective sector of work. Arguments for/against PPP Ordered Probit Estimated Effect t-stat p-value Mean comparison p-value Delegated Management -.032 -.381 .703 .745 Reduce Cost -.01 -.121 .903 .956 Foster Innovation -.224 -2.651 .008 .010 Service Quality -.209 -2.479 .013 .012 Legitimate Expropriation .212 2.544 .011 .020 Expropriation Compensation -.109 -1.313 .189 .239 Engagements -.204 -1.067 .286 .286 Nationalize .682 3.163 .002 .002 Vital Good .301 1.446 .148 .149 Devaluation -.312 -1.455 .146 .146 PCA -.43 -3.193 .001 .001 Table 10 : 10 Descriptive statistics of levels of trust in institutions according to the prospective sector of work. Intend to take exams for civil servants? Institution No Yes p-value National Assembly 5.771 (.099) 6.25 (.115) .002 Senate 5.493 (.104) 5.868 (.126) .022 Legal System 6.734 (.094) 6.928 (.107) .179 Police 5.878 (.103) 6.079 (.119) .207 Judges 6.835 (.097) 7.247 (.104) .005 Constitutional Council 7.078 (.105) 7.359 (.117) .078 Administrative Supreme Court 6.826 (.103) 7.385 (.104) <0.001 PCA 6.423 (.076) 6.783 (.079) .001 Means in plain text, standard errors in parentheses. Note: The sample of students consists of 304 respondents who declared their in- tention to become civil servants, and 436 who said the opposite. Table 11 : 11 Descriptive statistics of reasons driving students to choose the public sector. : The sample of students consists of 304 respondents who declared their intention to become civil servants, and 436 who said the opposite. Want to take public exams? Reason Strongly Disagree Indifferent Agree Disagree Strongly Agree Lower Workload 29% 45% 14% 11% 2% Easy Family 13% 30% 22% 29% 6% Yes Greater Security 4% 13% 15% 46% 22% Social Gratification 2% 14% 20% 43% 21% Change Society 2% 12% 13% 48% 26% Satisfaction 0% 4% 12% 53% 31% Lower Workload 18% 41% 22% 16% 3% Easy Family 10% 23% 32% 29% 6% No Greater Security 3% 11% 13% 49% 24% Social Gratification 2% 13% 23% 45% 17% Change Society 4% 17% 17% 48% 15% Satisfaction 1% 7% 18% 51% 22% Note Table 12 : 12 Perception of the factors driving students to choose the public sector according to the prospective sector of work. Reason Ordered Probit Estimated Effect t-stat p-value Mean comparison p-value Lower Workload -.329 -4.098 <0.001 <0.001 Easy Family -.127 -1.619 .105 .085 Greater Security -.103 -1.284 .199 .178 Social Gratification .063 .795 .427 .519 Change Society .334 4.142 <0.001 <0.001 Public Affairs Satisfaction .291 3.54 <0.001 <0.001 Table 13 : 13 Robustness check for the Common Source Bias: Results of regressions of the dependent variable (first column) on the proportion of students in the Master's program who end up working in the public sector. 
Dependent Variable Coefficient T-stat P-value Competent Teams -.0042 -.985 0.324 Motivation -.0022 -.479 0.632 Flexibility .0038 1.009 0.313 Entrepreneurship -.0048 -1.668 0.095 Higher Wage .0051 1.595 0.111 Protection of unionists .0082 2.521 0.040 Participation of employees .0045 .674 0.522 Controls of law enforcement .0019 .631 0.548 Representation threshold -.0023 -.378 0.717 Limited firing if profits .0007 .077 0.941 Limitation of remuneration .0079 .844 0.426 Facilitation of firing -.0092 -1.509 0.175 Easy to create new firm .0027 1.318 0.229 Facilitation of new firms .0013 .42 0.687 Facilitation of hiring .0026 1.693 0.134 Labor is too costly -.0044 -.767 0.468 East to get funds .0044 .969 0.365 Easy to get counsel .0024 .901 0.398 Delegated Management .0019 .36 0.719 Service Quality -.002 -.298 0.765 Foster Innovation -.0017 -.332 0.740 Reduction of Costs .0007 .143 0.886 Legitimate Expropriation .0117 2.124 0.034 Commitments -.0078 -.683 0.516 Nationalize .026 2.505 0.041 Vital Good .0186 1.667 0.140 Devaluation .01 .695 0.509 National Assembly .022 3.408 0.011 Senate .0251 4.158 0.004 Legal System .0173 1.781 0.118 Police .0121 1.331 0.225 Judges .0202 2.444 0.045 Constitutional Council .0199 1.622 0.149 Administrative Supreme Court .0368 2.61 0.035 Greater Security .0012 .457 0.648 Public Affairs Satisfaction .0132 5.601 0.648 Social Gratification .0084 3.229 0.001 Change Society .0108 5.89 0.000 Lower Workload -.0105 -1.867 0.062 Easy Family .0015 .337 0.736 The coefficients correspond to the estimated difference between students who plan to work in the public sector relatively to those who want to work in the private sector. The econometric specification is either a linear model or an ordered choice model, depending on the nature of the dependent variable. Note: Standard errors are clustered at the graduate school level. Aspiring top civil servants' distrust in the private sectorA. Boring, C. Desrieux and R. Espinosa Online Appendix Questionnaire 2 α ) . Partie 1 : Service public et secteur privé 1. Sur une échelle de zéro à dix, quel est votre niveau de confiance dans les institutions suivantes ? [0 : aucune confiance, 10 : confiance totale] For instance, in France, civil servants have a large influence on the law-making process when they write the content of new laws that are debated in Parliament(Chevallier, 2011). They also have power over how new laws are to be interpreted and implemented. http://www.sciencespo.fr/public/fr/actualites/ena-82-des-nouveaux-admis-viennent-de-sciences-po We included only students who were on the Paris or Le Havre campuses, and who were not in an executive education program. The 1,255 observations include the responses by two students who were post-graduate students. In a robustness check, we further explore the issue of selection through an attrition bias. We find no effect. The analysis includes all French respondents. Some Master's students completed their undergraduate studies at another university before being admitted to Sciences Po. We are interested in the overall difference in the beliefs of the two types of students. We therefore care about the differences between the two groups, regardless of the differences in the composition of the groups. The two groups of students might differ on several dimensions (such as social background, wealth, grades) that might explain their different beliefs regarding the private and public sectors. 
However, these differences of composition across students will result in differences of composition between public and private workers, which is the object of interest. We therefore estimate by univariate analyses the overall difference between prospective civil servants and private sector workers, regardless of the differences of composition. The first axis of the PCA explains 38.7% of the total variations. It is positively correlated with all dimensions, except for the controls on labor law enforcement, as our ideal point estimation assumes. The first axis of the PCA associated with the second set of variables explains 31% of the variations. The sign of the correlations between the first axis and the associated variables corresponds to the signs assumed in the ideal point estimation. In our context, delegated management refers to the decision of a public authority to contract out the management of a public service to a private company for a given period a time. The first dimension is positively correlated with all variables except Legitimate Expropriation, Nationalization, and Necessary Good. The first axis of the PCA is indeed positively correlated with all dimensions except Legitimate Expropriation, Nationalization and Necessary Good. The PCA's first dimension is positively correlated with all answers (National Assembly: ρ = 0.3558; Senate: ρ = 0.3663; Legal System: ρ = 0.4101; Police: ρ = 0.3103; Judges: ρ = 0.3811; Constitutional Council: ρ = 0.3981; Supreme Administrative Court: ρ = 0.4136. The first dimension explains 54.23% of the total variations. More precisely, it provides quantitative indicators on regulation for starting a business, dealing with construction permits, getting electricity, registering property, obtaining loans, protecting minority investors, paying taxes, trading across borders, enforcing contracts, and resolving insolvency. Doing Business also measures features of labor market regulations. For comparison, Germany ranks th and the UK is 7 th . Source: http://www.doingbusiness.org/data/exploreeconomies/france 20 For debates about judges' preferences in French labor courts, see[START_REF] Espinosa | Constitutional Judicial Behavior: Exploring the Determinants of the Decisions of the French Constitutional Council[END_REF];Desrieux and Espinosa (2017, 2018). http://blogs.worldbank.org/ppps/good-decisions-successful-ppps Researchers anticipated the possibility that some questions could be correctly answered by low-skilled students and wrongly answered by high-skilled students.
Sylvia Piotin Aassif Benassarou Frédéric Blanchard Olivier Nocent Eric Bertin email: [email protected] Éric Bertin Abdominal Morphometric Data Acquisition Using Depth Sensors come I. INTRODUCTION Nutrition and eating disorders have become a national public health priority. In 2001, France launched the National Health and Nutrition Programme (PNNS 1 ) which aims to improve the health status of the population by acting on one of its major determinants: nutrition. At the regional level, Champagne-Ardenne particularly suffers from obesity. According to the 2012 study ObEpi [START_REF]ObÉpi -enquête épidémiologique nationale sur le surpoids et l'obésité[END_REF], the Champagne Ardenne region has experienced the highest increase in prevalence of obesity between 1997 and 2012 (+145,9%) and became the second region most affected behind the Nord Pas-de-Calais region, with an obesity rate of 20,9% (the national average is 15%). Within this context, the study of eating behaviors is an important issue for the understanding and prevention of disorders and diseases related to food. Building typologies of patients with eating disorders would help to better understand these diseases, thus allowing their prevention. We plan to develop a new acquisition pipeline in order to identify new objective variables based on morphological parameters like abdominal diameter, body surface area, etc. While a recent report by the French Academy of Medicine on unnecessarily prescribed tests2 highlights the generalized use of heavy and expensive imaging, the novelty of our Figure 1. Presentation of our mobile system approach lies in the use of a consumer electronics device (the Microsoft R Kinect TM sensor was initially dedicated to the Microsoft R Xbox 360 TM game console). This device has the advantage of being inexpensive, lightweight and less intrusive than conventional medical equipments. Even if these devices have been used in eHealth projects, their use has often been limited to the adaptation of successful video games to the medical environment: physical training programs to action against obesity [START_REF] Nitz | Is the Wii Fit TM a newgeneration tool for improving balance, health and well-being? A pilot study[END_REF], rehabilitation programs [START_REF] Huang | Kinerehab: a kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities[END_REF]. Within our project, the device is used as a measurement tool to collect morphological information in order to enrich or to confront the information extracted from surveys filled in by patients. Beyond its cost, this device can also be easily deployed in patient's homes or in medical practitioners offices, allowing monitoring on a regular basis (Figure 1) This paper presents the acquisition and analysis methodology that we have implemented. The results presented were obtained on a sample of seventeen healthy patients without any medical context. The main objective was to establish a proof of concept. After an overview of the uses of Kinect TM like sensors in a medical context, we present our abdominal morphology acquisition system. In the next section, the computation of quantitative indicators from raw data is exposed. Finally, we propose a first statistical analysis before concluding and presenting our future works. II. RELATED WORK The Microsoft R Kinect TM peripheral, as well as its recent competitor ASUS R Xtion TM , are made of an RGB camera, an IR emitter and an IR sensor. 
The latter is able to evaluate a depth map aligned with the video frame. Based on these two sources of data, it is possible to reconstruct people facing the device in 3D [START_REF] Shotton | Real-time human pose recognition in parts from single depth images[END_REF], [START_REF] Weiss | Home 3D body scans from noisy image and range data[END_REF]. the affordable cost of these new peripherals is probably the reason why they are inspiring so many projects. For instance, this technology is growingly involved in healthcare: apart from their use in fall risk assessment in elderly population [START_REF] Stone | Evaluation of an inexpensive depth camera for passive in-home fall risk assessment[END_REF], these cameras are also employed in motor rehabilitation programs [START_REF] Chang | A Kinect-based system for physical rehabilitation: A pilot study for young adults with motor disabilities[END_REF], [START_REF] Da Gama | Poster: Improving motor rehabilitation process through a natural interaction based system using Kinect sensor[END_REF], or for the improvement of workplace ergonomics [START_REF] Dutta | Evaluation of the Kinect TM sensor for 3-D kinematic measurement in the workplace[END_REF]. Not necessarily calling upon motion capture techniques, others use these cameras to quickly collect anthropometric data. Thanks to usual statistical tools such as principal component analysis, it is possible to extrapolate different morphological dimensions from a few measurements [START_REF] Samejima | A body dimensions estimation method of subject from a few measurement items using KINECT[END_REF]. One can also deduce precisely the position of the center of mass of an individual if one combines a Kinect and a Wii Balance Board, popularized by the video game Wii Fit [START_REF] Gonzalez | Estimation of the center of mass with Kinect and Wii balance board[END_REF]. Finally, Velardo and Dugelay have developed an automated system capable of providing nutritional advice depending on the body mass index and basal metabolism rate calculated from parameters either measured or statistically induced from measurements [START_REF] Velardo | What can computer vision tell you about your weight?[END_REF]. III. ACQUISITION ACCURACY Since the popularization of RGB-D devices (RGB + depth), many metrological studies have dealt with the quality of acquisition [START_REF] Gonzalez-Jorge | Metrological evaluation of Microsoft Kinect and Asus Xtion sensors[END_REF]: accuracy (difference between the measured and the real value) ranges from 5 mm to 15 mm at a distance of 1 m, from 5 mm to 25 mm at a distance of 2 m. Even if these lightweight tracking systems do not really compete with more cumbersome ones (such as Vicon's3 ), they can be a low cost alternative solution in many applications where high accuracy (less than 1 mm) is not a matter of concern. We also performed our own experiments. We scanned a wood panel with a 700 mm long diagonal at three different distances (800 mm, 1600 mm and 2400 mm). The values shown in tables I and II represent the mean value of ten consecutive measures. The improved accuracy observed, in comparison with the aforementioned results, can be explained by the use of the mean value of several measurements. According to a framerate of 30 frames per second, a measure can be performed in a third of a second following our method. This is not really an issue since we are measuring static poses of patients. 
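As an illustration of the measurement strategy just described (mean of ten consecutive frames at 30 frames per second), a minimal Python sketch is given below. It is not part of the original prototype: the get_depth_frame callback and the handling of invalid (zero-depth) pixels are assumptions made only for the example.

```python
import numpy as np

def averaged_depth(get_depth_frame, n_frames=10):
    """Average n consecutive depth maps to reduce per-frame sensor noise.

    `get_depth_frame` is a hypothetical callback returning one depth map
    (2D NumPy array, in millimetres); at 30 fps, 10 frames take about 1/3 s,
    which is acceptable since the patients hold a static pose.
    """
    frames = [get_depth_frame().astype(np.float64) for _ in range(n_frames)]
    stack = np.stack(frames)
    # A zero depth value usually means "no measurement" on RGB-D sensors:
    # mask those pixels out so they do not bias the average.
    valid = stack > 0
    counts = valid.sum(axis=0)
    mean = (stack * valid).sum(axis=0) / np.maximum(counts, 1)
    return np.where(counts > 0, mean, 0.0)
```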
In our study, we use the 3D reconstruction and pose detection capabilities of RGB-D devices to measure in real time several morphological characteristics of a patient (size, bust waist and hip measurements, shape of the abdomen, . . . ). These data produced by the acquisition system can be compared to the patient's responses provided during the Stunkard's test: this test is to ask the patient about his perception (often subjective) of his morphology and ask him to lie within a range of silhouettes [START_REF] Stunkard | Use of the Danish Adoption Register for the study of obesity and thinness[END_REF]. With the OpenNI library [START_REF] Openni | OpenNI framework[END_REF] and the NiTE middleware [START_REF] Nite | NiTE middleware[END_REF], we are able to identify the pixels corresponding to each user located in the sensor field. With this information, we reconstruct the visible body surface in 3D space. Since NiTE also allows us to track the skeleton of each individual, we can position planes on the abdomen: the sagittal plane and the transverse plane. By calculating the intersection between the surface and each plane we get a sagittal profile and a transverse profile (Figure 2). We can record these profiles in the patient's medical file to establish a follow-up. These profiles are also stored in an anonymized database to conduct a statistical analysis. V. PARAMETER EXTRACTION The previous step of acquisition provides two profiles from the intersection between the body surface and the sagittal and transverse planes. These profiles, composed of segments joining points from the reconstruction, are relatively noisy. A first step of smoothing using spline interpolation can adjust these geometric data and make them more "understandable" and exploitable (Figure 3). After the first treatment we already have, for each individual, a first visual "signature" of the abdominal morphology (Figure 4). The interest of these visual signatures is to provide a simplified graphical representation to enable a fast and synthetic visualization. On the other hand, these visual descriptions facilitate description and interpretation of clusters when creating typologies by clustering. Finally, they have a major interest in monitoring the evolution of the patient by the doctor. The profiles obtained during the acquisition are also used to extract features and more "conventional" measurements. It is possible, for example, from the smoothed cross-section, to calculate various lengths that are good estimations of the abdominal dimensions of the subject. In this study, we have limited ourselves, for example, to calculate the diameter (d H and d V ), the height (h H and h V ) and the length (l H and l V ) of each profile (transverse and sagittal) (Figure 5). Other calculations of lengths or surfaces are possible [START_REF] Lu | Automated anthropometric data collection using 3d whole body scanners[END_REF], [START_REF] Yu | The 3d scanner for measuring body surface area: a simplified calculation in the chinese adult[END_REF], [START_REF] Lin | Comparison of three-dimensional anthropometric body surface scanning to waist-hip ratio and body mass index in correlation with metabolic risk factors[END_REF]. The choice of these numerical descriptors of the abdominal morphology defines a parameter space in which subjects are represented. In this study, individuals who have lent to the experience are represented in a space of dimension 6 defined by the variables d H , d V , h H , h V , l H et l V (Table III). 
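The parameter extraction step described above (spline smoothing of a noisy profile, then computation of a diameter, a height and a length) can be sketched as follows in Python. This is an illustrative reading only: the exact definitions of d, h and l are given graphically in Figure 5 of the paper, so the chord/maximal-deviation/arc-length choices below are assumptions, not the prototype's actual formulas.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def profile_descriptors(points, smoothing=50.0, n_samples=200):
    """Smooth a noisy 2D profile and derive simple morphometric indicators.

    `points` is an (N, 2) array of profile vertices obtained by intersecting
    the reconstructed body surface with the sagittal or transverse plane.
    Returns (d, h, l): chord length between the profile end points, maximal
    deviation from that chord, and arc length of the smoothed profile.
    """
    pts = np.asarray(points, dtype=float)
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing)  # spline fit
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    curve = np.column_stack([x, y])

    p0, p1 = curve[0], curve[-1]
    chord = p1 - p0
    d = np.linalg.norm(chord)                               # "diameter"
    # Perpendicular distance of every sample to the chord -> profile height.
    normal = np.array([-chord[1], chord[0]]) / max(d, 1e-9)
    h = np.abs((curve - p0) @ normal).max()
    # Arc length of the smoothed profile.
    l = np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()
    return d, h, l
```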
[Table III: per-individual values of the six descriptors d_H, h_H, l_H, d_V, h_V, l_V]

VI. TOWARDS A TYPOLOGY OF ABDOMINAL MORPHOLOGIES

After describing individuals in this parameter space, our goal is to automatically extract groups of abdominal morphologies by clustering. We first carry out a principal component analysis to reduce the dimensionality of the problem and to project subjects into a subspace whose components are uncorrelated. The representation of subjects in the factorial subspace (spanned by the first two principal components) allows us to visualize the similarities between individuals (Figure 6). Clustering is then performed with the k-medoids algorithm. The algorithm is run in the space of the first 3 factors (inspection of the eigenvalues shows that the first 3 principal components explain 94% of the inertia) and it is set to search for 3 clusters (after reading the dendrogram obtained by hierarchical clustering). The statistical description of the obtained clusters is presented in Table IV. The medoids produced by the clustering algorithm provide a representative individual for each cluster (Table V). For a more detailed extraction of representative individuals (or exemplars) we can use the method described in [START_REF] Blanchard | Data representativeness based on fuzzy set theory[END_REF].

VII. CONCLUSION

In this paper, we presented a software prototype able to acquire abdominal morphological parameters using a consumer electronics depth sensor. This prototype is lightweight and inexpensive, and the acquisition is largely insensitive to the capture conditions (this type of sensor is designed to operate in most environments, including private homes). We also proposed an algorithmic solution to analyze the collected data. The results presented in this paper allowed us to validate the principle and the calculation methods of our tool. The aim was not to draw medical conclusions but to establish a proof of concept. Our prototype will now be used in a clinical context dealing with obesity and eating behaviors. The perspectives of this work are multiple. In order to give medical practitioners the ability to make a diagnosis, we started to combine this tool with the web framework that we developed [START_REF] Nocent | Toward an immersion platform for the World Wide Web using 3D displays and tracking devices[END_REF]. It already supports RGB-D cameras and can transmit the skeletons of users to a distant browser. Provided we also stream the acquired profiles and the descriptors extracted from the analysis, the practitioner may remotely have all the information he needs. So far, our system performs its various measurements from a single capture. Using 3D scanning techniques [START_REF] Curless | A volumetric method for building complex models from range images[END_REF], we could capture the entire body surface of a subject, either by moving the camera around, as in KinectFusion [START_REF] Izadi | KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera[END_REF], or by combining several sources simultaneously, as is the case with OmniKinect [START_REF] Kainz | Om-niKinect: real-time dense volumetric data acquisition and applications[END_REF]. Finally, the data analysis should be extended to enable the construction of typologies adapted to the studied diseases, and to a much larger amount of data.

Figure 2. Acquisition of the sagittal and transverse profiles. The individual is highlighted and the two profiles are overprinted on his abdomen.
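A compact sketch of the Section VI pipeline (PCA onto the first three factors, then k-medoids with three clusters) is given below. It is illustrative only: the original study does not specify its implementation, and a small PAM-style k-medoids is written out here rather than assuming any particular library.

```python
import numpy as np

def pca_scores(X, n_components=3):
    """Project the 6-D morphometric descriptors onto the first principal axes."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data gives the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def k_medoids(X, k=3, n_iter=100, seed=0):
    """Very small PAM-style k-medoids: returns (medoid_indices, labels)."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # Medoid = cluster member minimising the sum of distances
            # to the other members.
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(dist[:, medoids], axis=1)
    return medoids, labels

# Usage on the 17 x 6 descriptor matrix (d_H, h_H, l_H, d_V, h_V, l_V):
# scores = pca_scores(descriptors, n_components=3)
# medoid_idx, labels = k_medoids(scores, k=3)
```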
Figure 5. Examples of morphometric indicators calculated from the sagittal and transverse profiles.
Figure 6. Representation, in the factorial design, of individuals and clusters obtained by the k-medoids algorithm.

Table IV. Statistical description of clusters.
param.  cluster  mean    sd     median  min     max
d_H     1        324.76  16.72  323.99  304.31  348.24
d_H     2        275.24  23.53  272.49  253.21  300.02
d_H     3        280.81  25.16  289.61  229.57  308.40
h_H     1        84.39   7.55   82.27   75.24   95.75
h_H     2        56.43   7.32   54.46   50.29   64.53
h_H     3        68.30   14.86  66.71   53.23   100.68
l_H     1        382.20  19.19  383.02  362.60  412.56
l_H     2        301.91  23.29  308.42  276.06  321.26
l_H     3        323.83  18.97  331.72  283.72  344.00
d_V     1        259.31  64.15  268.89  156.98  330.10
d_V     2        222.77  13.79  224.42  208.23  235.66
d_V     3        320.27  37.07  320.30  274.25  384.82
h_V     1        20.91   15.58  15.62   9.29    46.73
h_V     2        7.75    4.89   9.56    2.21    11.49
h_V     3        26.36   18.62  16.55   11.87   64.84
l_V     1        266.88  65.20  273.02  159.15  333.59
l_V     2        226.12  10.57  227.48  214.94  235.95
l_V     3        329.85  38.60  340.40  278.51  386.62

Table V. Medoids: representative individuals of the obtained clusters (individuals whose identifiers are 14, 8 and 16 are respectively the medoids of clusters 1, 2 and 3).
cluster  id  d_H     h_H    l_H     d_V     h_V    l_V
1        14  332.15  82.27  383.46  288.23  23.58  296.57
2        8   272.49  64.53  308.42  208.23  11.49  214.94
3        16  302.74  62.13  331.72  278.60  30.04  289.58

1 http://www.mangerbouger.fr/pnns/
2 http://www.academie-medecine.fr
3 http://www.vicon.com

ACKNOWLEDGEMENTS
The authors would like to warmly thank the volunteers who helped them test and improve their prototype.
15,691
[ "753115", "176865", "173957", "741069" ]
[ "21189", "21189", "21189" ]
01759560
en
[ "info" ]
2024/03/05 22:32:10
2012
https://hal.univ-reims.fr/hal-01759560/file/paper.pdf
Olivier Nocent email: [email protected] Crestic Sic Sylvia Piotin Jaisson Maxime Laurent Grespi Crestic Lucas Sic Toward an immersion platform for the World Wide Web using autostereoscopic displays and tracking devices Keywords: I.3.1 [Computer Graphics]: Hardware Architecture-Three-dimensional displays, I.3.2 [Graphics Systems]: Distributed/network graphics-, I.3.3 [Computer Graphics]: Picture/Image Generation-Display algorithms, I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Virtual reality, J.3 [Life and Medical Sciences]: Medical information systems-, Web graphics, 3D perception, autostereoscopic displays, natural interaction Figure 1: Autostereoscopic technology allows 3D images popping out of the screen while natural interaction provides a seamless manipulation of 3D contents. Introduction A recurrent and key issue for 3D technologies resides in immersion. 3D web technologies try to reach the same goal in order to enhance the user experience. Interaction and depth perception are two factors that significantly improve the feeling of immersion. But these factors rely on dedicated hardware that can not be addressed through JavaScript for security reasons. In this paper, we present an original way to interact with hardware via a web browser, using web protocols by providing an easy-to-use immersion platform for the World Wide Web. This plugin-free solution leveraging the brand new features of HTML5 (WebGL, WebSockets) allows to handle autostereoscopic displays for immersion and different type of tracking devices for natural interaction (Figure 2). Because We-bGL is a low-level API, we decided to develop the WebGLUT (We-bGL Utility Toolkit) API on top of WebGL. WebGLUT enhances WebGL by providing extra features like linear algebra data structures (vectors, matrices, quaternions), triangular meshes, materials, multiview cameras to handle autosteroscopic displays, controllers to address different tracking devices, etc. WebGLUT was created at the same time (and with the same philosophy) as WebGLU [START_REF] Delillo | WebGLU development library for WebGL[END_REF]SpiderGL [Di Benedetto et al. 2010]. Our contribution is written as follows: Section 2 presents related work to 3D content on the web and on 3D displays. Section 3 describes the autostereoscopic display technology by providing equations and algorithms to generate multiple views. Section 4 is dedicated to our network-based tracking system. Finally, we mention very shortly in Section 5 a case study related to medical imaging using our immersion platform. Related work Web browsers have acquired over the past years the ability to efficiently incorporate and deliver different kinds of media. 3D content can be considered as the next evolution to these additions although requirements of 3D graphics in terms of computational power and unifying standard are more restrictive than still images or videos. Several technologies have been developed to achieve this integration. The Virtual Reality Markup Language (VRML) [START_REF] Raggett | Extending WWW to support platform independent virtual reality[END_REF]] replaced afterward by X3D [START_REF] Brutzman | X3D: Extensible 3D Graphics for Web Authors[END_REF] was proposed as a text-based format for specifying 3D scenes in terms of geometry and material properties and for the definition of basic user interaction. Even if the format itself is a standard, the rendering within the web browser usually relies on proprietary plugins. 
But the promising X3DOM [START_REF] Behr | X3DOM: a DOM-based HTML5/X3D integration model[END_REF]] initiative aims to include X3D elements as part of the HTML5 DOM tree. More recently, the WebGL [Khronos Group 2009] API was introduced to provide imperative programming mechanisms to display 3D contents in a more flexible way. As its name suggests, WebGL is the JavaScript analogous to the OpenGL ES 2.0 API for C/C++. It provides capabilities for displaying 3D content within a web browser which was previously the exclusive domain of desktop environments. Leung and Salga [START_REF] Leung | Enabling WebGL[END_REF] emphasize the fact that WebGL gives the chance to not just replicate desktop 3D contents and applications, but rather to exploit other web features to develop richer content and applications. In this context, web browsers could become the default visualization interface [START_REF] Mouton | Collaborative visualization: current systems and future trends[END_REF]]. Even if real-time 3D rendering has become a common feature in many applications, the resulting images are still bidimensional. Nowadays, this limitation can be partly overcome by the use of 3D displays that significantly improve depth perception and the ability to estimate distances between objects. Therefore, the content creation process needs to be reconsidered For computer generated imagery, the rendering system can seamlessly render one or more related views depending on the application [START_REF] Abildgaard | An autostereoscopic 3D display can improve visualization of 3D models from intracranial MR angiography[END_REF][START_REF] Benassarou | Autostereoscopic visualization of 3D time-varying complex objects in volumetric image sequences[END_REF]]. 3D contents are obviously clearer and more usable than 2D images because they doe not involve any inference step. Finally, 3D contents can also address new emerging devices like 3D smartphones [START_REF] Harrold | Autostereoscopic display technology for mobile 3DTV applications[END_REF] and mobile 3DTV, offering new viable platforms for developing 3D web applications. Autostereoscopic technology The term stereoscopy denotes techniques where a separate view is presented to the right and left eye, these separate views inducing a better depth perception. Different solutions exist for the production of these images as well as for their restitution. For image restitution with non time-based techniques, one can use anaglyph and colored filters, polarizing sheets with polarized glasses or autostereoscopic displays [START_REF] Halle | Autostereoscopic displays and computer graphics[END_REF]]. The technology of autostereoscopic displays presents the great advantage to allow multiscopic rendering without the use of glasses. Therefore, the spectator can benefit from a stereoscopic rendering more naturally, and this is especially true for 3D applications in multimedia. Multiple view computation The geometry of a single camera is usually defined by its position, orientation and viewing frustum as shown in Figure 3. But in stereo rendering environments, we need two virtual cameras, one for each left/right eye. And for multiview autostereoscopic displays [Prévoteau et al. 2010], we need up to N virtual cameras. Each virtual camera has a given offset position and its own off-axis asymmetric sheared viewing frustum, the view direction remaining unchanged. The near/far planes are preserved, and a focus plane has to be manually defined where the viewing zones converge. 
The choice of the focus distance will determine if the objects appear either behind or in front of the screen, providing or not a pop-out effect. Given a perspective projection matrix P, the following calculations allow to identify six parameters l, r, b, t, n, f as defined in the OpenGL glFrustum command: l and r are the left and right coordinates of the vertical clipping planes, b and t are the bottom and top coordinates of the horizontal clipping planes, n and f are the distances to the near and far depth clipping planes. First, the distances n and f (refer to Figure 3 for the geometric signification of these terms) are given by Equation 1. n = 1 -k 2k P34 f = nk where k = P33 -1 P33 + 1 (1) In order to compute l, r, b and t, we need to define the half-width wF (respectively wn) of the image at the focus distance F (respectively at the near distance n) from the horizontal field of view α: wF = tan(α/2)F wn = tan(α/2)n (2) where tan(α/2) = P -1 11 according to the definition of the projection matrix P. The viewing frustum shift for the camera j for j ∈ {1, . . . , N } in the near plane, denoted as s j n is given by Equation 3where d is the interocular distance and ws the physical screen width. s j n = dwn ws (j -1) - N -1 2 (3) Finally, l = -wn + s j n r = wn + s j n t = ρwn b = -t (4) where ρ is the viewport aspect ratio. The camera position offset along the horizontal axis is given by s j F , the viewing frustum shift in the focus plane. s j F = dwF ws (j -1) - N -1 2 (5) Autostereoscopic image rendering Thanks to the computations presented in the previous section we are able to produce N separate perspective views from N different virtual cameras. The use of these images depends on the chosen stereoscopic technology. One of the simplest cases relies on quadbuffering, where the images are rendered into left and right buffers independently, the stereo images being then swapped in sync with shutter glasses. Other techniques, which are not time-based, need the different images to be combined in one single image. Let I j for j ∈ {1, . . . , N } be the image generated by the virtual camera j, the color components I f inal c (x, y) for c ∈ {R, G, B} of the pixel (x, y) in the final image are given by: I f inal c (x, y) = I M(x,y,c) c (x, y) c ∈ {R, G, B} (6) where M is a mask function and R, G, B stand for red, green and blue channels. As Lenticular sheet displays consist of long cylindrical lenses that focus on the underlying image plane so that each vertical pixel stripe corresponds to a given viewing zone, the function M does not depend on the color component c and is simply given by Equation 7. I f inal c (x, y) = I x mod N c (x, y) (7) It is worth noticing that Equation 7clearly shows that the horizontal resolution of the restituted image is reduced by a factor 1/N compared to the native resolution m of the display. This resolution loss is one of the main drawbacks of autostereoscopic displays, even if it is only limited to 3m/N while using wavelength-selective filters because each pixel's RGB components correspond to three different view zones [Hübner et al. 2006]. Technically speaking, the WebGL implementation of the autostereoscopic image rendering is a twostep rendering process using deferred shading techniques. Pass #1 (multiple images rendering) renders N low-resolution images by shifting the viewport along the y-axis. The N images are vertically stored in a single texture via a FrameBuffer Object (FBO). Pass #2 (image post-processing) renders a window-aligned quad. 
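A small Python sketch of the per-view frustum computation of equations (2)-(5) is given below, assuming the near and far distances have already been recovered from the projection matrix as in equation (1). Variable names are illustrative; this is not the WebGLUT API, only a restatement of the formulas. The interleaving of equation (7) then amounts to selecting view index x mod N per pixel column in the second rendering pass.

```python
import math

def multiview_frusta(fov_x, aspect, near, far, focus, screen_width,
                     eye_sep, n_views):
    """Per-view asymmetric frustum bounds for an N-view autostereoscopic display.

    All views share the same viewing direction and near/far planes; view j
    receives a horizontal frustum shift s_n in the near plane and a camera
    offset s_F in the focus plane so that the view zones converge there
    (equations (2)-(5) of the text).
    """
    w_f = math.tan(fov_x / 2.0) * focus   # half-width of the image at the focus distance
    w_n = math.tan(fov_x / 2.0) * near    # half-width at the near plane
    views = []
    for j in range(1, n_views + 1):
        k = (j - 1) - (n_views - 1) / 2.0
        s_n = eye_sep * w_n / screen_width * k   # frustum shift in the near plane
        s_f = eye_sep * w_f / screen_width * k   # camera offset along the horizontal axis
        left, right = -w_n + s_n, w_n + s_n
        top = aspect * w_n
        bottom = -top
        views.append({"offset_x": s_f,
                      "frustum": (left, right, bottom, top, near, far)})
    return views
```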
Within the fragment shader, each color component of the output fragment is computed according to Equation 7where the N images are read from an input 2D texture. These two rendering passes are encapsulated in the method shoot() of the MultiViewCamera object. Tracking system Another characteristic of our immersion platform for the World Wide Web resides in the use of tracking devices in order to interact with a 3D scene in a straightforward way. Our aim is to address a large range of tracking devices like mouses, 3D mouses, flysticks or even more recent devices like IR depth sensors (Microsoft R Kinect, ASUS R Xtion Pro). In the same fashion we handle autostereoscopic displays, we propose a plugin-free solution to interact with ad-hoc tracking devices within a web browser by using HTML5 WebSockets [W3C R 2012]. The proprietary ART R DTrack tracking system is an optical tracking system which delivers visual information to a PC in order to be processed. The resulting information, 3D position and orientation of the flystick is then broadcast on every chosen IP address, using the UDP protocol. A PHP server-side script, which can be seen as a WebSocket Server, is running on the web server. The WebSocket server is waiting for UDP datagrams, containing locations and orientations, from the DTrack system. At receipt, the data is parsed and sent via WebSocket to the client using the JSON format. Thanks to this networked architecture, we are able to stream JSON encoded data coming from the tracking system to the web browser. The location and the orientation of the flystick are then used to control the WebGLUT virtual camera. Using the same approach, we have also imagined a more affordable solution to interact with a 3D scene in an even more straightforward way. Indeed, we use recent IR depth sensors like Microsoft R Kinect and ASUS R Xtion Pro to perform Natural Interaction. Just like the ART R DTrack system, our C++ program acts as a UDP server and, at the same time, collects information about the location and the pose of a user facing an IR depth sensor. Thanks to the OpenNI framework [OpenNI TM 2010], we are able to track the user's body and detect characteristic gestures. This information, streamed over the network using our hardware/software architecture can be used to interact with a 3D scene: move the virtual camera, trigger events, etc. These aspects are exposed within the WebGLUT API through the concept of Controller. A Controller can be attached to any type of Camera. This controller is responsible for updating the properties of the camera (position, orientation, field of view, etc.) depending on its state change. At the time of writing, we manage three types of controllers: MouseController, DTrackController and KinectController. 5 Case study: ModJaw R The ModJaw R (Modeling the Human Jaw) project has been developed by our research team and Maxime Jaisson who is preparing a PhD thesis in odontology related to the impact of Information Technology on dentistry practice and teaching. ModJaw R [START_REF] Jaisson | ModJaw R : a kind of magic[END_REF] aims to provide 3D interactive educational materials for dentistry tutors for teaching mandibular kinematics. There exist several similarities with a former project called MAJA (Modeling and Animation of JAw movements) [START_REF] Reiberg | MAJA: Modeling and Animating the Human Jaw[END_REF]]. One main difference between ModJaw R and MAJA consists in the data nature. Indeed, we use real-world data obtained by motion capture and CT scanners. 
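Returning to the tracking system of Section 4, the network bridge described there (tracking data received as UDP datagrams, parsed, then pushed to the browser over a WebSocket as JSON) can be sketched as follows. The original implementation uses a PHP WebSocket server fed by the ART DTrack protocol; the Python sketch below, with its made-up port numbers and simplified ASCII payload ("x y z qx qy qz qw"), only illustrates the architecture.

```python
import asyncio, json
import websockets  # third-party "websockets" package

CLIENTS = set()

async def ws_handler(websocket):
    """Register a browser client; clients only receive tracking updates.
    (Older versions of the websockets library pass a second `path` argument.)"""
    CLIENTS.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        CLIENTS.discard(websocket)

class TrackingUDP(asyncio.DatagramProtocol):
    """Receives tracking datagrams and forwards them to all WebSocket clients.

    The ASCII payload format used here is a placeholder: the real DTrack
    records have their own layout and must be parsed accordingly.
    """
    def datagram_received(self, data, addr):
        fields = data.decode().split()
        msg = json.dumps({"position": [float(v) for v in fields[0:3]],
                          "orientation": [float(v) for v in fields[3:7]]})
        for ws in list(CLIENTS):
            asyncio.create_task(ws.send(msg))

async def main():
    loop = asyncio.get_running_loop()
    await loop.create_datagram_endpoint(TrackingUDP, local_addr=("0.0.0.0", 5000))
    async with websockets.serve(ws_handler, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```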
In this way, students are able to study a wide variety of mandible motions according to specific diseases or malformations. Among the future works, Jörg Reiber wanted to add an internet connection to MAJA. In this way, ModJaw R can be seen as an enhanced up-to-date version of MAJA relying on real-world data sets and cutting edge web technologies exposed by the brand new features of HTML5 (WebGL, WebSockets, etc.). The choice of web technologies to develop this project was mainly dictated by the following constraints: Easy-to-use: users just have to open a web browser to access to the software and its user-friendly graphical interface. As it is fully web-based, ModJaw R also incorporates online documentation related to anatomy, mandibular kinematics, etc. Easy-to-deploy: the framework does not require any install, it can be used from all the computers within the faculty or even from your home computer if you use a HTML5 compatible web browser. Since the software is hosted on a single web server, it is really easy to upgrade it. Conclusion In this paper, we have presented an original solution for providing immersion and natural interaction within a web browser. The main benefits of our contribution rely on the seamless interaction between a web browser and autostereoscopic displays and tracking devices through new HTML5 features like WebGL and WebSockets. This plugin-free framework allows to enhance the user experience by leveraging dedicated hardware via JavaScript. Thanks to its network-based approach, this framework can easily be extended to handle other devices in the same fashion. For instance, we began to explore the possibility to manage haptic devices. As our tracking system is completely plugin-free and fully network-based, it could be seamlessly integrated in web-based collaborative environments allowing users to remotely interact with shared 3D contents displayed in a web browser. Figure 2 : 2 Figure 2: Global structure of our immersion platform for the World Wide Web Figure 3 : 3 Figure 3: Geometry of a single camera (left) and multiple axis-aligned cameras (right). Acknowledgements The authors would like to thank Romain Guillemot, research engineer at CReSTIC SIC, for his expertise in autostereoscopic displays and his precious help for porting the source code of multiview cameras from OpenGL to WebGL.
16,570
[ "173957", "753115", "176865", "177046" ]
[ "21189", "531156", "21189", "21189", "21189" ]
01759578
en
[ "info" ]
2024/03/05 22:32:10
2008
https://hal.univ-reims.fr/hal-01759578/file/4200715.pdf
Antoine Jonquet email: [email protected] Olivier Nocent Yannick Remion The art to keep in touch The "good use" of Lagrange multipliers Keywords: Physically-based animation, constraints, contact simulation Physically-based modeling for computer animation allows to produce more realistic motions in less time without requiring the expertise of skilled animators. But, a computer animation is not only a numerical simulation based on classical mechanics since it follows a precise story-line. One common way to define aims in an animation is to add geometric constraints. There are several methods to manage these constraints within a physically-based framework. In this paper, we present an algorithm for constraints handling based on Lagrange multipliers. After few remarks on the equations of motion that we use, we present a first algorithm proposed by Platt. We show with a simple example that this method is not reliable. Our contribution consists in improving this algorithm to provide an efficient and robust method to handle simultaneous active constraints. Introduction For about two decades, the computer graphics community has investigated the field of physics in order to produce more and more realistic computer animations. In fact, physically-based modeling in animation allows to generate stunning visual effects that would be extremely complex to reproduce manually. On one hand, the addition of physical properties to 3D objects automates the generation of motion just by specifying initial external forces. On the other hand, physicallybased animations are even more realistic than traditional key-framed animations that require the expertise of many skilled animators. As a consequence, the introduction of physically-based methods in modeling and animation significantly reduced the cost and production time of computer generated movies. But, one main drawback of this kind of framework is that it relies on heavy mathematics usually hard to tackle for a computer scientist. A second main disadvantage concerns the input of a physically-based animation: in fact, forces and torques are not really user-friendly since it is really difficult to anticipate a complex motion just by specifying an initial set of external forces. A computer animation is definitely not a numerical simulation because it follows a story-line. According to Demetri Terzopoulos [TPB + 89], an animation is simulation plus control. One way to ensure that the objects fulfill the goals defined by the animator is to use geometric constraints. A constraint is an equality or an inequality that gathers different parameters of the animation like the total time elapsed, the positions or the orientations of the moving objects. In a less general way, mechanical simulations also benefit from the use of constraints in order to prevent interpenetration between physical objects for example. There are several methods to handle constraints, summarized in a survey paper by Baraff [Bar93]. But, since our research work is mostly devoted to mechanical simulation, we decided to focus on the use of Lagrange multipliers to manage geometric constraints. In fact, numerical simulations require robust and reliable techniques to ensure that the constraints are never violated. Moreover, with this method we are also able to measure the amount of strain that is necessary to fulfill a given constraint. In this paper, we present a novel algorithm to manage efficiently several simultaneous active geometric constraints. 
We begin by detailing the physical equations that we use before presenting Platt's algorithm [START_REF] Platt | A Generalization of Dynamic Constraints[END_REF] that is the only algorithm of this type based on Lagrange multipliers. With a simple example, we demonstrate that this algorithm is not suitable for handling simultaneous active constraints. We then introduce our own contribution in order to show how to improve Platt's algorithm to make it reliable, robust and efficient. Lagrange equations of motion Lagragian dynamics consist in an extension of newtonian dynamics allowing to generate a wide range of animations in a more efficient way. In fact, Lagrange equations of motion rely on a set of unknowns, denoted as a state vector x of generalized coordinates, that identifies the real degrees of freedom (DOF) of the mechanical systems involved. Within this formalism, the DOF are not only restricted to rotations or translations. For example, a parameter u ∈ [0, 1] which gives the relative position of a point along a 3D parametric curve can be considered as a single generalized coordinate. Unconstrained motion The evolution of a free mechanical system only subject to a set of external forces is ruled by the Lagrange equations of motion (1). M ẍ = f (1) M is the mass matrix. ẍ is the second time derivative of the state vector. Finally, the vector f corresponds to the sum of external forces. For more details concerning this formalism, we suggest to read [START_REF] Goldstein | Classical Mechanics[END_REF] and [START_REF] Arnold | Mathematical Methods of Classical Mechanics[END_REF]. Constrained motion By convention, an equality constraint will always be defined as in equation ( 2) where E is the set of indices of all the equality constraints. g k (x) = 0 ∀k ∈ E (2) Constraints restrict the set of reachable configurations to a subspace of R n where n is the total number of degrees of freedom. As mentioned before, there exists three main methods to integrate constraints in equation (1) The projection method consists in modifying the state vector x and its first time derivative ẋ in order to fulfill the constraint. This modification can be performed with an iterative method like the Newton-Raphson method [START_REF] Vetterling | The Art to keep in touch: The "Good Use of Lagrange Multipliers[END_REF]. Even if this method is very simple and seems to ensure an instantaneous constraint fulfillment, it is not robust enough: indeed it can not guarantee that the process converges in the case of simultaneous active constraints. The penalty method adds new external forces, acting like virtual springs, in order to minimize the square of the constraint equation, considered as a positive energy function. The main advantage of this method is its compatibility with any dynamic engine since it only relies on forces. But this method leads to inexact constraint fulfillment, allowing interpenetration between the physical objects. In order to diminish this interpenetration, the stiffness of the virtual springs must be significantly increased, making the numerical system unstable. The Lagrange method consists in calculating the exact amount of strain, denoted as the Lagrange multiplier, needed to fulfill the constraint. This method guarantees that constraints are always exactly fulfilled. Since the use of Lagrange multipliers introduces a set of new unknowns, equation (1) must be completed by a set of new equations, increasing the size of the initial linear system to solve. 
But we consider that this method is most suitable for efficiently managing geometric constraints. For all the reasons mentioned above, we chose the Lagrange method to manage our geometric constraints. According to the principle of virtual work, each constraint g k adds a new force perpendicular to the tangent space of the surface g k (x) = 0. The Lagrange multiplier λ k corresponds to the intensity of the force related to the constraint g k . With these new forces, equation ( 1) is modified as follows: M ẍ = f + k∈E λ k ∂g k ∂x (3) We add new equations to our system by calculating the second time derivative of equation ( 2), leading to equation (4). n i=1 ∂g k ∂x i ẍi = - n i,j=1 ∂ 2 g k ∂x i ∂x j ẋi ẋj ∀k ∈ E (4) In order to correct the numerical deviation due to round-off errors, Baumgarte proposed in [START_REF] Baumgarte | Stabilization of constraints and integrals of motion in dynamical systems[END_REF] a constraint stabilization scheme illustrated by equation ( 5). The parameter τ -1 can be seen as the speed of constraint fulfillment. n i=1 ∂g k ∂x i ẍi = - n i,j=1 ∂ 2 g k ∂x i ∂x j ẋi ẋj - 2 τ n i=1 ∂g k ∂x i ẋi - 1 τ 2 g k (5) When we mix equations (1) and (5), we obtain a linear system where the second time derivative of the state vector x and the vector of Lagrange multipliers Λ are the unknowns. M -J T -J 0 ẍ Λ = f -d (6) J is the jacobian matrix of all the geometric constraints and d corresponds to the right term of equation (5). Inequality constraints management By convention, an inequality constraint will always be defined as in equation ( 7) where F is the set of indices of all the inequality constraints. g k (x) ≥ 0 ∀k ∈ F (7) For a given state vector x, we recall the following definitions: • the constraint is said to be violated by x when g k (x) < 0. This means that the state vector x corresponds to a non allowed configuration. • the constraint is said to be satisfied by x when g k (x) ≥ 0. • the constraint is said to be active when g k (x) = 0. In this case, the state vector x belongs to the boundary of the subspace defined by the inequality constraint g k . The management of inequality constraints is more difficult than the management of equality constraints. An inequality constraint must be handled only if it is violated or active. In fact, the algorithm is a little more complicated as we explain in the next sections. That is why we define two subsets within F: F + is the set of indices of all handled inequality constraints and F -is the set of indices of ignored inequality constraints. Finally, we have F = F -∪F + . The jacobian matrix of constraints J of equation ( 6) is built from all the constraints g k where k ∈ E ∪ F + . Previous work Within the computer graphics community, the main published method devoted to inequality constraints management using Lagrange multipliers, known as "Generalized Dynamic Constraints", was proposed by Platt in [START_REF] Platt | A Generalization of Dynamic Constraints[END_REF]. In his paper, he describes how to use Lagrange multipliers to assemble and simulate collisions between numerical models. This method is an extension of the work of Barzel and Barr [START_REF] Barzel | A modeling system based on dynamic constraints[END_REF] that specifies how constraints must be satisfied. Moreover, Platt proposes a method to update F + (the set of handled inequality constraints) during the animation. 
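Before turning to Platt's algorithm, the saddle-point system (6) can be assembled and solved directly, as in the following minimal Python sketch. It uses dense linear algebra only; a production dynamics engine would exploit the sparsity of M and J.

```python
import numpy as np

def solve_constrained_accelerations(M, J, f, d):
    """Solve equation (6): [[M, -J^T], [-J, 0]] [xdd; lam] = [f; -d].

    M : (n, n) generalized mass matrix
    J : (k, n) Jacobian of the handled constraints
    f : (n,)  external generalized forces
    d : (k,)  right-hand side coming from the Baumgarte-stabilised equation (5)
    Returns the constrained accelerations xdd and the Lagrange multipliers lam.
    """
    n, k = M.shape[0], J.shape[0]
    A = np.zeros((n + k, n + k))
    A[:n, :n] = M
    A[:n, n:] = -J.T
    A[n:, :n] = -J
    b = np.concatenate([f, -d])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]
```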
This algorithm can be compared to classical active set methods [START_REF] Björck | Numerical Methods for Least Squares Problems[END_REF][START_REF] Nocedal | Numerical Optimization[END_REF]. We do not focus on collision detection that is a problem by itself. We are aware that this difficult problem can be solved in many ways, we encourage the reader to refer to the survey paper by Teschner et al. [TKZ + 05]. During the collision detection stage, we assume that the dynamic engine may rewind time until the first constraint activation is detected. This assumption can produce an important computational overhead that can restrict our method to off-line animations production depending on the complexity of the scene simulated. In any case, this stage ensures that constraints urn:nbn:de:0009-6-12767, ISSN 1860-2037 are never violated. But, it is possible that several constraints are activated simultaneously. The main topic of this paper is to provide a reliable algorithm to handle these multiple active constraints in an efficient way. At the beginning of the animation, Platt populates the set F + with all the active constraints. Algorithm 1-Platt's algorithm Solve equation (6) to get ẍ and Λ 1 Update x and ẋ (numerical integration) 2 for each k ∈ F do 3 if k ∈ F + then 4 if λ k ≤ 0 then 5 k is moved to F - 6 else 7 if g k (x) < 0 then 8 g k is moved to F + 9 For each time step, according to algorithm 1, we solve equation ( 6) and update the state vector in order to retrieve new positions and velocities at the end of the current time step. We then check the status of each inequality constraint. If a constraint g k is active, it is still handled until its Lagrange multiplier is negative or null, that is to say that the Lagrange multiplier corresponds to a force that prevents from deactivation. According to the new values of the state vector x, if the previously inactive constraint g k is now violated (g k (x) < 0), the constraint must be added to F + in order to prevent the system to enter in such a configuration. Figure 1: A simple example with two simultaneous active constraints Even if this algorithm seems to give a reliable solution for inequality constraints handling, some prob-lems remain. We set up a simple scene as in figure 1 to illustrate the insufficiencies of Platt's method. A particle of mass m is constrained to slide on a 2D plane. It starts from an acute-angle corner modeled by two linear inequality constraints g 1 (x) ≥ 0 and g 2 (x) ≥ 0 where g 1 (x) = x -y and g 2 (x) = y. Finally, this particle is subject to a single external force f = (2, -1). In this particular case, the state vector x is composed of the 2D coordinates (x, y) of the particle. 
According to equation (1), the generalized mass matrix for this system is defined by: M = m 0 0 m (8) As the geometric constraints g 1 (x) and g 2 (x) are linear, their first and second time derivative do not produce any deviation term defined in equation ( 5): d = 0 0 (9) According to the initial value of the state vector x = (0, 0), the two constraints g 1 (x) and g 2 (x) are active, so their indices are inserted in F + and J, the jacobian matrix of constraints, is defined as follows: J = 1 -1 0 1 (10) From equations (6) (8) (9) and (10), we obtain a linear system whose unknowns are the second time derivative of the state vector x and the two Lagrange multipliers λ 1 and λ 2 associated with g 1 and g 2 :     m 0 -1 0 0 m 1 -1 -1 1 0 0 0 -1 0 0         ẍ ÿ λ 1 λ 2     =     2 -1 0 0     (11) The solutions are ẍ = (0, 0) and Λ = (-2, -1). The particle does not move during this time step because ẍ and ÿ are null. But, since λ 1 and λ 2 are both negative, their corresponding constraints are moved to F -. This means that, for the next time step, the system will be free of any constraints. As the force remains constant, the next value of ẍ will be equal to (2m -1 , -m -1 ). These values will lead to an illegal position of the particle, under the line y = 0. These computations are illustrated by the figure 2. The amount of violation of the constraint g 2 (x) = y mainly depends on the ratio between the mass m of the particle and the intensity of the external force f . urn:nbn:de:0009-6-12767, ISSN 1860-2037 Section 5 of this paper presents the different results and comparisons. Our contribution 4.1 A first approach The problem of Platt's method relies on the fact that it keeps some inequality constraints in F + that should be ignored. In fact, the condition g k (x) < 0 used to populate F + with inequality constraints is not well suited and an alternative approach is proposed. A solution would be to replace the condition g k (x) < 0 by a violation tendency condition expressed as J k ẍ < d k . An active constraint that does not fulfill the violation tendency condition will be satisfied but inactive during the next time step and does not have to be handled. At the beginning of the animation, we solve equation (1) to get ẍ and we then populate the set F + with the active constraints that fulfill the violation tendency condition. It is clear that we handle less constraints than Platt because our criteria is more restrictive. We briefly verify that this algorithm gives a correct solution to our example illustrated in figure 1. According to equation (1), ẍ = (2m -1 , -m -1 ). The two constraints g 1 and g 2 are active because x = (0, 0) but only g 2 fulfills the violation tendency condition as mentioned in equation (12). J 1 ẍ = 3m -1 ⇒ 1 ∈ F - J 2 ẍ = -m -1 ⇒ 2 ∈ F + (12) In this special case, equation ( 6) becomes: Algorithm 2-Platt's improved algorithm Solve equation ( 6) to get ẍ and Λ 1 Update x and ẋ (numerical integration) 2 for each k ∈ F do 3 if k ∈ F + then 4 if λ k ≤ 0 then 5 k is moved to F - 6 else 7 if J k ẍ < d k then 8 k is moved to F + 9   m 0 0 0 m -1 0 -1 0     ẍ ÿ λ 2   =   2 -1 0   ( 13 ) The solutions of the linear system (13) are ẍ = (2m -1 , 0) and λ 2 = 1. Finally, the particle will slide along the x-axis without crossing the line y = 0 because the constraint g 1 that was not handled did not introduce a false response. 
This new algorithm seems to manage multiple inequality constraints in a good way, but we could highlight a problem with this method by using the same example illustrated in figure 1 with a new external force f = (-1, -2). At the beginning, since x = (0, 0), the constraints g 1 and g 2 are active. From equation (1), we obtain that ẍ = (-m -1 , -2m -1 ), and from equation ( 14) that only the constraint g 2 is handled. According to equation ( 14), we build the linear system (15). J 1 ẍ = m -1 ⇒ 1 ∈ F - J 2 ẍ = -2m -1 ⇒ 2 ∈ F + (14) (a) (b) (c)   m 0 0 0 m -1 0 -1 0     ẍ ÿ λ 2   =   -1 -2 0   (15) The solutions are ẍ = (-m -1 , 0) and λ 2 = 2. After the update of x and ẋ, the particle slides through the plane defined by the constraint g 1 and reaches an illegal state. This is due to the fact that the Lagrange multiplier λ 2 pushes the system in an illegal state according to the constraint g 1 , which was not previously inserted in equation (6) as it did not satisfy the violation tendency criterion. These computations are again illustrated by the figure 3. The "right" algorithm The use of the violation tendency condition J k ẍ < d k improves simultaneous active constraints management, since only the appropriate inequality constraints are handled by equation ( 6). But we have seen, from the second example, that it is not sufficient to produce a consistent configuration. In fact, the constraints from F + that fulfill the violation tendency condition will produce a vector Λ of Lagrange multipliers that prevent the system from being in an illegal configuration according to these handled constraints. In the meantime, the constrained accelerations ẍ of the system could lead to an illegal configuration according to some constraints in F -. The only way to deal with this problem is to use the newly computed constrained accelerations to test if the active inequality constraints g k (where k ∈ F -) fulfill the violation tendency condition and have to be handled. We then need to introduce an iterative process that computes the accelerations and checks if a previously ignored constrained must be handled or not according to the violation tendency condition evaluated with the newly computed constrained accelerations. This process is repeated until the sytem reaches the appropriate state. We propose a simple and efficient solution to the inequality constraints handling problem. At the beginning of each time step, all active inequality constraints g k are detected, and F + is emptied. We then begin an iterative process that runs until there is no new insertion in F + . The constrained accelerations ẍ are computed from equation ( 6) and the violation tendency condition J k ẍ < d k is tested on each active inequality constraint. For any inequality constraint g k that fulfills the condition, we insert its index k in F + and start another iterative step. In a recent communication [START_REF] Raghupathi | QP-Collide: A New Approach to Collision Treatment[END_REF], Raghupathi presented a method also based on Lagrange multipliers. For realtime considerations, they do not allow the dynamic engine to rewind time to get back to the first constraint activation. They have to manage constraints at the end of the time step, trying to find the right accelerations to ensure constraints fulfillement. They also confess that this process is not guaranteed to converge for a given situation. 
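The iterative procedure just described can be sketched as follows, reusing solve_constrained_accelerations from the previous sketch. Data layout and the small tolerance are illustrative. The commented usage lines replay the corner example of Figure 1 with f = (-1, -2): the loop first handles g2, then detects from the re-solved accelerations that g1 must also be handled, and ends with the expected zero acceleration.

```python
import numpy as np

def handle_active_constraints(M, f, J_active, d_active):
    """Iteratively build the handled set F+ as in the algorithm above.

    J_active, d_active describe the *active* inequality constraints detected
    at the current time step.  Starting from an empty handled set, solve for
    the accelerations, add every active constraint whose violation tendency
    J_k @ xdd < d_k holds, and repeat until no new constraint is inserted.
    """
    handled = []
    while True:
        if handled:
            xdd, _ = solve_constrained_accelerations(
                M, J_active[handled], f, d_active[handled])
        else:
            xdd = np.linalg.solve(M, f)   # equation (6) with F+ empty
        new = [k for k in range(len(J_active))
               if k not in handled and J_active[k] @ xdd < d_active[k] - 1e-12]
        if not new:
            return xdd, handled
        handled += new

# Corner example of Figure 1 with f = (-1, -2):
# M = 1.0 * np.eye(2)
# xdd, handled = handle_active_constraints(
#     M, np.array([-1.0, -2.0]),
#     J_active=np.array([[1.0, -1.0], [0.0, 1.0]]),   # gradients of g1, g2
#     d_active=np.zeros(2))
# -> xdd == (0, 0) and both constraints end up handled.
```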
Results and Comparisons We will now compare the results obtained with Platt's algorithm and our method, using the example illustrated in figure 1. Figure 4 and 5 illustrate a compari-son of the positions and accelerations along y-axis of a particle of mass m = 2 and m = 3. We recall that the inequality constraint g 2 forbids negative values for y and that the constant force f applied to the particle is equal to (2, -1). As shown on figure 4, Platt's algorithm holds the particle in the corner at the first time step, and releases it at the next time step. As a consequence, the particle evolves in an illegal state during the following steps. With a mass m = 2, the error related to the position is less than 10 -5 with an oscillating acceleration (right column). But if we set the mass m to 3, as shown in the figure 5, errors are much more important, and the particle crosses the line y = 0 modeled by the constraint g 2 . As illustated, our algorithm keeps the particle along the x-axis within a controlled numerical error value, that is less than 10 -8 in these examples. To illustrate multiple contact constraints, we have set a billiard scene composed of 10 fixed balls placed in a corner and a moving ball that slides towards them. For each ball, we define two inequality constraints according to the corner and one inequality constraint for each pair of balls based on their in-between distance. This example is finally composed of 11 balls and 77 inequality constraints (figure 6). It is rather difficult to compare the computation times of Platt's algorithm and ours since the simulations made of simultaneous active constraints are not well handled by Platt's algorithm and produce corrupted numerical values that can lead to infinite loops. But it is quite clear that in the worst case, our method may solve n linear systems of increasing size where n is the total number of inequality constraints. The complexity of our solution is then higher than Platt's algorithm. But we recall that our main contribution is not to speed up an existing method but to propose a reliable algorithm mainly dedicated to off-line simulations. Conclusion In this paper, we presented a novel algorithm to manage simultaneous active inequality constraints. Among all the existing methods to handle constraints within a physically-based animation, we focused on the Lagrange method which provides a reliable way to ensure that constraints are always exactly fulfilled. But, in the special case of several active inequality constraints, we have to take care on how to handle these simultaneous constraints. Platt proposed an algorithm based on Lagrange multipliers but we showed that this method is unable to solve even simple examples. We then explained how to improve this algorithm in order to propose a new reliable and efficient method for inequality constraints handling. Beyond the example illustrated in figure 1, we produced a short movie simulating a billiard game. Some snapshots are gathered in figure 6. 
Figure 2: (a) Since the two constraints are active, they are handled by Platt's algorithm. (b) The related Lagrange multipliers are negative, so the constraints are then ignored. (c) The new unconstrained acceleration leads to an illegal position.
Algorithm 3 - The "right" algorithm (listing referenced in Section 4.2 above).
Figure 3: (a) The two constraints are handled since they are active. (b) According to the violation tendency condition, only the constraint g2 is still handled. (c) The newly computed constrained acceleration leads to an illegal position.
Figure 4: Comparison of Platt's algorithm and our method using the example illustrated in figure 1 with a mass m = 2. The numerical values correspond respectively to position and acceleration along the y-axis.
23,817
[ "1030327", "173957", "177078" ]
[ "21189", "21189", "21189" ]
01759644
en
[ "math" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01759644/file/stcic.pdf
Michel Granger email: [email protected] Mathias Schulze email: [email protected] DEFORMING MONOMIAL SPACE CURVES INTO SET-THEORETIC COMPLETE INTERSECTION SINGULARITIES Keywords: 2010 Mathematics Subject Classification. Primary 32S30; Secondary 14H50, 20M25 Set-theoretic complete intersection, space curve, singularity, deformation, lattice ideal, determinantal variety We deform monomial space curves in order to construct examples of set-theoretical complete intersection space curve singularities. As a by-product we describe an inverse to Herzog's construction of minimal generators of non-complete intersection numerical semigroups with three generators. Introduction It is a classical problem in algebraic geometry to determine the minimal number of equations that define a variety. A lower bound for this number is the codimension and it is reached in case of set-theoretic complete intersections. Let I be an ideal in a polynomial ring or a regular analytic algebra over a field K. Then I is called a set-theoretic complete intersection if √ I = √ I for some ideal I admitting height of I many generators. The subscheme or analytic subgerm X defined by I is also called a set-theoretic complete intersection in this case. It is hard to determine whether a given X is a set-theoretic complete intersection. We address this problem in the case I ∈ Spec K{x, y, z} of irreducible analytic space curve singularities X over an algebraically closed (complete non-discretely valued) field K. Cowsik and Nori (see [START_REF] Cowsik | Affine curves in characteristic p are set theoretic complete intersections[END_REF]) showed that over a perfect field K of positive characteristic any algebroid curve and, if K is infinite, any affine curve is a set-theoretic complete intersection. To our knowledge there is no example of an algebroid curve that is not a set-theoretic complete intersection. Over an algebraically closed field K of characteristic zero, Moh (see [START_REF] Moh | A result on the set-theoretic complete intersection problem[END_REF]) showed that an irreducible algebroid curve K[[ξ, η, ζ]] ⊂ K[[t] ] is a set-theoretic complete intersection if the valuations , m, n = υ(ξ), υ(η), υ(ζ) satisfy (0.1) gcd( , m) = 1, < m, ( -2)m < n. We deform monomial space curves in order to find new examples of set-theoretic complete intersection space curve singularities. Our main result in Proposition 3.2 gives sufficient numerical conditions for the deformation to preserve both the value semigroup and the set-theoretic complete intersection property. As a consequence we obtain Corollary 0.1. Let C be the irreducible curve germ defined by O C = K t , t m + t p , t n + t q ⊂ K{t} where gcd( , m) = 1, p > m, q > n and there are a, b ≥ 2 such that = b + 2, m = 2a + 1, n = ab + b + 1. Let γ be the conductor of the semigroup Γ = , m, n and set d 1 = (a + 1)(b + 2), δ = min {p -m, q -n}. In the setup of Corollary 0.1 Moh's third condition in (0.1) becomes ab < 1 and is trivially false. Corollary 0.1 thus yields an infinite list of new examples of non-monomial set-theoretic complete intersection curve germs. Let us explain our approach and its context in more detail. Let Γ be a numerical semigroup. Delorme (see [START_REF] Delorme | Sous-monoïdes d'intersection complète de N[END_REF]) characterized the complete intersection property of Γ by a recursive condition. 
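To make the numerical hypotheses of Corollary 0.1 concrete, here is an illustration added for this edition (not an example from the paper), taking a = b = 2:

```latex
% Illustration for a = b = 2 (added, not from the paper):
\[
\ell = b+2 = 4,\qquad m = 2a+1 = 5,\qquad n = ab+b+1 = 7,
\]
% so \Gamma = \langle 4,5,7\rangle with \gcd(\ell,m)=1 and \ell<m, while Moh's third
% condition fails: (\ell-2)m = 10 > 7 = n (for this family (\ell-2)m<n is equivalent
% to ab<1).  Here d_1=(a+1)(b+2)=12, the conductor of \Gamma is \gamma=7, and
% \delta=\min\{p-5,\,q-7\} for the perturbed parametrization
% \mathcal{O}_C = \mathbb{K}\{t^{4},\,t^{5}+t^{p},\,t^{7}+t^{q}\}, \quad p>5,\ q>7.
```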
The complete intersection property holds equivalently for Γ and its associated monomial curve Spec(K[Γ]) (see [START_REF] Herzog | Generators and relations of abelian semigroups and semigroup rings[END_REF]Cor. 1.13]) and is preserved under flat deformations. For this reason we deform only non-complete intersection Γ. A curve singularity inherits the complete intersection property from its value semigroup since it is a flat deformation of the corresponding monomial curve (see Proposition 2.3). The converse fails as shown by a counter-example of Herzog and Kunz (see [START_REF] Herzog | Die Wertehalbgruppe eines lokalen Rings der Dimension 1[END_REF]). In case Γ = , m, n , Herzog (see [START_REF] Herzog | Generators and relations of abelian semigroups and semigroup rings[END_REF]) described minimal relations of the generators , m, n. There are two cases (H1) and (H2) (see §1) with 3 and 2 minimal relations respectively. In the non-complete intersection case (H2) we describe an inverse to Herzog's construction (see Proposition 1.4). Bresinsky (see [Bre79b]) showed (for arbitrary K) by an explicit calculation based on Herzog's case (H2) that any monomial space curve is a complete intersection. Our results are obtained by lifting his equations to a (flat) deformation with constant value semigroup. In section §2 we construct such deformations (see Proposition 2.3) following an approach using Rees algebras described by Teissier (see [Zar06, Appendix, Ch. I, §1]). In §3 we prove Proposition 3.2 by lifting Bresinsky's equations under the given numerical conditions. In §4 we derive Corollary 0.1 and give some explicit examples (see Example 4.2). It is worth mentioning that Bresinsky (see [Bre79b]) showed (for arbitrary K) that all monomial Gorenstein curves in 4-space are settheoretic complete intersections. Ideals of monomial space curves Let , m, n ∈ N generate a semigroup Γ = , m, n ⊂ N. d = gcd( , m). We assume that Γ is numerical, that is, gcd( , m, n) = 1. Let K be a field and consider the map ϕ : K[x, y, z] → K[t], (x, y, z) → (t , t m , t n ) whose image K[Γ] = K[t , t m , t n ] is the semigroup ring of Γ. Pick a, b, c ∈ N minimal such that a = b 1 m + c 2 n, bm = a 2 + c 1 n, cn = a 1 + b 2 m for some a 1 , a 2 , b 2 , b 2 , c 1 , c 2 ∈ N. / ∈ {a 1 , a 2 , b 1 , b 2 , c 1 , c 2 }. Then (1.1) a = a 1 + a 2 , b = b 1 + b 2 , c = c 1 + c 2 and the unique minimal relations of , m, n read a -b 1 m -c 2 n = 0, (1.2) -a 2 + bm -c 1 n = 0, (1.3) -a 1 -b 2 m + cn = 0. (1.4) Their coefficients form the rows of the matrix (1.5)   a -b 1 -c 2 -a 2 b -c 1 -a 1 -b 2 c   . Accordingly the ideal I = f 1 , f 2 , f 3 of maximal minors (1.6) f 1 = x a -y b 1 z c 2 , f 2 = y b -x a 2 z c 1 , f 3 = x a 1 y b 2 -z c of the matrix (1.7) M 0 = z c 1 x a 1 y b 1 y b 2 z c 2 x a 2 . equals ker ϕ, and the rows of this matrix generate the module of relations between f 1 , f 2 , f 3 . Here K[Γ] is not a complete intersection. (H2) 0 ∈ {a 1 , a 2 , b 1 , b 2 , c 1 , c 2 }. One of the relations (a, -b, 0), (a, 0, -c), or (0, b, -c) is minimal relation of , m, n and, up to a permutation of the variables, the minimal relations are a = bm, (1.8) a 1 + b 2 m = cn. (1.9) Their coefficients form the rows of the matrix (1.10) a -b 0 -a 1 -b 2 c . It is unique up to adding multiples of the first row to the second. Overall there are 3 cases and an overlap case described equivalently by 3 matrices (1.11) a -b 0 a 0 c , a -b 0 0 -b c , a 0 -c 0 b -c . Here K[Γ] is a complete intersection. 
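As a concrete illustration of case (H1) and of the matrix (1.7) — an example added here, not taken from the paper, writing ℓ for the first generator — consider Γ = ⟨3, 4, 5⟩:

```latex
% Illustration (not from the paper): \Gamma = \langle 3,4,5\rangle, \ell=3, m=4, n=5.
% The minimal relations are 3\ell = m+n, 2m = \ell+n, 2n = 2\ell+m, so a=3, b=2, c=2
% and (a_1,a_2,b_1,b_2,c_1,c_2) = (2,1,1,1,1,1): no exponent vanishes, case (H1).
\[
M_0 = \begin{pmatrix} z^{c_1} & x^{a_1} & y^{b_1}\\ y^{b_2} & z^{c_2} & x^{a_2}\end{pmatrix}
    = \begin{pmatrix} z & x^{2} & y\\ y & z & x\end{pmatrix},
\qquad
f_1 = x^{3}-yz,\quad f_2 = y^{2}-xz,\quad f_3 = x^{2}y-z^{2},
\]
% and each f_i indeed lies in the kernel of \varphi\colon (x,y,z)\mapsto(t^{3},t^{4},t^{5}).
```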
In the following we describe the image of Herzog's construction and give a left inverse: Proof. The first statement holds due to minimality. By Buchberger's criterion the generators 1.6 form a Gröbner basis with respect to the reverse lexicographical ordering on x, y, z. Let g denote a normal form of g = x ˜ -z ñ with respect to 1.6. Then g ∈ I if and only if g = 0. By (1.1) reductions by f 2 can be avoided in the calculation of g. If r 2 and r 1 many reductions by f 1 and f 3 respectively are applied then Proof. (H1') Given a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ∈ N \ {0}, define a, b, c by (1.1) and set = b 1 c 1 + b 1 c 2 + b 2 c 2 = b 1 c + b 2 c 2 = b 1 c 1 + bc 2 , (1.12) m = a 1 c 1 + a 2 c 1 + a 2 c 2 = ac 1 + a 2 c 2 = a 1 c 1 + a 2 c, (1.13) n = a 1 b 1 + a 1 b 2 + a 2 b 2 = a 1 b + a 2 b 2 = a 1 b 1 + ab 2 , (1. g = x ñ-a 1 r 1 -ar 2 y b 1 r 2 -r 1 b 2 z r 1 c+r 2 c 2 -z ˜ . and g = 0 is equivalent to ˜ = r 1 c + r 2 c 2 , b 1 r 2 = r 1 b 2 , ñ = a 1 r 1 + ar 2 . Then r i = b i gcd(b 1 ,b 2 ) for i = 1, (a) Consider ñ, ˜ ∈ N as in Lemma 1.2. Then x ñ -z ˜ ∈ I = ker ϕ means that (t ) ñ = (t n ) ˜ and hence ñ = ˜ n. So the pair ( , n) is pro- portional to ( ˜ , ñ) which in turn is propotional to ( , n ) by Lemma 1.2. Then the two triples ( , m, n) and ( , m , n ) are proportional by symmetry. Since gcd( , m, n) = 1 by hypothesis ( , m , n ) = q •( , m, n) for some q ∈ N. By Lemma 1.2 q divides gcd(b 1 , b 2 ) and by symmetry also gcd(a 1 , a 2 ) and gcd(c 1 , c 2 ). By minimality of the relations (1.2)-(1.4) gcd(a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ) = ( , m , n ) is in the corresponding subcase of (H2), gcd(a, b) = 1, (1.18) ∀q ∈ ∩[-b 2 /b, a 1 /a] ∩ N : gcd(-a 1 + qa, -b 2 -qb, c) = 1. (1.19) In this case, ( , m, n) = ( , m , n ). Proof. (a) By Lemma 1.3.(a) e = 1 is a necessary condition. Conversely let e = 1. By definition (1.5) is a matrix of relations of ( , m , n ). Assume that ( , m , n ) is in case (H2). By symmetry we may assume that ( , m , n ) admits a matrix of minimal relations In particular c ≥ d . Then b 1 ≥ b contradicts (1.12) since = b 1 c + b 2 c 2 ≥ b c + b 2 c 2 > b c ≥ b d = . We may thus assume that b 1 < b . The difference of first rows of (1.20) and (1.5) is then a relation a -a b 1 -b c 2 of ( , m , n ) with a -a < 0, b 1 -b < 0 and c 2 > 0. Then c 2 ≥ c ≥ d by choice of c . This contradicts (1.12) since = b 1 c 1 + bc 2 ≥ b 1 c 1 + b d > b d = . We may thus assume that ( , m , n ) is in case (H1) with a matrix of unique minimal relations (1.21)   a -b 1 -c 2 -a 2 b -c 1 -a 1 -b 2 c   of type (1.5) where a = a 1 + a 2 , b = b 1 + b 2 , c = c 1 + c 2 . as in (1.1). Then (a, b, c) ≥ (a , b , c ) by choice of the latter and = b 1 c + b 2 c 2 = b 1 c 1 + b c 2 by Lemma 1.3.(a). If (a i , b i , c i ) ≥ (a i , b i , c i ) for i = 1, 2, then = b 1 c + b 2 c 2 ≥ b 1 c + b 2 c 2 = implies c = c and hence (a, b, c) = (a , b , c ) by symmetry. By unique- ness of (1.21) then (a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ) = (a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ) and hence the claim. By symmetry it remains to exclude the case c 2 > c 2 . The difference of first rows of (1.21) and (1.5) is then a relation a -a b 1 -b 1 c 2 -c 2 of ( , m , n ) with a -a ≤ 0, c 2 -c 2 < 0 and hence b 1 -b 1 ≥ b by choice of the latter. This leads to the contradiction = b 2 c 2 + b 1 c > b 1 c ≥ b c + b 1 c > b 2 c 2 + b 1 c = . = d = b , a = m d = a . Writing the second row of (1.10) as a linear combination of (1.20) yields -a 1 + qa -b 2 -qb c = p -a 1 -b 2 c . 
with p ∈ N and q ∩ [-b 2 /b, a 1 /a] ∩ N and hence p = 1 by (1.19). The claim follows. The following examples show some issues that prevent us from formulating stronger statement in Proposition 1.4.(b). Example 1.5. (a) Take (a, -b, 0) = (3, -2, 0) and (-a 1 , -b 2 , c) = (-1, -4, 4). Then ( , m , n ) = (4, 6, 7) which is in case (H2). The second minimal rela- tion is (-2, -1, 2) = 1 2 ((-a 1 , -b 2 , c) -(a, -b, 0)). The same ( , m , n ) is obtained from (a, 0, -c) = (7, 0, -4) and (-a 2 , b, -c 1 ) = (-1, 3, -2). This latter satisfies (1.18) and (1.19) but (a, 0, -c) is not minimal. (b) Take (a, -b, 0) = (4, -3, 0) and (-a 1 , -b 2 , c) = (-2, -1, 2). Then ( , m , n ) = (3, 4, 5) but (a, -b, 0) is not a minimal relation. In fact the corresponding complete intersection K[Γ] defined by the ideal x 3 -y 4 , z 2 -x 2 y is the union of two branches x = t 3 , y = t 4 , z = ±t 5 . Deformation with constant semigroup Let O = (O, m) be a local K-algebra with O/m ∼ = K. Let F • = {F i | i ∈ Z} A = i∈Z F i s -i ⊂ O[s ±1 ]. It is a finite type graded O[s]-algebra and flat (torsion free) K[s]-algebra with retraction A A/A ∩ m[s ±1 ] ∼ = K[s]. For u ∈ O * there are isomorphisms (2.1) A/(s -u)A ∼ = O, A/sA ∼ = gr F O. Geometrically A defines a flat morphism with section Spec(A) π / / A 1 K ι i i with fibers over K-valued points π -1 (x) ∼ = Spec(O), ι(x) = m, 0 = x ∈ A 1 K , π -1 (0) ∼ = Spec(gr F O), ι(0) = gr F m. F • = m • W O W , F • = F •,w = m • = υ -1 [•, ∞] O. Setting t = t /s and identifying K ∼ = O W /m W this yields a finite extension of finite type graded O W -and flat (torsion free) K[s]-algebras (2.2) A = i∈Z (F i ∩ O W )s -i ⊂ i∈Z F i s -i = O W [s, t] = B ⊂ O W [s ±1 ] with retraction defined by K[s] ∼ = B/(B <0 + Bm W ). The stalk at w is A = A w = i∈Z (F i ∩ O)s -i ⊂ i∈Z F i s -i = O[s, t] = B ⊂ O[s ±1 ]. At w = w ∈ W the filtration F w is trivial and the stalk becomes A w = O W,w [s ± 1]. The graded sheaves gr F O W ⊂ gr F O W are thus supported at w and the isomorphism gr F (O W ) w = gr F O ∼ = K[t ] ∼ = K[N] identifies (2.3) (gr F O W ) w = gr F O ∼ = K[Γ ], Γ = υ(O \ {0}) with the semigroup ring K[Γ ] of O, The analytic spectrum Spec an W (-) → W applied to finite type O Walgebras represents the functor T → Hom O T (-T , O T ) from K-analytic spaces over W to sets (see [START_REF] Henri | Familles d'espaces complexes et fondements de la géométrie analytique[END_REF]Exp. 19]). Note that Spec an W (K[s]) = Spec an {w} (K[s]) = L is the K-analytic line. The normalization of W is ν : W = Spec an W (O W ) → W and B = ν * B where B = O W [s, t]. Applying Spec an W to (2.2) yields a diagram of K-analytic spaces (see [Zar06, Appendix]) (2.4) X = Spec an W (A) π & & Spec an W (B) = Y ρ o o L ι 8 8 where π is flat with π • ρ • ι = id and π -1 (x) ∼ = Spec an W (O W ) = W, ι(x) = w, 0 = x ∈ L, π -1 (0) ∼ = Spec an W (gr F O W ), ι(0) ↔ gr F m W . Remark 2.1. Teissier defines X as the analytic spectrum of A over W × L (see [Zar06, Appendix, Ch. I, §1]). This requires to interpret the O W -algebra A as an O W ×L -algebra. Remark 2.2. In order to describe (2.4) in explicit terms, embed L ⊃ W ν / / W ⊂ L n with coordinates t and x = x 1 , . . . , x n and X = {(x, s) | (s 1 x 1 , . . . , s n x n ) ∈ W, s = 0} ⊂ L n × L, Y = (t, s) t = st ∈ W ∪ L × {0} ⊂ L × L. This yields the maps X → W ← Y . The map ρ in (2.4) becomes ρ(t, s) = (x 1 (t )/s 1 , . . . , x n (t )/s n ) for s = 0 and the fiber π -1 (0) is the image of the map ρ(t, 0) = ((ξ 1 (t), . . . 
, ξ n (t)), 0), ξ k (t) = lim s→0 x k (st)/s k = σ(x k )(t). Taking germs in (2.4) this yields the following. Proposition 2.3. There is a flat morphism with section S = (X, ι(0)) π / / (L, 0) ι k k with fibers π -1 (x) ∼ = (W, w) = C, ι(x) = w, 0 = x ∈ L, π -1 (0) ∼ = Spec an (K[Γ ]) = C 0 , ι(0) ↔ K[Γ + ]. The structure morphism factorizes through a flat morphism X = Spec an W (A) f 3 3 f / / (|W |, A) / / W and f # ι(0) : A → O X,ι(0) induces an isomorphism of completions (see [Car62, Exp. 19, §2, Prop. 4]) A ι(0) ∼ = O X,ι(0) . This yields the finite extension of K-analytic domains O S = O X,ι(0) ⊂ O Y,ι(0) . We aim to describe O Y,ι(0) and K-analytic algebra generators of O S . In explicit terms O S is obtained from a presentation I → O[x] → A → 0 mapping x = x 1 , . . . , x n to ι(0) = A ∩ m[s ±1 ] + As as (2.5) O S = O{x}/O{x}I = O{x} ⊗ O[x] A, O{x} = O ⊗K{x}. The graded K-algebra A/sA is thus generated by ξ. Extend F • to the graded filtration F • [s ±1 ] on O[s ±1 ]. For i ≥ j, (A/As) i = gr F i A i •s i-j ∼ = / / gr F i A j . Thus finitely many monomials in ξ, s generate any A j /F i A j ∼ = F j /F i over K. With γ the conductor of Γ and i = γ + j, F γ ⊂ m ∩ O = m and hence F i = F γ F j ⊂ mF j . Therefore these monomials generate A j as O-module by Nakayama's lemma. It follows A = O[ξ, s] as graded K-algebra. Using O = K ξ and ξ = ξs then O S = K ξ , ξ, s = K ξ, s (see (2.5)). We now reverse the above construction to deform generators of a semigroup ring. Let Γ be a numerical semigroup with conductor γ generated by = 1 , . . . , n . Pick corresponding indeterminates x = x 1 , . . . , x n . The weighted degree deg(-) defined by deg(x) = makes K[x] a graded K-algebra and induces on K{x} a weighted order ord(-) and initial part inp(-) . The assignment x i → i defines a presentation of the semigroup ring of Γ (see (2.3)) K[x]/I ∼ = K[Γ] ⊂ K[t ] ⊂ K{t } = O. The defining ideal I is generated by homogeneous binomials f = f 1 , . . . , f m of weighted degrees deg(f ) = d. Consider elements ξ = ξ 1 , . . . , ξ n defined by (2.7) ξ j = t j + i≥ j +∆ j ξ j,i t i s i-j ∈ K[t, s] ⊂ O[t, s] = B with ∆ i ∈ N \ {0} ∪ {∞} minimal. Set δ = min {∆ }, ∆ = ∆ 1 , . . . , ∆ n . With deg(t) = 1 = -deg(s) ξ defines a map of graded K-algebras K[x, s] → K[t, s] and a map of analytically graded K-analytic domains K{x, s} → K{t, s} (see [SW73] for analytic gradings). Remark 2.6. Converse to (2.6), any homogeneous ξ ∈ K{t, s} of weighted degree can be written as ξ = ξ /s for some ξ ∈ K{t }. It follows that ξ(t, 1) = ξ (t) ∈ K{t}. Consider the curve germ C with K-analytic ring (2.8) O = O C = K ξ , ξ = ξ(t, O S = K ξ, s = K{x, s}/ F , F = f -f s. Proof. First let Γ = Γ. Then Lemma 2.5 yields the first equality in (2.10). By flatness of π in Proposition 2.3, the relations f of ξ(t, 0) = t lift to relations F ∈ K{x, s} m of ξ. That is, F (x, 0) = f and F (ξ, s) = 0. Since f and ξ have homogeneous components of weighted degrees d and , F can be written as F = f -f s where f ∈ K{x, s} m has homogeneous components of weighted degrees d + 1. This proves in particular the last claim. Since f i (t ) = 0, any term in f i (ξ, s)s = f i (ξ) involves a term of the tail of ξ j for some j. Such a term is divisible by t d i +∆ j which yields the bound for ord(f i (x, 1)). Conversely let f with homogeneous components satisfy (2.9). Suppose that there is a k ∈ Γ \ Γ. Take h ∈ K{x} of maximal weighted order k such that υ(h(ξ )) = k . In particular, k < k and inp h(t ) = 0. 
Then inp h ∈ I = f and inp h = m i=1 q i f i for some q ∈ K[x] m . Set h = h - m i=1 q i F i (x, 1) = h -inp h + m i=1 q i f i (x, 1). Then h (ξ ) = h(ξ ) by (2.9) and hence υ(h (ξ )) = k . With (2.9) and homogeneity of f it follows that ord(h ) > k contradicting the maximality of k. Remark 2.8. The proof of Proposition 2.7 shows in fact that the condition Γ = Γ is equivalent to the flatness of a homogeneous deformation of the parametrization as in (2.7). These Γ-constant deformations are a particular case of δ-constant deformations of germs of complex analytic curves (see [Tei77, §3, Cor. 1]). The following numerical condition yields the hypothesis of Proposition 2.7. Lemma 2.9. If min {d} + δ ≥ γ then Γ = Γ. Proof. Any k ∈ Γ is of the form k = υ(p(ξ )) for some p ∈ K{x} with p 0 = inp(p) ∈ K[x]. If p 0 (t ) = 0, then k ∈ Γ. Otherwise, p 0 ∈ f and hence k ≥ min {d} + min { }. The second claim follows. Set-theoretic complete intersections We return to the special case Γ = , m, n of §1. Recall Bresinsky's method to show that Spec(K[Γ]) is a set-theoretic complete intersection (see [Bre79a]). Starting from the defining equations (1.6) in case (H1) he computes f c 1 = (x a -y b 1 z c 2 ) c = x a g 1 ± y b 1 c z c 2 c = x a g 1 ± y b 1 c z (c 2 -1)c (x a 1 y b 2 -f 3 ) = x a 1 g 2 ∓ y b 1 c z (c 2 -1)c f 3 ≡ x a 1 g 2 mod f 3 where g 1 ∈ x, z and g 2 = x a-a 1 g 1 ± y b 1 c+b 2 z (c 2 -1)c . He shows that, if c 2 ≥ 2, then further reducing g 2 by f 3 yields g 2 = x a-a 1 g 1 ± y b 1 c+b 2 z (c 2 -2)c (x a 1 y b 2 -f 3 ) ≡ x a-a 1 g 1 ± x a 1 y b 1 c+2b 2 z (c 2 -2)c mod f 3 ≡ x a 1 g1 + y b 1 c+2b 2 z (c 2 -2)c mod f 3 ≡ x a 1 g 3 mod f 3 for some g1 ∈ K[x, y, z]. Iterating c 2 many times yields a relation (3.1) f c 1 = qf 3 + x k g, k = a 1 c 2 , where g ≡ y mod x, z with from (1.12). One computes that x a 1 f 2 = y b 1 f 3 -z c 1 f 1 , z c 2 f 2 = x a 2 f 3 -y b 2 f 1 . Bresinsky concludes that (3.2) Z(x, z) ⊂ Z(g, f 3 ) ⊂ Z(f 1 , f 3 ) = Z(f 1 , f 2 , f 3 ) ∪ Z(x, z) making Spec(K[Γ]) = Z(g, f 3 ) a set-theoretic complete intersection. As a particular case of (2.7) consider three elements ξ = t + i≥ +∆ ξ i s i-t i , (3.3) η = t m + i≥m+∆m η i s i-m t i , ζ = t n + i≥n+∆n ζ i s i-n t i ∈ K[t, s]. (3.5) F c 1 = qF 3 + x k G, G(x, y, z, 0) = g, then C = S ∩ Z(s -1) = Z(G, F 3 , s -1) is a set-theoretic complete intersection. Proof. Consider a matrix of indeterminates M = Z 1 X 1 Y 1 Y 2 Z 2 X 2 and the system of equations defined by its maximal minors F 1 = X 1 X 2 -Y 1 Z 2 , F 2 = Y 1 Y 2 -X 2 Z 1 , F 3 = X 1 Y 2 -Z 1 Z 2 . By Schap's theorem (see [START_REF] Schaps | Deformations of Cohen-Macaulay schemes of codimension 2 and non-singular deformations of space curves[END_REF]) there is a solution with coefficients in K{x, y, z} [[s]] that satisfies M (x, y, z, 0) = M 0 . Grauert's approximation theorem (see [Gra72]) coefficients can be taken in K{x, y, z, s}. Using the fact that M is a matrix of relations, we imitate in Bresinsky's argument in (3.2), Z(G, F 3 ) ⊂ Z(F 1 , F 3 ) = Z(F 1 , F 2 , F 3 ) ∪ Z(X 1 , Z 2 ). The K-analytic germs Z(G, F 3 ) and Z(G, X 1 , Z 2 ) are deformations of the complete intersections Z(g, f 3 ) and Z(g, x a 1 , z c 2 ), and are thus of pure dimensions 2 and 1 respectively. It follows that Z(G, F 3 ) does not contain any component of Z(X 1 , Z 2 ) and must hence equal Z(F 1 , F 2 , F 3 ) = S. The claim follows. Proposition 3.2. Set δ = min(∆ , ∆m, ∆n) and k = a 1 c 2 . 
Then the curve germ C defined by (3.3) is a set-theoretic complete intersection if min(d 1 , d 2 , d 3 ) + δ ≥ γ, min(d 1 , d 3 ) + δ ≥ γ + k , or, equivalently, min(d 1 , d 2 + k , d 3 ) + δ ≥ γ + k . Proof. By Lemma 2.9 the first inequality yields the assumption Γ = Γ on (3.3). The conductor of ξ k O equals γ + k and contains (F if i )(ξ , η , ζ ), i = 1, 3, by the second inequality. This makes F i -f i , i = 1, 3, divisible by x k . Substituting into (3.1) yields (3.5) and by Lemma 3.1 the claim. Remark 3.3. We can permute the roles of the f i in Bresinsky's method. If the role of (f 1 , f 3 ) is played by (f 1 , f 2 ), we obtain a formula similar to (3.1), f b 1 = qf 2 + x k g with k = a 2 b 1 . Instead of x k , there is a power of y if we use instead (f 2 , f 1 ) or (f 2 , f 3 ) and a power of z if we use (f 3 , f 1 ) or (f 3 , f 1 ). The calculations are the same. In the examples we favor powers of x in order to minimize the conductor γ + k . We assume that a, b ≥ 2 and b + 2 < 2a + 1 so that < m < n. The maximal minors (1.6) of M 0 are then Series of examples f 1 = x a+1 -yz, f 2 = y b+1 -x a z, f 3 = z 2 -xy b with respective weighted degrees d 1 = (a + 1)(b + 2), d 2 = (2a + 1)(b + 1), d 3 = 2ab + 2b + 2 where d 1 < d 3 < d 2 . In Bresinsky's method (3.1) with k = 1 reads f 2 1 -y 2 f 3 = xg, g = x 2a+1 -2x a yz + y b+2 . We reduce the inequality in Proposition 3.2 to a condition on d 1 . Lemma 4.1. The conductor of ξO is bounded by γ + ≤ d 2 - m < d 3 . In particular, d 2 ≥ γ + 2 and d 3 > γ + . Proof. The subsemigroup Γ 1 = , m ⊂ Γ has conductor γ 1 = ( -1)(m -1) = 2a(b + 1) = n + (a -1) + 1 ≥ γ. To obtain a sharper upper bound for γ we think of Γ as obtained from Γ 1 by filling gaps of Γ 1 . Since 2n ≥ γ 1 , Γ \ Γ 1 = (n + Γ 1 ) \ Γ 1 . The smallest elements of Γ 1 are i where i = 0, . . . , m . By symmetry of Γ 1 (see [Kun70]) the largest elements of N \ Γ 1 are γ 1 -1 -i = n + (a -1 -i) , i = 0, . . . , m , and contained in n + Γ 1 since the minimal coefficient a -1 -i is nonnegative by a -1 - m ≥ a -1 - m = (a -1)b -3 b + 2 > -1. They are thus the largest elements of Γ \ Γ 1 . Their minimum attained at i = m then bounds γ ≤ γ 1 -1 - m . Substituting γ 1 + -1 = d 2 yields the first particular inequality. The second one follows from d 2 -d 3 = 2a -b -1 = m -< m . Proof of Corollary 0.1. (a) This follows from Lemma 2.9. (b) By Lemma 4.1, the inequality in Proposition 3.2 simplifies to d 1 + δ ≥ γ + . The claim follows. (c) Suppose that d 1 + q -n ≥ γ + for some q > n and a, b ≥ 3. Set p = γ -1 -. Then n > m + and Γ ∩ (m + , m + 2 ) can include at most n and some multiple of . Since ≥ 4 it follows that (m + , m + 2 ) contains a gap of Γ and hence γ -1 > + m and p > m. Moreover (a -1)b ≥ 4 is equivalent to Example 4.2. We discuss a list of special cases of Corollary 0.1. d 1 + p -m ≥ γ + . (a) a = b = 2. The monomial curve C 0 defined by (x, y, z) = (t 4 , t 5 , t 7 ) has conductor γ = 7. Its only admissible deformation is (x, y, z) = (t 4 , t 5 + st 6 , t 7 ). However this deformation is trivial and our method does not yield a new example. To see this, we adapt a method of Zariski (see [Zar06, Ch. III, (2.5), (2.6)]). Consider the change of coordinates x = x + 4s 5 y = t 4 + 4s 5 t 5 + 4s 2 5 t 6 and the change of parameters of the form τ = t+O(t 2 ) such that x = τ 4 . Then τ = t + s 5 t 2 + O(t 3 ) and hence y = τ 5 + O(t 7 ) and z = τ 7 + O(t 8 ). Since O(t 7 ) lies in the conductor, it follows that C ∼ = C 0 . 
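As a numerical cross-check on this example (our own computation), the hypotheses of Corollary 0.1(b) are in fact satisfied here, so the obstruction is not the inequality but the triviality of the deformation:

\[ a=b=2:\quad \ell=4,\ m=5,\ n=7, \qquad \Gamma=\langle 4,5,7\rangle, \qquad \mathbb{N}\setminus\Gamma=\{1,2,3,6\}, \qquad \gamma=7, \]
\[ d_1=(a+1)(b+2)=12, \qquad d_1+\delta \ge \gamma+\ell = 11 \quad \text{for every } \delta \ge 1. \]

Thus C is a set-theoretic complete intersection in any case, but since C ≅ C_0 it is not a new, non-monomial example.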
In all other cases, Corollary 0.1 yields an infinite list of new examples. (b) a = 3, b = 2. Consider the monomial curve C 0 defined by (x, y, z) = (t 4 , t 7 , t 9 ). By Zariski's method from (a) we reduce to considering the deformation (x, y, z) = (t 4 , t 7 , t 9 + st 10 ). (c) a = b = 3. The monomial curve C 0 defined by (x, y, z) = (t 5 , t 7 , t 13 ) has conductor γ = 17. We want to satisfy p ≥ γ+ -d 1 +m = 9. The most general deformation of y thus reads y = t 7 + s 1 t 9 + s 2 t 11 + s 3 t 16 . The parameter s 1 can be again eliminated by Zariski's method as in (a). This leaves us with the deformation (x, y, z) = (t 5 , t 7 + s 2 t 11 + s 3 t 16 , t 13 + s 4 t 16 , t 13 ) which is non-trivial due to part (c) of Corollary 0.1 with p = 11. (d) a = 8, b = 3. The monomial curve C 0 defined by (x, y, z) = (t 5 , t 17 , t 28 ) has conductor γ = 47. The condition in part (b) of Corollary 0.1 requires p ≥ γ -d 1 + m = 19. In fact, the deformation (x, y, z) = (t 5 , t 17 + st 18 , t 28 ) is not flat since C has value semigroup Γ = Γ ∪ {46}. However C is isomorphic to the general fiber of the flat deformation in 4-space (x, y, z, w) = (t 5 , t 17 + st 18 , t 28 , t 46 ). (a) If d 1 + δ ≥ γ, then Γ is the value semigroup of C. (b) If d 1 + δ ≥ γ + , then C is a set-theoretic complete intersection. (c) If a, b ≥ 3 and d 1 + q -n ≥ γ + , then C defined by p := γ -1 -> mis a non-monomial set-theoretic complete intersection. 14) and e = gcd( , m , n ). Note that , m , n are the submaximal minors of the matrix in (1.5). (H2') Given a, b, c ∈ N \ {0} and a 1 , b 2 ∈ N, define , m , n , d by = bd , (1.15) m = ad , (1.16) n d = a 1 b + ab 2 c , gcd(n , d ) = 1. (1.17) Remark 1.1. In the overlap case (1.11) the formulas (1.15)-(1.16) yield ( , m , n ) = (bc, ac, ab). Lemma 1.2. In case (H1), let ñ ∈ N be minimal with x ñ -z ˜ ∈ I for some ˜ ∈ N. Then gcd( ˜ , ñ) = 1 and (ñ, ˜ ) • gcd(b 1 , b 2 ) = (n , ). 2 and the claim follows. Lemma 1.3. (a) In case (H1), equations (1.12)-(1.14) recover , m, n. (b) In case (H2), equations (1.15)-(1.17) recover , m, n, d. 1 and hence q = 1. The claim follows. (b) By the minimal relation (1.8) gcd(a, b) = 1 and hence ( , m) = d • (b, a). Substitution into equation (1.9) and comparison with (1.17) gives n d = a 1 b+ab 2 c = n d with gcd(n, d) = gcd( , m, n) = 1 by hypothesis. We deduce that (n, d) = (n , d ) and then ( , m) = ( , m ). Proposition 1.4. (a) In case (H1'), a 1 , a 2 , b 1 , b 2 , c 1 , c 2 arise through (H1) from some numerical semigroup Γ = , m, n if and only if e = 1. In this case, ( , m, n) = ( , m , n ). (b) In case (H2'), a, b, c, a 1 , b 2 arise through (H2) from some from some numerical semigroup Γ = , m, n if and only if -b 2 c of type (1.10). By choice of a , b , c it follows that a > a , b > b , c ≥ c . By Lemma 1.3.(b) d is the denominator of a 1 b +a b 2 c and = b d . (b) By Lemma 1.3.(b) the conditions are necessary. Conversely assume that the conditions hold true. By definition (1.10) is a matrix of relations of ( , m , n ). By hypothesis (1.20) is a matrix of minimal relations of ( , m , n ). By (1.18) gcd( , m ) = d and hence by Lemma 1.3.(b) b be a decreasing filtration by ideals such that F i = O for all i ≤ 0 and F 1 ⊂ m. Consider the Rees ring Let K be an algebraically closed complete non-discretely valued field. Let C be an irreducible K-analytic curve germ. Its ring O = O C is a one-dimensional K-analytic domain. Denote by Γ its value semigroup. Pick a representative W such that C = (W, w). 
We allow to shrink W suitably without explicit mention. Let O W be the normalization of O W . Then O W,w = (O, m) ∼ = (K{t }, t ) υ / / N ∪ {∞} is a discrete valuation ring. Denote by m W and m W the ideal sheaves corresponding to m and m. There are decreasing filtrations by ideal (sheaves) Consider the curve germ C in (2.8) with K-analytic ring(3.4) O = O C = K{ξ , η , ζ }, (ξ , η , ζ ) = (ξ, η, ζ)(t,1), and value semigroup Γ ⊃ Γ. We aim to describe situations where C is a set-theoretic complete intersection under the hypothesis that Γ = Γ. By Proposition 2.7, (ξ, η, ζ) then generate the flat deformation of C 0 = Spec an (K[Γ]) in Proposition 2.3. Let F 1 , F 2 , F 3 be the defining equations from Proposition 2.7. Lemma 3.1. If g in (3.1) deforms to G ∈ K{x, y, z, s} such that Redefining a, b suitably, we specialize to the case where the matrix in (1.7) is of the formM 0 = z xy y b z x a . By Proposition 1.4.(a) these define Spec(K[ , m, n ]) if and only if = b+2, m = 2a+1, n = ab+b+1(= (a+1) -m), gcd( , m) = 1. By (b), C is a set-theoretic complete intersection.It remains to show that C ∼ = C 0 . This follows from the fact thatΩ 1 C 0 → K{t}dt has valuations Γ \ {0} whereas the 1-form ω = mydx -xdy = (m -p)t p+ -1 dt ∈ Ω 1 C → K{t}dt has valuation p + = γ -1 ∈ Γ. While part (c) of Corollary 0.1 does not apply, C ∼ = C 0 remains valid. To see assume that C 0 ∼ = C induced by an automorphism ϕ of C{t}. Then ϕ(x) ∈ O C shows that ϕ has no quadratic term. This however contradicts ϕ(z) ∈ O C . The deformation (2.7) satisfies Γ = Γ if and only if there is a f ∈ K{x, s} m with homogeneous components such that and ord(f i (x, 1)) ≥ d i + min {∆ }. The flat deformation in Proposi- tion 2.3 is then defined by (2.10) 1), and value semigroup Γ ⊃ Γ. We now describe when (2.7) generate the flat deformation in Propo- sition 2.3. Proposition 2.7. (2.9) f (ξ) = f (ξ, s)s Any O W -module M gives rise to an O X -module With M = M w , its stalk at ι(0) becomes Lemma 2.4. Spec an W (B) = Spec an W (B) and hence O Y,ι(0) = K{s, t}. Proof. By finiteness of ν (see [START_REF] Henri | Familles d'espaces complexes et fondements de la géométrie analytique[END_REF]Exp. 19, §3, Prop. 9]), By the universal property of Spec an it follows that (see [Con06, Thm. 2.2.5.(2)]) Proof. By choice of F • there is a cartesian square By hypothesis and (2.3) the symbols σ(ξ ) generate the graded Kalgebra gr F O. Then σ(ξ ) = σ(ξ ) generate gr F m/ gr F m 2 = gr F (m/m 2 ) and hence ξ generate m/m 2 over K. Then m = ξ O by Nakayama's lemma and hence O = K ξ by the analytic inverse function theorem. Under the graded isomorphism (2.1) with ξ as in (2.6) (A/As)
30,741
[ "909276", "933187" ]
[ "396", "175400" ]
01759690
en
[ "info" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01759690/file/KES2017.pdf
Jean-Baptiste Louvet email: [email protected] Guillaume Dubuisson Duplessis Nathalie Chaignaud Laurent Vercouter Jean-Philippe Kotowicz Modeling a collaborative task with social commitments Keywords: human-machine interaction, collaborative document retrieval, social commitments Our goal is to design software agents able to collaborate with a user on a document retrieval task. To this end, we studied a corpus of human-human collaborative document retrieval task involving a user and an expert. Starting with a scenario built from the analysis of this corpus, we adapt it for a human-machine collaborative task. We propose a model based on social commitments to link the task itself (collaborative document retrieval) and the interaction with the user that our assistant agent has to manage. Then, we specify some steps of the scenario with our model. The notion of triggers in our model implements the deliberative process of the assistant agent. . Introduction Document retrieval (DR), which takes place in a closed database indexing pre-selected documents from reliable information resources, is a complex task for non expert users. To find relevant documents, interfaces allowing formulating more specific queries are hardly used because an expertise about the domain terminology is needed. It may require an external assistance to carry out this task according to the users information need. Thus, we propose to design software agents able to collaborate with a user on a document retrieval task. To this end, we adopt a cognitive approach by studying a corpus of human-human (h-h) collaborative document retrieval task in the quality-controlled health portal CISMeF (www.cismef.org) [START_REF] Darmoni | CISMeF : a structured health resource guide[END_REF] , which involves a user and an expert. In previous work [START_REF] Dubuisson Duplessis | Empirical Specification of Dialogue Games for an Interactive Agent[END_REF][START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF] , extraction of dialogue patterns from the corpus has been done with their formalization into dialogue games [START_REF] Maudet | Modéliser l'aspect conventionnel des interactions langagières: la contribution des jeux de dialogue[END_REF] , which can be fruitfully exploited during the dialogue management process [START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF] . This formalization uses the notion of social commitments introduced by Singh [START_REF] Singh | Social and psychological commitments in multiagent systems[END_REF] . In this article, we are interested in linking the task itself (collaborative DR) and the interaction with the user that our assistant agent has to manage. We show that the formalism 3 used to model the dialogue games through social commitments can be enhanced to describe the task. Our model makes the link between a high level structure (the task) and low level interaction (dialogue games). Starting with a scenario built from the analysis of the corpus of h-h interaction, we adapt it for a human-machine (h-m) interaction. Then, we specify each step of this scenario in terms of social commitments. This article consists of 5 parts: Section 2 gives a short state of the art on dialogue models. Section 3 describes the model we used to specify a collaborative task. Section 4 presents the scenario modeling the h-h collaborative document retrieval process and a discussion on its transposition in a h-m context. 
In Section 5, some steps of this scenario are detailed in terms of commitments. Finally, Section 6 gives some conclusions and future work. Related work on reactive/deliberative dialogue model To model dialogue, plan-based approaches and conventional approaches are often viewed as opposite, although some researchers argue that they are complementary [START_REF] Hulstijn | Dialogue games are recipes for joint action[END_REF][START_REF] Yuan | Informal logic dialogue games in human-computer dialogue[END_REF][START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF] : Communication processes are joint actions between participants that require coordination. Nevertheless, coordination must stand on conventions reflected by interaction patterns. Thus, dialogue can be considered as a shared and dynamic activity that requires both high-level deliberative reasoning processes and low-level reactive responses. Dubuisson Duplessis 3 proposes to use a hybrid reactive/deliberative architecture where a theory of joint actions can be a "semantics" to the interaction patterns described as dialogue games. These dialogue games are modeled through the notions of social commitment and commitment store described below. Social Commitments Social commitments are commitments that bind a speaker to a community [START_REF] Singh | Social and psychological commitments in multiagent systems[END_REF] . They are public (unlike mental states such as belief, desire, intention), and are stored in a commitment store. Our formalization classically distinguishes a propositional commitment from an action commitment. Propositional commitment. A propositional commitment involves that an emitter (x) commits itself at the present on a proposition towards a receiver (y). Such a commitment is written C(x, y, p, s), meaning "x is committed towards y on the proposition p" is in state s. We only consider propositions describing present, which leads us to consider only two states for a propositional commitment: a propositional commitment is initially inactive (Ina). After its creation, it enters the state created (Crt). A created commitment can be canceled by its emitter. In this case it goes back in an inactive state. Action commitment. An action commitment involves that an emitter (x) commits itself at the present on the happening of an action in the future, towards a receiver (y). Such a commitment is written C(x, y, α, s), meaning "x is committed towards y on the happening of the action α" is in state s. An action commitment is initially inactive (Ina). In this state, it can be created. The creation attempt can fail (Fal) or succeed (Crt). An action commitment in Crt state is active. An active commitment can be violated, leading it to the Vio state. It corresponds to a situation in which the satisfaction conditions of the content of the commitment can not be fulfilled anymore. An active commitment can be fulfilled, leading it to the Ful state. An action commitment is satisfied if its content has been completed. In order to simplify the writing of the commitments, as in our case the interaction is between two interlocutors, we omit the receiver of the commitments. Consequently, a propositional commitment will be written C(x, p, s) and an action commitment will be written C(x, α, s). Conversational gameboard The conversational gameboard describes the state of the dialogue between the interlocutors at a given time. 
The conversational gameboard describes the public part of the dialogic context supposed strictly shared. T i stands for the conversational gameboard at a time i (the current time). In the framework of this article, we use a simple theory of instants where "<" is the relationship of precedence. The occurrence of an external event increments the time and makes the table evolve. An external event can be dialogic (e.g. an event of enunciation of a dialog act) or extra-dialogic (e.g. an event like light_on showing the occurrence of the action of turning the light on). The conversational gameboard includes a commitment store, which is a partially ordered set of commitments. It is possible to query the gameboard on the belonging (or non-belonging) of a commitment. This is formalized in equation 1a for belonging and 1b for non-belonging (c being a commitment). T i c, true if c ∈ T i , false otherwise (1a) T i c, equivalent to ¬(T i c) (1b) Dialogue games A dialogue game is a conventional bounded joint activity between an initiator and a partner. Rules of the dialogue game specify the expected moves for each participant, which are supposed to play their roles by making moves according to the current stage of the game. This activity is temporarily activated during the dialogue for a specific goal. A dialogue game is a couple type, subject , where type belongs to the set of existing dialogue games and subject is the goal of the game in the language of the subject of the game. We usually write a game under the form type(subject). A game is defined with social commitments. It's a quintuplet characterized for the initiator and the partner by [START_REF] Dubuisson Duplessis | Empirical Specification of Dialogue Games for an Interactive Agent[END_REF][START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF] entry conditions describing the conditions the conversational gameboard must fulfill to enter the game, termination conditions, separated into two categories: Success conditions and failure conditions, rules expressed in terms of dialogic commitments, specifying the expected sequencing of expected or forbidden acts, and effects specifying the contextualized effects of dialogical actions in terms of generation of extra-dialogic commitments (i.e. related to the task). A sub-dialogue game is a child dialogue game played in an opened parent game. The emitter of the sub-game can be different from the one of the parent game. Conditions for playing a sub-dialogue game can be hard to specify [START_REF] Maudet | Modéliser l'aspect conventionnel des interactions langagières: la contribution des jeux de dialogue[END_REF] . Dialogical action commitment. A dialogical action commitment is an action commitment contextualized in a dialogue game. It means that in a dialogue game, a participant is committed to produce dialogical actions conventionally expected relatively to an opened dialogue game. For example, in the context of the offer dialogue game, if x plays offer(x, α), the dialogical action commitment C(y, acceptOffer(y, α)|declineOffer(y, α), Crt) will be created, showing that the receiver of the dialogical action can accept or decline the offer. Model to specify a collaborative task This section describes the model we use to specify the task using commitments, conversational gameboard and dialogue games. First of all, we consider that a task can be split into subtasks that we call steps. 
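Before turning to the step tables, the following minimal sketch (ours, not the authors'; all class, function and attribute names are invented for illustration) shows one possible in-memory representation of commitments, of the commitment store of the conversational gameboard, and of the effect of the offer dialogue game recalled above. Triggers can then be implemented as plain boolean predicates over such a gameboard.

from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    INA = "Ina"   # inactive
    CRT = "Crt"   # created / active
    FUL = "Ful"   # fulfilled (action commitments)
    FAL = "Fal"   # creation failed
    VIO = "Vio"   # violated

@dataclass(frozen=True)
class Commitment:
    emitter: str          # the receiver is omitted, as in the paper (two interlocutors only)
    content: str          # a proposition or an action, e.g. "launchQuery(q)"
    state: State

@dataclass
class Gameboard:
    """Commitment store T_i, supporting the membership queries (1a)/(1b)."""
    store: set = field(default_factory=set)

    def holds(self, c: Commitment) -> bool:   # T_i |= c
        return c in self.store

    def add(self, c: Commitment) -> None:
        self.store.add(c)

def play_offer(board: Gameboard, emitter: str, receiver: str, action: str) -> None:
    """offer(x, alpha): the receiver becomes committed to accept or decline the offer."""
    board.add(Commitment(receiver, f"acceptOffer({action})|declineOffer({action})", State.CRT))

# usage sketch
board = Gameboard()
play_offer(board, "x", "y", "launchQuery(q)")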
Each step of the task is described by a table (named step table) divided in three parts: The name of the step, the access conditions to this step and a list of expected behaviors of each participant to the dialogue. Expected behaviors are alternatives and can be played in any order. An expected behavior is a description of: • A conventionally expected dialogue game, with its emitter and content (action or proposition); • The possible outputs of this dialogue game; • Trigger (optional) that is conditions that must be fulfilled to play the expected game. To define the trigger ϕ that emits the predicate E, we use the notation 2a, ϕ being a formula. To express that the conversational gameboard T i fulfills the conditions for a trigger to emit E, we use the notation 2b. E : T i → ϕ (2a) T i E (2b) Prior to play an expected game, the emitter must respect the entry conditions of this game. To shorten the writing of a step table, we do not repeat these conditions. 1. Instances of this this model for our specific task can be found in Tables 3 and4. It describes the access conditions and the expected dialogue games of the step and the modifications they bring to the conversational gameboard T i . In our example, access conditions are that T i contains C(z, {α, p}, s) and triggers the predicate E 0 we give an example of expected behavior with DG(z, {β 1 , p 1 }) as dialogue game and T i+1 C(z, {β 1 , p 1 }, s) as a modification to the conversational gameboard brought by DG. For games played by a software agent, the emission of a predicate (e.g. T i E 1 for the first dialogue game of our example) has to be triggered in order to play the expected dialogue game. For games played by a human, there's no trigger row in the table as the decision to play a dialogue game only depends on his own cognitive process. Some expected dialogue games can be played several times, noted with * . This symbol is propagated in the output to show the commitments that can be generated several times. This is shown with the first dialogue game of our example. Sub-dialogue games (like SDG(z', {γ, p 2 }) in the Table 1) that can be played are indicated under their parent game (played by any of the participants), headed by a " ". The step table is a complete description of what can conventionally happen. A generic example of step table is given in Table This model gives a clear definition of what is conventionally expected from each participant in one step of the task. It is also possible to see triggers as defining the intentions of the agent. As a matter of fact, most of the agent's decisions are done thanks to the triggers, and the agent's behavior can be adapted by modifying these triggers. Name of the step Access ∧ T i C(z, {α, p}, s) T i E 0 Expected game DG(z, {β 1 , p 1 }) * Trigger T i E 1 (only if z is a software agent) Output T i+1 C(z, {β 1 , p 1 }, s) * Expected game DG(-, {β 2 , p 2 }) SDG(z, {γ, p 2 }) Trigger T i E 2 (only if z is a software agent) Output T i+1 C(-, {β 2 , p 2 }, Crt) T i+1 C(z, {γ, p 2 }, s) Expected game DG(z, δ) Trigger T i E 3 (only if z is a software agent) Output ∨ DA1(z', δ) ⇒ T j C(z', δ, s 1 ) DA2(z', δ) ⇒ T j C(z', δ, s 2 ) Expected behaviors We use a specific syntax for dialogue games implying dialogical action commitments. To reduce the size of the tables, we directly map the expected dialogical action commitments to the expected dialogical action of the dialogue game. 
For example, the third part of Table 1 shows the writing for the dialogue game DG(z, δ): Playing the dialogue game DG(z, δ) creates the dialogical action commitment C(z', DA1(z', δ)|DA2(z', δ), Crt) (DA1 and DA2 being dialogical actions), playing the dialogical action DA1(z', δ) creates the commitment C(z', δ, s 1 ) and playing the dialogical action DA2(z', δ) creates the commitment C(z', δ, s 2 ). Analysis of the h-h collaborative document retrieval process Information Retrieval This section introduces some models of information retrieval (IR) processed by an isolated person and in a collaborative framework. These models can be applied to DR. IR is generally considered as a problem solving process 8 implying a searcher having an identified information need. The problem is then to fulfill this lack of information. Once the information need is specified, the searcher chooses a plan he will execute during the search itself. He evaluates the results found to possibly repeat the whole process. IR is considered as an iterative process that can be split into a series of steps [START_REF] Broder | A taxonomy of Web search[END_REF][START_REF] Marchionini | Find What You Need, Understand What You Find[END_REF][START_REF] Sutcliffe | Towards a cognitive theory of information retrieval[END_REF] : (i) information need identification, (ii) query specification (information need formulation and expression in the search engine, etc.), (iii) query launch, (iv) results evaluation, (v) if needed, query reformulation and repetition of the cycle until obtaining satisfying results or abandoning the search. The standard model is limited by two aspects. On the one hand, the information need of this process is seen as static. On the other hand, the searcher refines repeatedly his query until finding a set of documents fitting his initial information need. Some studies showed that, on the contrary, the information need is not static and that the goal is not to determine a unique query returning a set of documents matching with the information need [START_REF] Bates | The Design of Browsing and Berrypicking Techniques for the Online Search Interface[END_REF][START_REF] O'day | Orienteering in an information landscape: how information seekers get from here to there[END_REF] . Bates proposes the model of "berrypicking" [START_REF] Bates | The Design of Browsing and Berrypicking Techniques for the Online Search Interface[END_REF] which lays the emphasis on two points. The first one is that the information need of the searcher evolves thanks to the resources found during the search. Encountered information can lead the search in a new and unforeseen direction. The second one is that the information need not satisfied by a unique set of documents obtained at the end of the search, but by a selection of resources collected all along the process. To sum up, the IR process is opportunistic and its progression influences the final result. Study of the h-h collaborative document retrieval process To understand the collaborative aspect of the DR process of a user assisted by a human expert, we carried out a study on a h-h interaction. This study is based on the analysis of the corpus collected during the Cogni-CISMeF project [START_REF] Loisel | A conversational agent for information retrieval based on a study of human dialogues[END_REF][START_REF] Loisel | An Issue-Based Approach to Information Search Modelling: Analysis of a Human Dialog Corpus[END_REF] . 
The Cogni-CISMeF corpus The corpus consists in assistance dialogues about the DR task between an expert and a user in a co-presence situation. The user expresses his information need. The expert has access to the CISMeF portal and has to lead the search cooperating with the user. The CISMeF portal has a graphical user interface and a query language enabling to decompose a query into MeSH ("Medical Subject Headings") lexicon elements. The CISMeF terminology contains keywords, qualifiers (symptoms, treatments. . . ), meta-terms (medical specialties) and resources types (databases, periodicals, images. . . ). The system also allows for extended queries, although many users are not comfortable with them. The experiment was carried out with 21 participants (e.g., researchers, students, secretaries of the laboratory) submitting a query to one of the two CISMeF experts (researchers of the project who learned to use the CISMeF terminology). The corpus includes the transcript of the 21 dialogues (12 for the first expert and 9 for the second) and contains around 37 000 words. Yes normally it's a diagnostic / ok / let's try like this A10 We will try like this otherwise we will remove extra things to have some / so I launch the search again with the "cancerology" thematic access, the CISMeF keyword "colon" and the qualifier "diagnostic" without specifying the type of searched resources Table 2. Translated extract of a dialogue from the corpus (VD06). A is the expert and B the searcher. Figure 1. Scenario presenting the phases (squares) and the steps (ellipses) in a collaborative document retrieval task. The collaborative document retrieval scenario The analysis of the corpus enabled us to identify and characterize the different phases of the dialogues of the Cogni-CISMeF corpus playing a role in the task progress. Five phases were distinguished: • Verbalization: It is the establishment of the search subject between both participants. It always starts by a request formulation from the user and can be followed by spontaneous precision. The expert can then start the query construction if he considers that the verbalization contains enough keywords, ask precision if not or try to reformulate the user's verbalization; • Query construction: It is the alignment of the terms of the user's verbalization with the CISMeF terminology in order to fill in the query form; • Query launch: It is the execution of the current query by the expert. This phase is often implicit; • Results evaluation: The expert evaluates the results of the query. If they are not satisfying, he decides to directly repair the query. Otherwise he presents them to the user. If the latter finds them satisfying, the goal is reached and the search is over; If he finds them partially satisfying (not adapted to his profile, or not related totally to the information need) or not satisfying, the query must be repaired. If the results are rejected by the user, it is also possible to abandon the search; • Query repair: The expert and the user try to use tactics to modify the query while respecting the information need. Three tactics were observed: Precision (to refine the query), reformulation (using synonyms for example) and generalization (to simplify the query). However, these tactics are not mutually exclusive: It is possible to combine precision or generalization with reformulation. In addition to these phases, an opening and a closing phases were observed. 
The opening phase is optional and consists simply in greetings (information demand about the user's name, age. . . ). At last, the closing phase may give ideas for a new search. The analysis of this corpus showed that the DR task fulfilled by the participants is iterative, opportunistic, strategic and interactive [START_REF] Bates | Information search tactics[END_REF][START_REF] Bates | Where should the person stop and the information search interface start?[END_REF] . The iterative aspect of this process is illustrated by the systematic repetition of the pattern launch/evaluation/repair. On top of that, we remarked that it is clearly lead by the expert. The dialogue in Table 2 is an example of a query repair showing the iterative, opportunistic, strategic and interactive aspects. The expert suggests to widen (generalization) the query (utterance A1). The partners elaborate jointly a plan to modify the query. In this case, it is mainly the user who suggests the moves to carry out (utterances B4 and B6) and the expert agrees (utterances A5 and A7). Then, the expert suggests to add (precision) the qualifier "diagnostic" (utterance A8). The user accepts and suggests the plan execution (utterance B9). The plan execution is accepted and done by the expert (utterance A10), who eventually launches the query. The scenario presented in Figure 1 synthesizes the phases (squares) split into several steps (ellipses) and the possible runs. The dashed ellipses correspond to actions that can be carried out implicitly by the participants of the interaction. Discussion It is possible to take inspiration from the h-h interaction to the h-m interaction. Thus, the presented scenario in the context of the h-h interaction (Figure 1) can be transposed to a h-m context. However, it has to be adapted in order to fit the constrains of the h-m interaction. The h-m interaction framework changes the collaboration situation as far as it gives to the user the ability to lead the search and to modify the query, without requiring the system's agreement. It is an important change which gives to the user new privileges and more control over the interaction. It implies a restriction of the software assistant's permissions when compared to the human expert. As a matter of fact, the system can take initiatives to modify the query by suggesting modifications that will be applied only if they are accepted by the user. However, this inversion of the query modification rights allows each participant to take part in the interaction: As far as the opportunistic aspect are concerned, the user and the system can participate to the interaction at any moment. Despite the lack of cognitive abilities that has a human expert, the software agent has some edges that can be beneficial for a collaborative DR task. The system can access online dictionaries of synonyms, hyponyms, hypernyms and "see also" links, allowing it to find terms related to the user's verbalization. For the reformulation, the system can make use of the lexical resources on the query terms [START_REF] Audeh | Semantic query expansion for fuzzy proximity information retrieval model[END_REF] . In the context of CISMeF, Soualmia [START_REF] Soualmia | Strategies for Health Information Retrieval[END_REF] offers tools to correct, precise and enrich queries. 
On top of that, the assistant agent can store the previous DR sessions and take advantage of them (by linking information needs, queries and documents) to find terms related to the current search [START_REF] Guedria | Customized Document Research by a Stigmergic Approach Using Agents and Artifacts[END_REF] . It also has the ability to launch queries "in background" (i.e. without notifying the user), beforehand suggesting any query modification or launch. It makes possible, before suggesting a modification to the user, to check if it brings interesting results. Application to the Cogni-CISMeF project We described a h-h collaboration for DR process from which we want to draw in a h-m framework. The model described in Section 3 can be used for a h-m collaboration. As a matter of fact, this model makes possible to express the different characteristics of a h-m collaborative DR: • Iterative: It is possible to give a circular sequencing to step tables using their access conditions (to represent the launch/evaluation/repair loop, for example); • Opportunistic: Depending on the current state of the conversational gameboard, the most relevant dialogue games can be played; • Interactive: Each participant can take part to the interaction at any moment; • Strategic: It is possible to describe dialogue games combinations in order to reach given states of the conversational gameboard. An interesting aspect of our model is that the assistant agent can behave according different cognition levels. This can be done thanks to the triggers that capture the deliberative process of the agent. The agent's reasoning can be very reactive with simple rules or, on the contrary, it can entail a more high level decision process. In this section, we describe the steps of the DR scenario (see Section 4) in terms of our model (see Section 3). Our goal is to show the expressiveness of our model applied to a h-m collaborative DR task drawn from an h-h one. Only some relevant step tables are presented to illustrate the interest of the model. Verbalization precision step This step takes place in the verbalization phase. The user has done a first verbalization of his information need but he has to give more details about it, so he is committed to perform the action of verbalization precision (T i C(x, preciseVerbalization, Crt) present in the "Access" part). In this situation, it is conventionally expected from the user to add verbalization expressions e to his verbalization with an "inform" dialogue game (inform(x, verbalizationExpression(e))). This action can be repeated as often as the user needs (expressed by the * following the dialogue game). The consequences of this dialogue game on the conversational gameboard are: (i) Every time the dialogue game is played, a new commitment on a verbalization expression is created (Crt) and (ii) The commitment on the action of verbalization precision is fulfilled (Ful). The * in the output is propagated for the creation of the commitment on a verbalization expression but not on the action fulfillment commitment (it is performed only the first time the dialogue game is played). It is also expected that the user tells the agent when his precision is finished. This can be performed by playing the dialogue game inform(x, verbalizationComplete) that creates the commitment C(x, verbalizationComplete, Crt) in the conversational gameboard. The model of this step is shown in Table 3. Query precision step This step takes place in the query repair phase. 
Query q were launched (T i lastQueryLaunched(q)) and its results evaluated by the user, who turned them down (T i C(x, ¬queryResultsSatisfying(q), Crt)). We place ourselves in the context where query is too general (T i queryTooGeneral(q)) and must be precised. This implies some conventionally expected behavior from each participants: The user is expected to add keywords to the query or request a query launch. The agent is expected to offer the user to add a keyword or to specify one, the goal being to reduce the number of results. It can inform the user that the keyword it is offering to specify the query is actually a specification of a keyword of the query. It can also offer the user to launch the query. All these expected behaviors are explained in the following of this section. The addition of keywords to the query by the user corresponds to the inform(x, queryKeyWord(kw)) * dialogue game. The user requests to the agent to launch the query with a request(x, launchQuery(q)) dialogue game. In this case, our collaborative agent will always accept the requests formulated by the user (that's why we only put the acceptRequest(y, launchQuery(q)) dialogical action commitment in the output row). The agent offers to add keywords to the query with an offer(y, addKeyWord(kw)) * and to specify the query with an offer(y, specifyKeyWord(kw, skw)) * . In each case, the user can accept or decline the offer. If he accepts the offer, the conversational gameboard is updated with the commitments generated by execution of the action (keyword addition that generates C(y, queryKeyWord(kw), Crt) * or query specification that generates C(y, queryKeyWord(skw), Crt) * and C(y, queryKeyWord(kw), Ina) * , as skw replaces kw). If the user declines the offer, the action fails (the commitment on the action becomes Fal: C(y, addKeyWord(kw), Fal) * or C(y, specifyKeyWord(kw, skw), Fal) * ). In the case where the agent proposes to specify a keyword, it can play a sub-dialogue game (inform(y, isSpecification(kw, skw))) to inform the user that the new keyword is a specification of one keyword. The trigger of this sub-dialogue game is empty, because all the conditions needed to play this dialogue game (i.e. checking that skw is actually a specification of kw) are already reached in the parent dialogue game. When the query has been specified enough, the agent can offer to launch it, if the user did not already decline the offer to launch it. For illustrative purposes, we defined arbitrarily in Equations 3 the trigger predicates used in Table 4. These definitions depend on the expected behavior we want our agent to adopt. Trigger 3a makes sure that the query q is the last launched. Trigger 3b points that the query q is too general (i.e. the results are too numerous). The predicate queryResultsNb(q) gives the number of results returned by the search engine when launching query q. 
Trigger 3c Query precision Access ∧ T i C(x, ¬queryResultsSatisfying(q), Crt) T i lastQueryLaunched(q) T i queryTooGeneral(q) Expected game inform(x, queryKeyWord(kw)) * Output T j C(x, queryKeyWord(kw), Crt) * Expected game request(x, launchQuery(q)) Output acceptRequest(y, launchQuery(q)) ⇒ T j C(y, launchQuery(q), Crt) Expected game offer(y, launchQuery(q)) * Trigger T j C(y, launchQuery(q), Fal) ∧ T i queryPreciseEnough(q) Output ∨ acceptOffer(x, launchQuery(q)) ⇒ T j C(y, launchQuery(q), Crt) declineOffer(x, launchQuery(q)) ⇒ T j C(y, launchQuery(q), Fal) * means that a keyword kw, related to the verbalization of the user (relatedToVerbalization(kw)), brings precision to the current query (currentQuery(q)). Trigger 3d gives a keyword (skw) more specific than a current one (kw). The predicate isMeSHHyponym(kw, skw) gives the information that skw is an hyponym (a more specific word) of kw according to the MeSH lexicon. Trigger 3e expresses that query q is precise enough to be proposed to the user for launch. lastQueryLaunched(q) : T i → ∃n ≤ i, ∧ T n C(y, queryLaunched(q), Crt) T n-1 C(y, queryLaunched(q), Crt) ∧ m > n, ∀q' q, ∧ T m C(y, queryLaunched(q'), Crt) T m-1 C(y, queryLaunched(q'), Crt) queryTooGeneral(q) : T i → queryResultsNb(q) ≥ 75 (3b) relevantForPrecision(kw) : T i → ∃q, currentQuery(q), queryResultsNb(q) > queryResultsNb(q + kw), relatedToVerbalization(kw) (3c) specification(kw, skw) : T i → isMeSHHyponym(kw, skw) (3d) queryPreciseEnough(q) : T i → ∃q, currentQuery(q), queryResultsNb(q) < 75 (3e) Conclusion and future work Our work is based on a cognitive study of a corpus of h-h collaborative DR task for the quality-controlled health portal CISMeF. Starting with a scenario of a collaborative DR task, built from the analysis of this corpus, we adapt it in a h-m context, where an assistant agent (software) helps a user in his task. In this article, we described a model to specify a collaborative task in terms of social commitments. We shown how these social commitments link the task itself to the interaction with the user. This model has been applied to the CISMeF portal to specify each steps of the scenario in terms of social commitments. The notion of triggers in our model implements the deliberative process of the assistant agent. The agent's reasoning can be very reactive with simple rules or, on the contrary, it can entail a more high level decision process. We currently work on these triggers to express the decision process of our assistant agent. As a matter of fact, it concerns the reasons of the agent both to enter a step of the scenario and to choose the dialogue game to play. The validation of our system consists in evaluating the added value brought to CISMeF. The idea is to compare the queries made by the user with and without our assistant agent. This comparison would be made by calculating queries precision and recall. Finally, to prove the genericity of our approach (the scenario, the model, the dialogue games, ...), we have started to investigate collaborative document retrieval on a transport law database (http://www.idit.asso.fr). Table 1 . 1 Generic step table. DG is a dialogue game and SDG is a sub-dialogue game. α, β k , γ are actions, p, p k and p k ' are propositions, DA1 and DA2 are dialogical actions and E k are predicates. z and z' stand for the participants to the interaction.s, s 1 and s 2 are states. Table 2 2 presents a translated extract of a dialogue (explained in Section 4.2.2). A1 [. . . 
] Perhaps we can try to widen the search in our case if we consider the words we used B2 We didn't use that much already A3 Ummm, forget it then B4 Why remove / we can remove "analysis" A5 So let's remove "analysis" B6 And "diagnostic" A7 Yes [. . . ] A8 [. . . ] I am almost tempted to put diagnostic anyway because / because we will see what it yields B9 Table 3 . 3 Verbalization precision step, i < j. Verbalization precision Access T i C(x, preciseVerbalization, Crt) Expected game inform(x, verbalizationExpression(e)) * Output ∧ T j C(x, verbalizationExpression(e), Crt) * T j C(x, preciseVerbalization, Ful) Expected game inform(x, verbalizationComplete) Output T j C(x, verbalizationComplete, Crt) Table 4 . 4 Query precision step, i < j. x stands for the expert agent, y for the user and z either for the user or the expert agent declineOffer(x, specifyKeyWord(kw, skw)) ⇒ T j C(y, specifyKeyWord(kw, skw), Fal) * Expected game offer(y, addKeyWord(kw)) * Trigger T i relevantForPrecision(kw) Output ∨ acceptOffer(x, addKeyWord(kw)) ⇒ T j C(y, queryKeyWord(kw), Crt) * declineOffer(x, addKeyWord(kw)) ⇒ T j C(y, addKeyWord(kw), Fal) * offer(y, specifyKeyWord(kw, skw)) Expected game offer(y, specifyKeyWord(kw, skw)) Expected game inform(y, isSpecification(kw, skw)) Trigger ∅ Output T * Trigger T i C(z, queryKeyWord(kw), Crt) ∧ T i specification(kw, skw) Output ∨ acceptOffer(x, specifyKeyWord(kw, skw)) ⇒ ∧ T j C(y, queryKeyWord(skw), Crt) * T j C(y, queryKeyWord(kw), Ina) * j C(y, isSpecification(kw, s), Crt)
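To make the bookkeeping of Table 4 more concrete, the following minimal sketch (our illustration, not the system's implementation) represents social commitments as (holder, content, state) triples with the states Crt, Ful, Fal and Ina used above, and shows how accepting or declining the offer(y, specifyKeyWord(kw, skw)) game updates the conversational gameboard. The class, the method names and the example keywords are assumptions introduced for the example.

```python
from dataclasses import dataclass, field

# Commitment states used in the tables above
CRT, FUL, FAL, INA = "Crt", "Ful", "Fal", "Ina"

@dataclass
class Gameboard:
    """Minimal conversational gameboard: a store of social commitments."""
    commitments: list = field(default_factory=list)   # (holder, content, state)

    def set(self, holder, content, state):
        # a new state supersedes any previous state of the same commitment
        self.commitments = [(h, c, s) for (h, c, s) in self.commitments
                            if (h, c) != (holder, content)]
        self.commitments.append((holder, content, state))

    def offer_specify_keyword(self, holder, accepted, kw, skw):
        """Outcome of the offer(y, specifyKeyWord(kw, skw)) game of Table 4."""
        if accepted:
            self.set(holder, f"queryKeyWord({skw})", CRT)   # skw replaces kw
            self.set(holder, f"queryKeyWord({kw})", INA)
        else:
            self.set(holder, f"specifyKeyWord({kw}, {skw})", FAL)

board = Gameboard()
board.set("y", "queryKeyWord(asthma)", CRT)                 # illustrative keyword
board.offer_specify_keyword("y", True, "asthma", "asthma in children")
print(board.commitments)
```

The same pattern extends to the addKeyWord and launchQuery games of Table 4.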
35,795
[ "18736", "950633", "8251", "18708", "174775" ]
[ "405811", "23832", "23832", "174087", "23832" ]
01744850
en
[ "phys" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01744850/file/Azar2018.pdf
Elise Azar Doru Constantin Dror E Warschawski The effect of gramicidin inclusions on the local order of membrane components Keywords: gramicidin, membranes, NMR, WAXS, order parameter published or not. The documents may come L'archive ouverte pluridisciplinaire Introduction The eect on the cell membrane of inclusions (membrane proteins, antimicrobial peptides etc.) is a highly active eld of study in biophysics [1]. A very powerful principle employed in describing the interaction between proteins and membranes is that of hydrophobic matching [2,3]. It states that proteins with a given hydrophobic length insert preferentially into membranes with a similar hydrophobic thickness [4]. Many studies of the interaction used as inclusion the antimicrobial peptide (AMP) Gramicidin A (GramA), which is known [5,6] to deform (stretch or compress) host membranes to bring them closer to its own hydrophobic length, so the hydrophobic matching mechanism is likely relevant. This perturbation of the membrane prole induces between the GramA pores in bilayers with various compositions [7] a repulsive interaction that can be explained based on a complete elastic model [8]. This large-scale description raises however fundamental questions about the microscopic eect of the inclusion, at the scale of the lipid or surfactant molecules composing the membrane. To what extent is their local arrangement perturbed by the inclusion? Is the continuous elastic model employed for bare membranes still valid? In this paper, our goal is to investigate the inuence of GramA inclusions on the local order of the lipid or surfactant chains. We combine two complementary techniques: wide-angle X-ray scattering (WAXS) gives access to the positional order between neighboring chains, while nuclear magnetic resonance (NMR) is sensitive to the orientational order of chain segments, thus yielding a comprehensive picture of the state of the membrane as a function of the concentration of inclusions. We study GramA inserted within bilayers composed of lipids with phosphocholine heads and saturated lipid chains: 1,2-dilauroyl-sn -glycero-3-phosphocholine (DLPC) and 1,2-dimyristoyl-sn -glycero-3-phosphocholine (DMPC) or of single-chain surfactants with zwitterionic or nonionic head groups: dodecyl dimethyl amine oxide (DDAO) and tetraethyleneglycol monododecyl ether (C 12 EO 4 ), respectively. the hydrophobic length of DLPC (20.8 Å) [5] DDAO (18.4 Å) [6] and C 12 EO 4 (18.8 Å) [7] is shorter than that of GramA (22 Å) [9], while DMPC (25.3 Å) [5] is longer. Since all these molecules form bilayers, and their hydrophobic length is close to that of GramA, the latter is expected to adopt the native helical dimer conguration described by Ketchem et al. [10], and not the intertwined double helices observed in methanol [11] or in SDS micelles [12]. As for many molecules containing hydrocarbon chains, the WAXS signal of lipid bilayers exhibits a distinctive peak with position q 0 ∼ 14 nm -1 , indicative of the packing of these chains in the core of the membrane. Although a full description of the scattered intensity would require an involved model based on liquid state theory [13], the width of the peak provides a quantitative measurement for the positional order of the lipid chains: the longer the range of order, the narrower the peak The eect of peptide inclusions on the chain peak has been studied for decades [14]. Systematic investigations have shown that some AMPs (e. g. 
magainin) have a very strong disrupting eect on the local order of the chains: the chain signal disappears almost completely for a modest concentration of inclusions [15,16,17]. With other peptides, the changes in peak position and width are more subtle [18] and can even lead to a sharper chain peak (as for the SARS coronavirus E protein [19]). To our knowledge, however, no WAXS studies of the eect of GramA on the chain signal have been published. NMR can probe global and local order parameters in various lipid phases and along the lipid chain. Deuterium ( 2 H) NMR has been the method of choice since the 1970s and has proven very successful until today [20,21,22,23]. The eect of GramA on the order parameter of the lipid (or surfactant) chains has already been studied by deuterium ( 2 H) NMR in membranes composed of DMPC [21,24,25,26], DLPC [26] and DDAO [6], but not necessarily at the same temperature, concentration or lipid position as studied here. Here, we use a novel application of solid-state NMR under magic-angle spinning (MAS) and dipolar recoupling, called the Dipolar Recoupling On-Axis with Scaling and Shape Preservation (DROSS) [27]. It provides similar information as 2 H NMR, by recording simultaneously the isotropic 13 C chemical shifts (at natural abundance) and the 13 C- 1 H dipolar couplings at each carbon position along the lipid or surfactant chain and head group regions. The (absolute value of the) 13 C-1 H orientation order parameter S CH = 3 cos 2 θ -1 /2, with θ the angle between the internuclear vector and the motional axis, is extracted from those dipolar couplings, and the variation of order proles with temperature or cholesterol content has already been probed, with lipids that were dicult to deuterate [28,29]. Using the same approach, we monitor the lipid or surfactant order prole when membranes are doped with dierent concentrations of gramicidin. The main advantages of 13 C over 2 H are: the possibility to study natural lipids, with no isotopic labeling, and the high spectral resolution provided by 13 C-NMR, allowing the observation of all carbons along the lipid in a single 2D experiment. Segmental order parameters are deduced, via a simple equation, from the doublet splittings in the second dimension of the 2D spectra. The data treatment is simple for nonspecialists and the sample preparation is very easy since there is no need for isotopic enrichment. All these facts make this technique ideal to probe and study new molecules and to be able to compare the results with the ones obtained with other similar particles. The downsides are the reduced precision in the measurement and the impossibility to extract data from lipids in the gel phase. In particular, carbons at the interfacial region of the lipids (at the glycerol backbone and at the top of the acyl chains) are less sensitive to changes in membrane rigidity, and while subtle changes can be detected with 2 H-NMR, they are dicult to interpret with 13 C-NMR at these positions. Furthermore, the ineciency of the DROSS method in the gel phase would theoretically allow measuring the lipid order in uid phases coexisting with gel phases and quantifying the amount of lipids in each phase. In our measurements, lipids in the gel phase were not abundant enough to be detected. 2 Materials and methods Sample preparation The samples were prepared from stock solutions of lipid or surfactant and respectively Gram A in isopropanol. 
We mix the two solutions at the desired concentration and briey stir the vials using a tabletop vortexer. The resulting solutions are then left to dry under vacuum at room temperature until all the solvent evaporates, as veried by repeated weighing. The absence of residual isopropanol was cheked by 1 H NMR. We then add the desired amount of water and mix the sample thoroughly using the vortexer and then by centrifuging the vials back and forth. For WAXS, we used a microspatula to deposit small amounts of sample in the opening of a glass X-ray capillary (WJM-Glas Müller GmbH, Berlin), 1.5 or 2 mm in diameter and we centrifuged the capillary until the sample moved to the bottom. We repeated the process until reaching a sample height of about 1.5 cm. The capillary was then either ame-sealed or closed using a glue gun. For NMR, approximately 100 mg of GramA/lipid or GramA/surfactant dispersion in deuterated water were introduced in a 4 mm-diameter rotor for solid-state NMR. NMR NMR experiments with DMPC, DLPC and C 12 EO 4 were performed with a Bruker AVANCE 400-WB NMR spectrometer ( 1 H resonance at 400 MHz, 13 C resonance at 100 MHz) using a Bruker 4-mm MAS probe. NMR experiments with DDAO were performed with a Bruker AVANCE 300-WB NMR spectrometer ( 1 H resonance at 300 MHz, 13 C resonance at 75 MHz) using a Bruker 4-mm MAS probe. All experiments were performed at 30 • C. The DROSS pulse sequence [27] with a scaling factor χ = 0.393 was used with carefully set pulse lengths and refocused insensitive nuclei enhanced by polarization transfer (RINEPT) with delays set to 1/8 J and 1/4 J and a J value of 125 Hz. The spinning rate was set at 5 kHz, typical pulse lengths were 13 C (90 • ) = 3 µs, 1 H (90 • ) = 2. 5 µs and 1 H two-pulse phase-modulation (TPPM) decoupling was performed at 50 kHz with a phase modulation angle of 15 • . 1D spectra were acquired using the simple 13 C-RINEPT sequence with the same parameters. For the 2D spectra, 64 free induction decays were acquired, with 64 to 512 scans summed, a recycle delay of 3 s, a spectral width of 32 kHz and 8000 complex points. The total acquisition time was between 2 and 14 h. The data were treated using the Bruker TopSpin 3.2 software. Resonance assignments followed that of previously published data [24,27,22,30,31], using the C ω-n convention, where n is the total number of segments, decreasing from the terminal methyl segment, C ω , to the upper carbonyl seg- ment C 1 . This representation permits a segment-by-segment comparison of the chain regions. Backbone regions are assigned according to the stereospecic nomenclature (sn) convention for the glycerol moiety. Phosphocholine head group carbons are given greek (α, β, γ) letter designations. The internal reference was chosen to be the acyl chain terminal 13 CH 3 resonance assigned to 14 ppm for all lipids and surfactants studied here. Order parameters were extracted from the 2D DROSS spectra by measuring the dipolar splittings of the Pake doublet at each carbon site. This splitting was converted into a dipolar coupling by taking the scaling factor χ into account. The absolute value of the segmental order parameter is an additional scaling factor χ of the static dipolar coupling into the measured dipolar coupling. Since the static dipolar coupling, on the order of 20 kHz, is not known with high precision for each carbon, we have adjusted it empirically in the case of DMPC, by comparing it to previously determined values [27,22,30]. 
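As a minimal illustration of the conversion just described, the sketch below turns a measured Pake-doublet splitting into |S_CH| using the DROSS scaling factor and a static dipolar coupling of the order quoted above. The function name and the example splitting are ours, and the static coupling is only an order of magnitude that should be adjusted empirically as explained in the text.

```python
# Order-parameter extraction sketch: the dipolar splitting measured on a 2D
# DROSS slice is divided by the scaling factor of the pulse sequence and by
# the (empirically adjusted) static 13C-1H dipolar coupling.
CHI = 0.393             # DROSS scaling factor used in the experiments
D_STATIC_HZ = 20_000.0  # static 13C-1H dipolar coupling, order of magnitude

def order_parameter(splitting_hz, chi=CHI, d_static_hz=D_STATIC_HZ):
    """|S_CH| from the measured Pake-doublet splitting (in Hz)."""
    dipolar_coupling = splitting_hz / chi        # undo the sequence scaling
    return abs(dipolar_coupling / d_static_hz)   # normalize by the static coupling

# Example: a 1.6 kHz splitting would correspond to |S_CH| of about 0.20
print(order_parameter(1600.0))
```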
WAXS We recorded the scattered intensity I as a function of the scattering vector q = (4π/λ) sin(θ), where λ is the X-ray wavelength and 2θ is the angle between the incident and the scattered beams. Lipids X-ray scattering measurements on the GramA/DLPC and GramA/DMPC systems were performed at the ID02 beamline (ESRF, Grenoble), in a SAXS+WAXS configuration, at an X-ray energy of 12.4 keV (λ = 1 Å). The WAXS range was from 5 to 53 nm -1 . We recorded the integrated intensity I(q) and subtracted the scattering signal of an empty capillary, as well as that of a water sample (weighted by the water volume fraction in the lipid samples). We used nine peptide-to-lipid molar ratios P/L ranging from 0 to 1/5 and three temperature points: 20, 30 and 40 °C. The chain peak was fitted with a Lorentzian function: I(q) = I_0 / [1 + ((q - q_0)/γ)^2]. We are mainly interested in the parameter γ, the half-width at half maximum (HWHM) of the peak. Surfactants The GramA/DDAO and GramA/C 12 EO 4 systems were studied using an in-house setup with a molybdenum rotating anode as source [32]. The X-ray energy is 17.4 keV (λ = 0.71 Å) and the sample-to-detector distance is 75 cm, yielding an accessible q-range of 0.3 to 30 nm -1 . We used five peptide-to-surfactant molar ratios (also denoted by P/L) ranging from 0 to 1/5.5 and eight temperature points, from 0 to 60 °C. The best fit for the peak was obtained using a Gaussian function: I(q) = I_0 exp[-(q - q_0)^2/(2σ^2)]. For coherence with the measurements on lipid systems, we present the results in terms of the HWHM γ = √(2 ln 2) σ. We emphasize that the difference in peak shape (Lorentzian vs. Gaussian) is intrinsic to the systems (double-chain lipids vs. single-chain surfactants) and not due to the resolution of the experimental setups, which is much better than the typical HWHM values measured. 3 Results and discussion NMR We acquired twelve 2D spectra for various surfactants and GramA concentrations. Figure 1 shows the 2D DROSS NMR spectrum of C 12 EO 4 with a molar GramA concentration P/L = 0.118. For each 2D spectrum, slices were extracted at each carbon position and order parameters were deduced. Figure 2 shows a set of such representative slices (at the position C ω-2 ). As already explained, carbons at the glycerol backbone and at the first two positions along the acyl chains were discarded. Figure 3 shows the order profiles determined for each lipid and surfactant, with variable amounts of GramA. As shown in Figure 3, there is hardly any change for the headgroup region (C α , C β and C γ ), which is expected, considering the high mobility of this region, except in DDAO (CH 3 , C 2 and C 3 ). In the aliphatic region, in DMPC (Figure 3(a)), the order parameter increases for a ratio of P/L = 0.06 and then decreases for P/L = 0.115. In DLPC and C 12 EO 4 mixtures (Figure 3(b) and (c)), the order parameter increases upon gramicidin addition, with little dependence on the P/L ratio.
Overall, we conclude that the order profiles significantly increase along the acyl chains with the concentration of gramicidin, except in the case of DMPC, where the order profile globally increases with the addition of P/L = 0.05 of gramicidin and then decreases at P/L = 0.11. The same trend holds for the head group region of DDAO, where we show that gramicidin has the same effect as on the acyl chains. Consequently, we show that gramicidin generally rigidifies the acyl chains of DLPC, C 12 EO 4 and DDAO, as well as the head group region of DDAO. In the case of DMPC, gramicidin first rigidifies the acyl chains, but more peptide tends to return the membrane to its original fluidity. WAXS The chain peak has long been used as a marker for the ordered or disordered state of the hydrocarbon chains within the bilayer [34]. For lipids, an important parameter is the main transition (or chain melting) temperature, at which the chains go from a gel to a liquid crystalline (in short, liquid) phase [35]. The main transition temperature of pure DLPC is at about -1 °C. For the lipids, in the liquid phase the peak width increases slightly with P/L for all temperatures (Figure 5). In the gel phase of DMPC at 20 °C (Figure 5 right and Figure 4) this disordering effect is very pronounced, in agreement with the results of several different techniques, reviewed in Ref [41] (section V-A). The linear increase in HWHM with P/L can be interpreted as a broadening (rather than a shift) of the transition. The liquid crystalline phase value of the HWHM is reached only at the highest investigated P/L, amounting to one GramA molecule per 5 or 6 lipids. For surfactants, which we only studied in the liquid crystalline phase, changes to the chain peak are slight. In C 12 EO 4 membranes, the peak position q 0 decreases very slightly with temperature (Figure 7), while the peak width is almost unchanged by temperature or gramicidin content (Figure 9 right). As an example, we observe a small decrease of q 0 with the temperature at P/L = 0.073 (Figure 7 left). This conclusion is confirmed by the very modest change in the HWHM values presented in Figure 9 on the right. At P/L = 0, the HWHM is very close to 2.6 nm -1 for all temperatures. As the gramicidin content increases, we observe a small gap between the different temperatures: the width stays constant or increases for the lower temperatures (up to about 40 °C) and decreases for the higher ones. This gap widens at high gramicidin content (P/L > 0.07). Fig. 9 HWHM as a function of the concentration P/L, for all measured temperatures. DDAO bilayers (left) and C 12 EO 4 bilayers (right). In the case of DDAO, the influence of gramicidin content is more notable than for C 12 EO 4 and the behavior is richer, especially in the presence of cholesterol. Without cholesterol, the DDAO WAXS peaks coincide for the different temperatures at a given inclusion concentration (e.g. in Figure 6 left at P/L = 0.178), whereas the profiles differ according to the gramicidin concentration for a given temperature (see Figure 6 right). These observations differ in the presence of cholesterol: for a given concentration of gramicidin inclusions (e.g. P/L = 0.082 in Figure 8 left), the spectra at different temperatures split into two distinct groups, as detailed below. For the DDAO system, the peak occurs at much lower q 0 with cholesterol than without: q 0 = 12.77 nm -1 at 20 °C, 12.62 nm -1 at 30 °C and 12.28 nm -1 at 50 °C.
Thus, the cholesterol expands DDAO bilayers, in contrast with the condensing eect observed in lipid membranes [42,43]. More detailed molecularscale studies would be needed to understand this phenomenon. Without cholesterol, the width of the main peak in DDAO membranes is little aected by a temperature change, at least between 0 and 60 • C. Without gramicidin, we observe two distinct HWHM values: ∼ 2.38 nm -1 at the lower temperatures (between 0 • C and room temperature) and ∼ 2.5 nm -1 for higher temperatures (between 30 and 60 • C), but this gap closes with the addition of gramicidin, and at high P/L only an insignicant dierence of 0.05 nm -1 persists (Figure 9 left). On the other hand, at a given temperature the HWHM does vary as a function of P/L. This change is sigmoidal, with an average HWHM of ∼ 2.4 nm -1 for P/L < 0.05 and ∼ 2.7 nm -1 for P/L > 0.11. Thus, above this concentration, the gramicidin decreases slightly the positional order of the chains. An opposite eect is observed in the presence of cholesterol (Figure 10), where at high temperature (40 -60 • C) the HWHM drops with the P/L: for instance, from 2.37 nm -1 to 2.08 nm -1 at 60 • C. At low temperature (0-30 • C) there is no systematic dependence on P/L. Overall we can conclude that gramicidin addition has an eect that diers according to the membrane composition. The temperature has a signicant inuence only in the presence of cholesterol. In all surfactant systems and over the temperature range from 0 to 60 • C, the peak is broad, indicating that the alkyl chains are in the liquid crystalline state. There are however subtle dierences between the dierent compositions, as detailed below. In C 12 EO 4 membranes, the peak position q 0 decreases very slightly with temperature, while the HWHM is almost unchanged by temperature or gramicidin content. For DDAO (without cholesterol), q 0 also decreases with temperature at a given P/L, but increases with P/L at xed temperature. On adding gram- icidin, the HWHM increases slightly with a sigmoidal dependence on P/L. Thus, a high gramicidin concentration P/L ≥ 0.1 reduces the positional order of the chains in DDAO bilayers. The opposite behavior is measured in DDAO membranes with cholesterol. Adding gramicidin inclusions have two distinct behaviors depending on the temperature. For low temperatures (between 0 • C and 30 • C) we have a small peptide concentration dependence and a clear temperature correlation, whereas at high temperatures (between 40 • C and 60 • C) we have a strong decrease in the HWHM in presence of inclusions depending only with the P/L content without any variation with the temperature rise. Since at P/L = 0 the HWHM value is very close for the dierent temperatures then we can conclude that adding gramicidin to a membrane containing cholesterol helps rigidify it. Comparing the NMR and WAXS results Although the orientational and positional order parameters are distinct physical parameters, one would expect them to be correlated (e.g. straighter molecules can be more tightly packed, as in the gel phase with respect to the uid phase.) This tendency is indeed observed in our measurements, with the exception of DDAO. We measured by NMR that the orientational order parameter for DMPC increases when adding P/L = 0.05 and slightly decreases at P/L = 0.1 (Figure 3(a)). This behavior was also measured by WAXS for the positional order parameter at both P/L values (Figure 5 right). 
Similarly, we measured for DLPC acyl chains the same orientational and positional order proles where the order increases for P/L = 0.05 and remains the same when adding P/L = 0.1 gramicidin (Figure 3(b) and 5 left). As for the C 12 EO 4 surfactant acyl chains, we found a modest raise in both the orientational and the positional order parameters when adding the gramicidin peptide with no dependence on the P/L molar ratio (Figure 3(c) and 9 right). In the case of DDAO we found that adding gramicidin signicantly increases the orientational order (Figure 3(d)) and decreases the positional order (Figure 9 left). Solid-state NMR also shows an abrupt change in the headgroup region when little GramA is added, followed by a more gradual ordering of the acyl chain when more GramA is added. This may imply a particular geometrical reorganisation of DDAO around the GramA inclusion that could be tested with molecular models. Conclusions Using solid-state NMR and wide angle X-ray scattering, we showed that inserting Gramicidin A in lipid and surfactant bilayers modies the local order of the constituent acyl chains depending on multiple factors. In particular, we studied the inuence of membrane composition and temperature on the local order. The behavior of this local order is quite rich, with signicant dierences between lipids, on the one hand, and single-tail surfactants, on the other, but also between DDAO and all the other systems. We showed that adding gramicidin inuences the orientational order of the acyl chains and we nd a similar behavior for the orientational order and the positional order, except in the particular case of DDAO. In this system, GramA content seems to notably inuence the DDAO acyl chains by decreasing their positional order and increasing their orientational order. GramA also inuences the orientational order of the head groups. Also in DDAO, we showed by WAXS that the temperature has a signicant inuence on the positional order only in the presence of cholesterol. In the gel phase of DMPC, GramA addition leads to a linear decrease in positional order, saturating at the liquid phase value for a molar ratio P/L between 1/6 and 1/5. In the liquid phase, we measure relatively small modications in the local order in terms of position and orientation when adding Gramicidin A, especially in the case of DMPC, DLPC and C 12 EO 4 . This is a very signicant result, which allows further elaboration of elastic models in the presence of inclusions by using the same elastic constants obtained for bare membranes. As seen above for DDAO, in some membranes the presence of inclusions inuences dierently the positional and orientational order of the acyl chains. Consequently, combining both techniques (NMR and WAXS) on the same system is very useful in obtaining a full image of the local order. A more detailed analysis could be performed by comparing our results with molecular dynamics simulations. The correlation between changes in the chain order and larger-scale parameters of the bilayer (e.g. the elastic properties) could be established by using dynamic techniques, such as Neutron spin echo. Fig. 1 1 Fig. 1 Example of a 2D 1 H -13 C DROSS spectrum for GramA/C 12 EO 4 with P/L = 0.118. Fig. 2 2 Fig. 2 Dipolar coupling slices of the C ω- at 30 • C. FFig. 3 3 Fig. 3 Orientational order parameter |S CH | for DMPC (a), DLPC (b), C 12 EO 4 (c) and DDAO (d) bilayers embedded with GramA pores for dierent P/L at 30 • C. Error bars are smaller than symbol size. Fig. 4 4 Fig. 
4 Chain peak for DMPC in bilayers doped with varying amounts of GramA at 20 • C. Fig. 5 5 Fig.5 Width of the chain peak for DLPC (left) and DMPC (right) bilayers as a function of the GramA doping at three temperatures. left), as well as a very slight increase with P/L at 20 • C, as seen in Figure7right. If we take the overall WAXS peak position shift as a function of temperature and for all inclusions concentration (data not shown) we have a small temperature dependence for each P/L. Comparing the value in absence of inclusion, the peak position slightly shifts after adding gramicidin at a P/L = 0.015 but remains almost the same for the dierent gramicidin content, showing no signicant inuence of the inclusions on the C 12 EO 4 membranes. Fig. 6 6 Fig. 6 Scattered signal I(q) for DDAO bilayers, as a function of temperature for the most concentrated sample, with P/L = 0.178 (left) and for all concentrations at T = 40 • C (right). Fig. 7 7 Fig. 7 Scattered signal I(q) for C 12 EO 4 bilayers, as a function of temperature for a sample with P/L = 0.073 (left) and as a function of concentration at room temperature: T = 20 • C (right). 8 Scattered signal I(q) for DDAO/Cholesterol bilayers, as a function of temperature for a sample with P/L = 0.082 (left) and as a function of concentration at T = 50 • C (right). left) at dierent temperatures, we observe two families in which the spectra are quasi identical: one group at low temperatures (0-30 • C) and another distinct group at higher temperatures (40 -60 • C). At 20 • C, the peaks for DDAO Cholesterol tend to superpose for P/L > 0.028 (data not shown), whereas at 50 • C (Figure 8 right) the peak proles dier and vary with P/L. Fig. 10 10 Fig. 10 HWHM as a function of the concentration P/L, for all measured temperatures in the GramA/DDAO+Cholesterol/ H 2 O system. • C[36,37,[START_REF] Cevc | Phospholipid bilayers: physical principles and models[END_REF][START_REF] Ku£erka | [END_REF] and that of pure DMPC is between 23 • C and 24 • C[40,36,37,[START_REF] Cevc | Phospholipid bilayers: physical principles and models[END_REF][START_REF] Ku£erka | [END_REF]. 65 Gram/DMPC 20°C 60 P/L 55 50 1/5.21 1/6.26 1/7.82 45 1/10.43 I [10 ] -1 mm -3 40 35 30 25 1/14.6 1/21 1/31 1/52 0 (pure DMPC) 20 15 10 5 0 5 10 15 20 q [nm -1 ] 25 30 35 Acknowledgements We thank the CMCP (UPMC, CNRS, Collège de France) for the use of their Bruker AVANCE 300-WB NMR spectrometer. The ESRF is acknowledged for the provision of beamtime (experiment SC-2876) and Jérémie Gummel for his support. This work was supported by the ANR under contract MEMINT (2012-BS04-0023). We also acknowledge B. Abécassis and O. Taché for their support with the WAXS experiment on the MOMAC setup at the LPS.
27,157
[ "184573" ]
[ "134", "1005029" ]
01759998
en
[ "info" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01759998/file/paper.pdf
Bastien Confais Adrien Lebre Benoît Parrein Improving locality of an object store working in a Fog environment Introduction The Cloud Computing model, relying on a few datacenters located far from the users, cannot satisfy the new constraints of the Internet of Things (IoT), especially in terms of latency and reactivity. Fog and Edge computing infrastructures have been proposed as an alternative [START_REF] Bonomi | Fog Computing and Its Role in the Internet of Things[END_REF]. This new paradigm consists of deploying small data-centers close to the users located at the edge to provide them with low-latency computing. Figure 1 shows that the Fog platform is composed of a significant number of sites that can be geographically spread over a large area. IoT devices and users are mobile and are connected to the closest Fog Computing site. We consider the network latency between the client and the closest Fog site to be lower than the network latency between the Fog sites. We work on a storage solution for this infrastructure. Our goal is to create a seamless storage experience across different sites, capable of working in a disconnected mode by containing the network traffic to the sites soliciting the data. We previously proposed to use the object store Interplanetary FileSystem [START_REF] Benet | IPFS -Content Addressed, Versioned, P2P File System[END_REF][START_REF] Confais | Performance Analysis of Object Store Systems in a Fog and Edge Computing Infrastructure[END_REF] as a Fog storage system because it enables users and IoT devices to write locally on their site but also to automatically relocate the accessed objects to the site where they are requested. After improving the locality of IPFS by adding a Scale-Out NAS [START_REF] Confais | An object store service for a Fog/Edge Computing infrastructure based on ipfs and a scale-out NAS[END_REF] on each site and proposing to manage the location of objects in a tree, we evaluated our proposals on the Grid'5000 system by adding virtual network latencies between Fog nodes and clients. We now want to evaluate it in a more realistic environment. We propose to evaluate the performance of the storage system using the G5K and FIT platforms together: the G5K platform emulates the Fog layer whereas the Edge layer is emulated on the FIT one. Locations like Grenoble, Lyon or Lille are appropriate for our experiment because they host both a G5K and a FIT site, so that we can expect the network latency between IoT nodes and Fog nodes to be low. Interconnecting the Grid'5000 and the FIT platforms. Figure 2 shows the general topology used for the experiment. We developed a RIOT application to enable IoT nodes to access an IPFS server. The scenario consists in reading a value from a sensor and storing it in an object on the IPFS server located in the Fog. The challenge for such a scenario is to connect the two platforms. Reaching the G5K platform from the IoT node is not easy, because G5K nodes and FIT nodes are connected to the Internet through IPv4 NAT and the G5K platform does not provide any IPv6 connectivity. Because of the lack of end-to-end IP connectivity between the two platforms, we encapsulate messages in SSH tunnels between an A8 node on the FIT platform and the IPFS node. Instead of accessing the IPFS node directly (plain arrow), the client accesses it through the introduced A8 node, which acts as a proxy (dashed arrow).
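As a rough illustration of this workaround, the sketch below forwards the IPFS HTTP API through an SSH tunnel and stores one sensor reading. The host names, ports, the use of the requests library and the go-ipfs /api/v0/add endpoint are assumptions made for the example; for readability the sketch is written in Python and run from a commodity host, whereas the actual client is a RIOT application on the IoT node.

```python
import subprocess
import requests

A8_PROXY = "a8-node.fit-example"     # hypothetical reachable FIT front-end
IPFS_NODE = "ipfs-node.g5k-example"  # hypothetical Grid'5000 Fog node
LOCAL_PORT = 5001
IPFS_API_PORT = 5001                 # default go-ipfs HTTP API port

# Standard OpenSSH local port forwarding: ssh -N -L <local>:<target>:<port> <proxy>
# (in practice one would wait for the tunnel to be established before using it)
tunnel = subprocess.Popen(
    ["ssh", "-N", "-L", f"{LOCAL_PORT}:{IPFS_NODE}:{IPFS_API_PORT}", A8_PROXY])

def store_reading(value):
    """Store one sensor value as an IPFS object through the tunnel and
    return the content identifier reported by the /api/v0/add endpoint."""
    resp = requests.post(
        f"http://127.0.0.1:{LOCAL_PORT}/api/v0/add",
        files={"file": ("reading.txt", str(value))})
    resp.raise_for_status()
    return resp.json()["Hash"]

print(store_reading(21.5))   # e.g. a temperature read on the IoT node
tunnel.terminate()
```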
This solution works but is not ideal, not only because the tunnel degrades the performance but also because the IP routing is not direct: the path between the two platforms in Grenoble goes through Sophia, increasing the network latency. Conclusion This experiment was one of the first involving the two platforms simultaneously, and we pointed out the difficulties of interconnecting them. Once these problems are solved, we plan to perform a more advanced scenario involving node mobility or the possibility for a node to choose the Fog site to use. Fig. 1: Overview of a Cloud, Fog and Edge infrastructure. Fig. 2: General architecture of our experiment and interconnection between the Grid'5000 and the FIT platforms.
5,217
[ "17303", "3254", "3931" ]
[ "481376", "473973", "473973", "489559", "525233", "473973", "481376" ]
01480538
en
[ "phys" ]
2024/03/05 22:32:13
2016
https://hal.science/hal-01480538/file/machado2016.pdf
Guilherme Machado email: [email protected] Arthur Stricher email: [email protected] Grégory Chagnon email: [email protected] Denis Favier email: [email protected] Mechanical behavior of architectured photosensitive silicone membranes: experimental data and numerical analysis Keywords: architectured membranes, photosensitive silicone, biocompatible silicone, bulge test, bi-material hyperelastic solid Introduction Facing increasing demands for multifunctional solutions, architectured materials take an increasingly important place in many applications in order to design specific mechanical properties for a given purpose. Very often the function is not provided by the local property only (grain size, precipitation, polymer chain design and interchain bonding, state of crystallization), but by the interplay between the shape, the properties, and possible association of materials [START_REF] Brechet | Embury Architectured materials: Expanding materials space[END_REF]. For instance, some polymers are architectured due to their chain design [START_REF] Zuo | Effects of block architecture on structure and mechanical properties of olefin block copolymers under uniaxial deformation[END_REF]. Usually, this strategy operates at scales between 1 nm and 10 µm. Architectured silicone materials can also be composed with an association of materials (also called hybrid material) for example fiberreinforced [START_REF] Bailly | In-plane mechanics of soft architectured fibre-reinforced silicone rubber membranes[END_REF] or NiTi-reinforced membranes [START_REF] Rey | An original architectured NiTi silicone rubber structure for biomedical applications[END_REF]. Even if such composites are very good candidates for biomimetic membranes, this solution involves the integration of different synthetic materials often associated with local mismatches of mechanical properties and adhesion difficulties. Local mismatches may cause excessive stress concentrations within the structure and thus premature failure of the composite upon stretching. Thus, one of the challenges of the architectured material is in ensuring efficient stress transfer and in avoiding local failure between regions of different mechanical properties. Other materials can be considered as architectured because of their geometry. [START_REF] Meunier | Anisotropic large deformation of geometrically architectured unfilled silicone membranes[END_REF] and [START_REF] Rebouah | Development and modeling of filled silicone architectured membranes[END_REF] developed crenelated membranes with an unfilled and filled silicone rubber. The main advantage of these membranes is that they present an anisotropic behavior without any interface in the material. Indeed, the crenels and their orientations allow to induce and control the anisotropy, but this fact is limited by the out-plane geometry and the process to obtain the reinforced membrane. This paper focuses on the mechanical behavior of architectured silicone membranes where the membrane architecture is controlled by the in-plane intrinsic properties but also by a desired topology at scale between the microstructure and the application. The concept is to create a heterogeneous material with locally tuned mechanical properties by changing the local crosslink density. The approach can be exploited, for example, to create bioinspired membranes that mimic anisotropic structural properties of soft tissues. 
In this context, Section 2 presents all precautions concerning the experimental mechanical testing procedures and strain field measurements techniques. Experimental data and analyzes are presented into two parts. First, in Section 3, the three deformations modes (uniaxial, planar and equibiaxial tensile tests) for each phase are independently tested. Second, in Section 4, the bulge test of graded membrane containing two phases. In Section 5, a finite element analysis (FEA) is carried out using a hyperelastic model fitted simultaneously on the three previous tensile tests. Then, the numerical model was used to try to predict the bimaterial and results are discussed. Finally, Section 6 contains some concluding remarks and outlines some future perspectives. Testing procedures background and strain field measurements techniques A series of mechanical tests were carried out to characterize the silicone mechanical behavior in its soft and hard phases. First, for the three deformations modes: uniaxial, planar (pure shear) and equibiaxial tensile tests; second, for the bulge test of graded membrane containing two phases. Preparation of the silicone specimens Samples were prepared in the IMP laboratory (Ingénierie des Matériaux Polymères -Villeurbanne, France), using the polydimethylsiloxane (PDMS) elastomer in addition of an UV-sensitive photoinhibitor. The membrane was selectively exposed to UV radiation then the cross-linking of the UV exposed elastomer is inhibited, leading to a softer material than the unexposed zone. From this point forward, the soft phase denotes the UV exposed material and the hard phase the unexposed one. In-plane tests In-plane quasi-static experiments were conducted on a Gabo Explorer testing machine with ±25 N and ±500 N load cells for uniaxial tension and planar tension respectively. A 2D digital image correlation system (DIC) was used during the test to obtain 2D fields at the surface of plane specimens. The commercial VIC-2D 2009 software package from Correlated solutions was used to acquire images. The images were recorded at 1 Hz with a Pike F-421B/C CCD camera with a sensor resolution of C r = 7.4 µm/pixel. The reason for this large sensor format is the goal to achieve high resolution images with low noise. The 50 mm camera lens was set to f /22 using a 50 pixels extension ring. Grayscale 8 bit images were captured using a full scan of 2048 pixels × 2048 pixels. After all, a cross-correlation function was used and displacement vectors were calculated by correlating square facets (or subsets) of f size = 21 pixels and grid spacing G s = 10 pixels to carry out the correlation process for the undeformed and deformed images. To achieve a sub-pixel accuracy, optimized 8-tap splines were used for the gray value interpolation. As the optimization criteria for the subset matching, a zero-normalized squared difference was adopted, which is insensitive to offset and scale in lighting. For the weighting function, the Gaussian distribution was selected, as it provides the best compromise between spatial and displacement resolution [START_REF] Sutton | Image correlation for shape, motion and deformation measurements[END_REF]. In uniaxial tensile experiment, the spatial resolution (the physical size per pixel) was S r = 15 µm. Likewise, for the planar tension S r = 7 µm. Out-plane tests The bulge test was conducted in order to determinate an equibiaxial state of both phases and also tested the soft-hard bimaterial. 
A syringe driver was used and the internal pressure is measured by an AZ-8215 digital manometer. Seeing that the material is partially transparent, milk was used as hydrostatic fluid to increase the gray contrast for DIC and to avoid internal reflections. Inflation was sufficient slow to obtain a quasi-static load. Under the assumption of material isotropy over the circumferential direction, the principal directions of both stretch and stress tensors at each material particle are known ab initio to be the meridional and circumferential directions of the membrane surface. From this point forward, these directions will be denoted by the subscripts m and c respectively. Assuming quasi-static motion, the equilibrium equations for a thin axisymmetric isotropic membrane, as adopted by Hill [START_REF] Hill | A theory of the plastic bulging of a metal diaphragm by lateral pressure[END_REF], can be expressed as σ m = p 2h κ c (1) σ c = p 2h κ c 2 - κ m κ c (2) where (σ m , σ c ) are the meridional and circumferential stresses and (κ m , κ c ) are the meridional and circumferential curvatures. h is the current thickness and p is the time-dependent normal pressure acting uniformly (dp/dR = 0) over the radius R. As mentioned in [START_REF] Wineman | Large axisymmetric inflation of a nonlinear viscoelastic membrane by lateral pressure[END_REF] and [START_REF] Humphrey | Computer methods in membrane biomechanics[END_REF] a remarkable consequence of membrane theory is that it admits equilibrium solutions without explicitly requiring a constitutive equation, since the equilibrium equations are derived directly by balancing forces of a deformed element shape. As a consequence, they are valid for all classes of in-plane isotropic materials. Recently, Machado et al. [START_REF] Machado | Membrane curvatures and stress-strain full fields of axisymmetric bulge tests from 3D-DIC measurements. Theory and validation on virtual and experimental results[END_REF] presented a methodology to compute the membrane curvature of the bulge test from 3D-DIC measurements. A very convenient calculation scheme was proposed based on the surface representation in curvilinear coordinates. From that scheme, the circumferential and meridional curvatures, and also the respective stresses, can be computed. In [START_REF] Machado | Membrane curvatures and stress-strain full fields of axisymmetric bulge tests from 3D-DIC measurements. Theory and validation on virtual and experimental results[END_REF] authors presented an evaluation scheme for the bulge test based on the determination of the surface curvature tensor and the membrane stress tensor. With this method, the circumferential as well as the meridional stress can be determined at every stage and position of the specimen. The commercial VIC-3D 7 software package from Correlated solutions was used to acquire images using two digital Pike cameras described in Section 2.3. Both cameras were set up at D = 150 mm distance and 35 • angle to the specimen using 28 mm focal length lenses opened at f /16. Previous to the test, a good calibration of the 3D-DIC system is required. The following correlation options were chosen: 8-tap splines were used for the gray value interpolation, zeronormalized squared difference for the subset matching and the Gaussian distribution for the weighting function. Square facets of f size = 15 pixels and grid spacing G s = 5 pixels to carry out the correlation process for the undeformed and deformed images. The obtained spatial resolution is S r = 15 µm. 
Note that the spatial resolution (S r ) of the discretized surface depends essentially on camera sensor resolution (C r ) and on the choice of the grid spacing (G s ) that defines the distance between the data points on the object. The grid spacing is the distance between the grid points in pixel. Thus, grid spacing limits the spatial resolution, as each grid point represents one single data point of the result. The facet size controls the area of the image that is used to track the displacement between images. The minimal facet size (f size ) is limited by the size and roughness of the stochastic pattern on the object surface. Each facet must be large enough to ensure that there is a sufficiently distinctive pattern, with good contrast features, contained in the area-of-interest used for correlation. Soft and Hard phases: experimental results and analysis Uniaxial tension test Uniaxial tensile tests were performed on small dog-bone shaped specimens. The samples had an initial gage length l 0 = 12 mm, width w 0 = 4 mm and thickness h 0 = 0.8 mm. During the test, using an elongation rate of λ = 3.0 × 10 -2 s -1 , the nominal stress tensor P (First Piola-Kirchhoff stress tensor) is assumed to be homogeneous within the gauge region as well as the deformation gradient tensor F. Since the current thickness is not measured, the material is assumed to be incompressible, i.e., det (F) = 1. A cyclic loading-unloading test was realized for soft and hard phases, the curves are presented in Fig. 1. In the same figure, the first load of both phases are plotted. Different phenomena are highlighted, first a large stress-softening appears by comparing the two first loading at each strain level. A little hysteresis after the first cycle is observed. Moreover, few residual elongation is observed for both phases. for soft and hard phases. Planar tension test The pure shear strain state was approached by performing planar tension test. The initial height l 0 , the constant width w 0 and the thickness h 0 of samples were 4.5 mm, 40 mm and 0.8 mm, respectively. The width of the specimen used for planar tension test must be at least ten times greater than its length. These dimensions have as objective to create an experiment where the specimen is constrained in the lateral direction such that all specimen thinning occurs in the thickness direction. A cyclic planar loading test was realized for both phases at λ = 1.0 × 10 -2 s -1 . The results are presented in Fig. 2. Planar tensile response, likewise uniaxial traction, presents the same phenomena. For the soft phase, the maximum principal stretch experienced by the planar specimens are smaller if compared with uniaxial tensile test specimens. In general, this limitation lies in the fact that the planar tensile specimens must be constrained in the lateral direction without slipping. In this manner, the annoying premature tearing at the grips is observed. This is the major difficulty in planar tensile tests of thin specimens. Equibiaxial tension using the bulge test The equibiaxial tension state is approached by the bulge test. Due to the axial-symmetry of the experimental configuration the equibiaxiality of the stress and strain is obtained at the top the inflated sample. The elongation rate was not controlled, but the pressure p is slowly increased. The stress-strain curve for the central area are presented in Fig. 3 for a cyclic loading. The response are qualitatively similar to uniaxial loading with hysteresis and stress-softening. 
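Since the equibiaxial data above come from the bulge test, a short sketch of how the membrane equilibrium relations of Eqs. (1) and (2) convert the measured pressure, current thickness and principal curvatures into stresses may be useful. The function and the numerical values below are illustrative only and are not taken from the experiments.

```python
# Illustrative evaluation of Eqs. (1) and (2): stresses at a point of the
# inflated membrane from pressure p, current thickness h and the principal
# curvatures (SI units: Pa, m, 1/m).
def bulge_stresses(p, h, kappa_m, kappa_c):
    """Meridional and circumferential Cauchy stresses, Eqs. (1) and (2)."""
    sigma_m = p / (2.0 * h * kappa_c)
    sigma_c = sigma_m * (2.0 - kappa_m / kappa_c)
    return sigma_m, sigma_c

# Near the pole the membrane is almost spherical (kappa_m close to kappa_c),
# so both stresses reduce to the familiar p R / (2 h) of a sphere.
p = 15e3             # 15 kPa inflation pressure
h = 0.25e-3          # current thickness, thinner than the initial 0.4 mm
kappa = 1.0 / 20e-3  # curvature of a cap of roughly 20 mm radius
print(bulge_stresses(p, h, kappa, kappa))
```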
Analysis Table 1 presents a comparative for soft and hard phases for the three different experimental load cases (uniaxial, plane shear and equibiaxial). It is easily determined from classical isotropic elasticity theory that, during quasi-linear stages, the ratio of the stress-strain slopes between uniaxial tension (E) and equibiaxial tension ( E e ) is E e /E = 1/(1 -ν). The ratio E e /E was determined experimentally to be 2 for both phases. This is compatible with an incompressible material with Poisson's ratio ν = 0.5. The Young's modulus ratio between the two phases (E H/S ) was about 3.5 for uniaxial and equibiaxial states and 2.5 for plane shear. At the beginning of loading, the stress ratio between phases at λ = 1.1 of strain (P 10% S ) are closer than stress ratio at λ = 2.0. The mean stress ratio P calculates over all load history is kept practically constant for all loading cases. The specimen disk effective dimensions are 18.5 mm of radius and 0.4 mm of thickness. The UV-irradiated zone is concentric circle of 10 mm diameter. Cross-linking of the UV exposed elastomer is inhibited, leading to softer region than the surrounding unexposed part,as illustrated in Fig. 4. The bulge test was chosen to test the bimaterial for two main reasons: stress-concentrations can be easily access the soft-hard interface is far from boundary conditions; each inflation state involves a heterogeneous stress-strain state which can be determined analytically. Having said that, bulge offers a valuable data for modeling benchmark. Inflations were performed from 1 kPa to a maximum pressure of 25 kPa, therefore, for clarity, three levels were chosen to present the results: 6, 15 and 25 kPa. These inflations yielded principal stretches at the pole of about 1.16, 1.43 and 2.21 respectively. Fig. 6 presents principal stretches values (λ m , λ c ) obtained from 3D-DIC system. Save the pole (R = 0) and the clamped boundary (R = 1), all material points involve a heterogeneous strain state. As expected, the circumferential stretch λ c tends to one, i.e., a pure planar stretching behavior towards the clamped boundary (R → 1). However, most of the hard phase deformation is on the circumferential direction since λ c is less than 1.2 even for the maximal pressure level. Principal curvatures (κ m , κ c ) and principal stresses (σ m , σ c ) were computed as explained in [START_REF] Machado | Membrane curvatures and stress-strain full fields of axisymmetric bulge tests from 3D-DIC measurements. Theory and validation on virtual and experimental results[END_REF]. Fig. 7 shows the experimental curves. With respect to principal curvature distributions, note that equibiaxial membrane deformations near the membrane pole (R = 0) are associated with an approximately spherical geometry, i.e., κ m ≈ κ c . The small difference may be explained by the fact that the umbilical point may not lie exactly on the Z direction axis. Note that for all pressure levels, the meridional curvature κ m presents an inflection point representing changes from convex to concave curvature on the soft-hard interface (R = 0.27). •• • • •• • • •• • • •• • • •• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •• •• • • • • •• • • •• •••• • • • ••••••• • •• • • • (b) With regard to the stress plots in Figs. 7c and7d, the stress state can be assumed to be equibiaxial at the pole (R = 0). Both stresses, σ m and σ c , experience an increasingly upward turn for R 0.27 when pressure increases. 
Two inflection points can be observed around R = 0.27 and R = 0.36 in all directions for all pressure loads. Finite element simulations In the previous section, the bulge test was used to obtain stress and strain fields for a non-homogeneous material. Stresses were calculated without explicitly specifying a constitutive relation for the material. The aim of this section is to compare these results with a standard finite element analysis using the classical Mooney-Rivlin hyperelastic constitutive equation. Stress-softening and hysteresis are not considered; thus, only the first loading behavior was investigated. Hyperelastic fitting using the Mooney-Rivlin hyperelastic model Assuming an incompressible isotropic hyperelastic material behavior, the two-parameter Mooney-Rivlin model is expressed as W(Ī1, Ī2) = C 10 (Ī1 - 3) + C 01 (Ī2 - 3) (3) where C 10 and C 01 are material parameters that must be identified. Ī1 and Ī2 are the first and second strain invariants of the isochoric elastic response. Due to its mathematical simplicity as well as its prediction accuracy in the range of moderately large strains, the Mooney-Rivlin model has been widely employed in the description of the behavior of rubbery materials. It is known that different deformation modes are required to obtain the parameters that define the stress-strain relationship accurately. Uniaxial, planar and equibiaxial data acquired in Section 3 are simultaneously involved in a least-squares minimization in order to extract the sets of material parameters for each material phase. Table 2 summarizes the two sets of C 10 and C 01 parameters. Fig. 8 shows the stress-strain curves of the first loading experimental data and the Mooney-Rivlin model fitting for each deformation mode. The adopted fitting procedure allows a material model description that is valid for a general deformation state. As expected, the model shows a good agreement with uniaxial and planar tensile test data up to 100% of strain, i.e., λ = 2.0. Moreover, the Mooney-Rivlin model starts to fail to account for strains larger than λ = 1.3 for the equibiaxial tensile test, in particular for the hard phase. Nevertheless, as pointed out in [START_REF] Marckmann | Comparison of hyperelastic models for rubberlike materials[END_REF], there are very few hyperelastic constitutive models able to simultaneously reproduce such multi-dimensional data with a unique set of material parameters. Experimental and numerical comparison of bulge test results The non-homogeneous bulge test was simulated with an axisymmetric model using the Abaqus commercial finite element code. Continuum eight-node biquadratic hybrid fully integrated elements (CAX8H) were used. Based on the result of the mesh sensitivity study, the optimal global element size for the membrane mesh was h = 35 µm. At the soft-hard interface, a non-adaptive h-refinement was used to improve mesh quality, employing a mesh size gradient only in the R direction and resulting in an element edge size of about h/6 over the interface neighborhood. Over the membrane thickness, 30 Gauss integration points were used. Results of FEA using the Mooney-Rivlin model are superposed with the experimental fields (principal stretches and principal stresses) in Fig. 9. Two pressure levels were chosen to present the results: 6 and 25 kPa. All numerical predictions follow qualitatively the trends of the experimental data. It is possible to observe a discontinuity in the model response at R = 0.27 even though h-refinement was used in this zone.
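As a rough sketch of the simultaneous identification described above, the code below evaluates the standard incompressible Mooney-Rivlin nominal-stress expressions for the three homogeneous tests and fits C10 and C01 by linear least squares. The data used here are synthetic placeholders, and the use of numpy is our choice for the example, not the authors' actual pipeline.

```python
import numpy as np

# Incompressible Mooney-Rivlin nominal (first Piola-Kirchhoff) stresses for the
# three homogeneous tests used in the fit (standard textbook expressions).
def mr_basis(mode, lam):
    """Return the factors multiplying C10 and C01 for a given test mode."""
    lam = np.asarray(lam, dtype=float)
    if mode == "uniaxial":
        return 2 * (lam - lam**-2), 2 * (lam - lam**-2) / lam
    if mode == "planar":
        return 2 * (lam - lam**-3), 2 * (lam - lam**-3)
    if mode == "equibiaxial":
        return 2 * (lam - lam**-5), 2 * (lam - lam**-5) * lam**2
    raise ValueError(mode)

def fit_mooney_rivlin(datasets):
    """Least-squares fit of (C10, C01) on several (mode, stretch, stress) sets."""
    rows, rhs = [], []
    for mode, lam, stress in datasets:
        a, b = mr_basis(mode, lam)
        rows.append(np.column_stack([a, b]))
        rhs.append(np.asarray(stress, dtype=float))
    A, y = np.vstack(rows), np.concatenate(rhs)
    (c10, c01), *_ = np.linalg.lstsq(A, y, rcond=None)
    return c10, c01

# Placeholder data: replace with the measured first-loading curves.
lam = np.linspace(1.05, 2.0, 20)
a, b = mr_basis("uniaxial", lam)
fake = [("uniaxial", lam, 0.35 * a + 0.10 * b)]   # synthetic hard-phase-like data
print(fit_mooney_rivlin(fake))                    # recovers (0.35, 0.10)
```

Because the model is linear in C10 and C01, a single linear solve handles all deformation modes at once, which is why the three data sets can be stacked into one system.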
Considering the principal stress plots in Figs. 9c and 9d, numerical simulations do not correspond well with the experimental ones in either load case. This result can be related to the limitations of the Mooney-Rivlin model in fitting complex stress-strain states at large strain levels. Fig. 10a presents the FEA errors (e m , e c ) with respect to principal stresses over the bulge profile in both deformed configurations. Using a confidence interval of 95%, the mean errors are êm = 31%, êc = 30% for the lower pressure level, and êm = 35%, êc = 29% for the highest pressure level. Regardless of the soft-hard interface, stress discrepancies are independent of the deformation level. This fact is also observed in Fig. 10b, where the deviations of the stress ratio σ m /σ c are very close for both load levels. Modeling analysis The presented results show the limitations of the classical finite element method in tackling heterogeneous systems with a moderate modulus mismatch across the material interface undergoing large strains, with an incompressible non-linear hyperelastic material behavior. The results reveal that a more sophisticated representation of the soft-hard interface must be taken into account in the numerical modeling. For example, [START_REF] Srinivasan | Generalized finite element method for modeling nearly incompressible bimaterial hyperelastic solids[END_REF] proposed an extension of the generalized finite element method to tackle heterogeneous systems with non-linear hyperelastic materials. However, such a development lies outside the scope of this study. Independently of the numerical treatment, a more detailed knowledge of the influence of the material interface on the macroscopic mechanical behavior is necessary. For this, the Mooney-Rivlin model of Eq. 3 can be written in terms of the principal stretches as σ_m = 2 [ C_10 ( λ_m^2 - 1/(λ_m λ_c)^2 ) + C_01 ( (λ_m λ_c)^2 - 1/λ_m^2 ) ] (4) and σ_c = 2 [ C_10 ( λ_c^2 - 1/(λ_m λ_c)^2 ) + C_01 ( (λ_m λ_c)^2 - 1/λ_c^2 ) ] (5). Replacing the measured principal stretches and the previously identified parameters in Eqs. 4 and 5, Figs. 11a and 11b show a good agreement between the experimental and the Mooney-Rivlin stresses for both inflation states. Finally, keeping in mind the experimental spatial resolution of 15 µm, the phase transition is estimated to be about 3.15 mm, i.e., 17% of the total sample radius. Within the tested loading range, the size of the soft-to-hard transition can be assumed independent of the stress-strain level. Conclusions Results show the mechanical behavior of photosensitive silicone membranes with a variable set of mechanical properties within the same material. With a reversible in-plane stretchability up to 200%, the soft-to-hard transition was expressed by a factor 3.57 in the Young's modulus within a single continuous silicone membrane, combined with a mean stress ratio of about 2.5. The approach was tested using the bulge test, and the presented results using a bimaterial are distinct from previous investigations of the classic circular homogeneous membrane inflation problem. The mechanical response of the soft-hard interface was observed through inflections in the principal curvature fields along the meridional and circumferential directions. Analysis of the stress distribution throughout the meridional section of the membrane revealed low stress peaks at the soft-to-hard transition. The results demonstrate that under high strain levels no macroscopic damage was detected.
The local cross-linking control eliminates the interfaces between different materials, leading to a heterogeneous membrane with efficient stress transfer throughout the structure. The numerical investigation provided information on the respective contributions of each material phase to the effective behavior of the membrane under inflation. The presented results also show the limitations of the classical finite element method in tackling heterogeneous systems with a moderate modulus mismatch across the material interface undergoing large strains, with an incompressible non-linear hyperelastic material behavior. Using the experimental local stress-strain values, it was possible to characterize the macroscopic influence of the soft-to-hard interface with a spatial resolution of 15 µm. Based on these results, a more sophisticated numerical strategy can later be used to describe the soft-hard interface and hence the global behavior of the graded membrane. Further work to create, test and optimize more complex architectures is ongoing using the experimental approaches described in the present paper.

Figure 1: Nominal stress-strain curves resulting from cyclic loading-unloading tensile tests at λ̇ = 3.0 × 10⁻² s⁻¹ for the soft and hard phases.

Figure 2: Nominal stress-strain curves resulting from cyclic loading-unloading planar tension tests at λ̇ = 1.0 × 10⁻² s⁻¹ for the soft and hard phases.

Figure 3: Nominal stress-strain curves resulting from cyclic loading-unloading equibiaxial tests: (a) hard phase; (b) soft phase.

Figure 4: The photosensitive material sample and the bulge test configuration. (See online version for color figure.)

Figure 5: Bulge test setup using the 3D-DIC technique. (a) Experimental image superposed with the Green-Lagrange major principal strain field (meridional strain); (b) profiles of the inflated membrane composed of soft and hard phases for different pressure loads. (See online version for color figure.)

Figure 6: Strain distribution of the deformed foil vs. normalized radius of the circular membrane: (a) meridional direction λ_m; (b) circumferential direction λ_c.

Figure 7: Distributions of the experimental fields along the principal directions corresponding to three different inflation states: (a) meridional curvature κ_m; (b) circumferential curvature κ_c; (c) meridional Cauchy stress σ_m; (d) circumferential Cauchy stress σ_c.

Figure 8: Experimental data for the soft and hard phases and the fit using the Mooney-Rivlin (MR) hyperelastic model: (a) uniaxial, (b) planar and (c) equibiaxial tensile tests.

Figure 9: Principal stretches (λ_m, λ_c) and Cauchy stresses (σ_m, σ_c) confronted with the finite element analysis (FEA), corresponding to the 6 kPa and 25 kPa inflation states.

Figure 10: (a) FEA errors with respect to the principal stresses in both deformed configurations; (b) principal stress ratio (σ_m/σ_c) confronted with the finite element results (FEA).

Figure 11: Figs. 11a and 11b show a good agreement between the experimental (Exp) and Mooney-Rivlin (MR) stresses for both inflation states. In order to determine the macroscopic influence of the soft-to-hard transition, the parameters C₁₀ and C₀₁ were evaluated from Eqs. 4 and 5, using the measured principal stretches and the experimental stresses obtained from Eqs. 1 and 2 and combining the information from the different directions and load levels.
The same ratios C₁₀/C₀₁ of the soft and hard phases from the previous identification (Table 2) were kept. Thus, one obtains a description of the spatial distribution of these parameters, as presented in Fig. 12a. The soft-to-hard transition is almost symmetric with respect to the position R = 0.27, and the material parameter gradient extends over the interval R = [0.21, 0.38]. The Mooney-Rivlin (MR) stresses in Eqs. 4 and 5 were then recalculated using the functions C₁₀(R) and C₀₁(R) fitted to the experimental results with a sigmoid function. The resulting errors are shown in Fig. 12b.

Figure 12: (a) Mooney-Rivlin parameter gradient obtained from the experimental local stress-strain states. (b) Errors with respect to the principal stresses in both deformed configurations using the Mooney-Rivlin model with a material parameter gradient over the interval R = [0.21, 0.38].

Table 1: Comparison of the soft and hard phases for the three experimental loading cases.

Parameter                               Unit   Uniaxial   Plane shear   Equibiaxial
Hard phase elastic modulus E_H          MPa    2.50       3.50          4.90
Soft phase elastic modulus E_S          MPa    0.70       1.40          1.40
Elastic modulus ratio E_{H/S}           -      3.57       2.50          3.50
Hard stress at 10%, P_H^{10%}           MPa    0.24       0.31          0.45
Soft stress at 10%, P_S^{10%}           MPa    0.08       0.13          0.13
Hard stress at 100%, P_H^{100%}         MPa    1.43       1.67          1.93
Soft stress at 100%, P_S^{100%}         MPa    0.61       0.68          0.85
Mean stress ratio P̄                     -      2.44       2.44          2.90

4. Bulge test with soft-hard phase sample

Table 2: Fitted parameters (in MPa) of the Mooney-Rivlin constitutive equation for the soft and hard phases.

Parameters   Hard   Soft
C₁₀          0.35   0.18
C₀₁          0.10   0.01

Acknowledgment
The authors wish to acknowledge the financial support of the French ANR research program SAMBA: Silicone Architectured Membranes for Biomedical Applications (Blanc SIMI 9 2012). We thank Laurent Chazeau, Renaud Rinaldi and François Ganachaud for fruitful discussions.
Robin Lindsey (email: [email protected]), André de Palma (email: [email protected]), Hugo E. Silva (email: [email protected])

Equilibrium in a dynamic model of congestion with large and small users

Keywords: departure-time decisions, bottleneck model, congestion, schedule delay costs, large users, user heterogeneity, existence of Nash equilibrium
JEL classifications: C61, C62, D43, D62, R41

Individual users often control a significant share of total traffic flows. Examples include airlines, rail and maritime freight shippers, urban goods delivery companies and passenger transportation network companies. These users have an incentive to internalize the congestion delays their own vehicles impose on each other by adjusting the timing of their trips. Using the Vickrey bottleneck model, we derive the minimum degree of user heterogeneity required to support a pure strategy Nash equilibrium (PSNE) in departure times for the case of symmetric large users. We also develop some examples to identify under what conditions a PSNE exists. The examples illustrate how self-internalization of congestion by a large user can affect the nature of equilibrium and the travel costs that it and other users incur.

Introduction

Transportation congestion has been a growing problem for many years, and road traffic congestion is now a blight in most large cities worldwide. Couture et al. estimate that the deadweight loss from congestion is about US$30 billion per year in large US cities.1 Hymel shows that high levels of congestion dampen employment growth, and that congestion pricing could yield substantial returns in restoring growth. Congestion delays are also a problem at airports, on rail lines, at seaports and in the hinterland of major transportation hubs. Ball et al. estimate that in 2007 air transportation delays in the US imposed a cost of US$25 billion on passengers and airlines.

Research on congestion dates back to Pigou (1920). Yet most economic and engineering models of congestible transportation facilities still assume that users are small in the sense that each one controls a negligible fraction of total traffic (see, e.g., Melo). This is a realistic assumption for passenger trips in private vehicles. Yet large users are prevalent in all modes of transport. They include major airlines at their hub airports, railways, maritime freight shippers, urban goods delivery companies, large taxi fleets and postal services. In some cases large users account for an appreciable fraction of traffic.2 Furthermore, major employers such as government departments, large corporations, and transportation service providers can add substantially to traffic on certain roads at peak times.3 So can large shopping centres, hotels, and major sporting events.4

1 Methods of estimating the costs of congestion differ, and results vary widely. The Texas Transportation Institute estimated that in 2014, congestion in 471 urban areas of the US caused approximately 6.9 billion hours of travel delay and 3.1 billion gallons of extra fuel consumption, with an estimated total cost of US$160 billion (Schrank et al., 2015). It is unclear how institutional and technological innovations such as ridesharing, on-line shopping, electric vehicles, and automated vehicles will affect traffic volumes. The possibility that automated vehicles will increase congestion is raised in National Academies of Sciences, Engineering, and Medicine (2017) and The Economist (2018).
2 For example, the world market for shipping is relatively concentrated. According to Statista, as of December 31, 2017, the top five shipping operators accounted for 61.5% of the world liner fleet. The top ten accounted for 77.7%, and the top 15 for 85.5%. The top five port operators had a 29.9% global market share (Port Technology, 2014). The aviation industry is another example. The average market share of the largest firm in 59 major US airports during the period 2002-2012 was 42% (Choo). Similar shares exist in Europe.
3 For example, Ghosal et al. describe how the Kia Motors Manufacturing plant, a large automobile assembler in West Point, Georgia, affects inbound and outbound transportation flows on highway and rail networks, and at seaports.
4 Using data from US metropolitan areas with Major League Baseball (MLB) teams, Humphreys et al. estimate that attendance at MLB games increases average daily vehicle-miles traveled by about 6.9%, and traffic congestion by 2%.

Unlike small users, large users have an incentive to internalize the congestion delays
It is unclear how institutional and technological innovations such as ridesharing, on-line shopping, electric vehicles, and automated vehicles will affect traffic volumes. The possibility that automated vehicles will increase congestion is raised in National Academies of Sciences, Engineering, and Medicine (2017) and The Economist (2018). 2 For example, the world market for shipping is relatively concentrated. According to [START_REF] Statista | Leading ship operator's share of the world liner fleet as of december 31[END_REF], as of December 31, 2017, the top five shipping operators accounted for 61.5% of the world liner fleet. The top ten accounted for 77.7%, and the top 15 for 85.5%. The top five port operators had a 29.9% global market share (Port Technology, 2014). The aviation industry is another example. The average market share of the largest firm in 59 major US airports during the period 2002-2012 was 42% [START_REF] Choo | Factors affecting aeronautical charges at major us airports[END_REF]. Similar shares exist in Europe. 3 For example, [START_REF] Ghosal | Advanced manufacturing plant location and its effects on economic development, transportation network, and congestion[END_REF] describe how the Kia Motors Manufacturing plant, a large automobile assembler in West Point, Georgia, affects inbound and outbound transportation flows on highway and rail networks, and at seaports. 4 Using data from US metropolitan areas with Major League Baseball (MLB) teams, [START_REF] Humphreys | Professional sporting events and traffic: Evidence from us cities[END_REF] estimate that attendance at MLB games increases average daily vehicle-miles traveled by about 6.9%, and traffic congestion by 2%. their own vehicles impose on each other. This so-called "self-internalization" incentive can affect large users' decisions 5 and raises a number of interesting questions -some of which are discussed further in the conclusions. One is how much a large user gains from selfinternalization. Can it backfire and leave the large user worse off after other users respond? Second, do other users gain or lose when one or more large users self-internalize? Does it depend on the size of the large users and when they prefer to schedule their traffic? Are mergers between large users welfare-improving? What about unions of small users that create a large user? There is now a growing literature on large users and self-internalization -notably on large airlines and airport congestion. Nevertheless, this body of work is limited in two respects. First, as described in more detail below, most studies have used static models. Second, much of the theoretical literature has restricted attention to large users. In most settings, however, small users are also present. Automobile drivers and most other road users are small. Most airports serve not only scheduled commercial service, but also general aviation movements by recreational private aircraft and other non-scheduled users. Lowcost carriers with small market shares serve airports where large legacy airlines control much of the overall traffic. 6 We contribute to the literature in this paper by developing and analyzing a dynamic model of congestion at a transportation facility with both large users and small users. More specifically, we use the Vickrey bottleneck model to study how large users schedule departure times for their vehicle fleets when small users use the facility too. 
As we explain in the literature review below, to the best of our knowledge, we are the first to study trip-timing decisions in markets with a mix of large and small users. Several branches of literature have developed on large users of congestible facilities. 7 They include studies of route-choice decisions on road networks and flight scheduling at congested airports. There is also a literature directed to computer and telecommunications 5 For example, some seaports alleviate congestion by extending operating hours at truck gates, and using truck reservation systems at their container facilities (Weisbrod and Fitzroy, 2011). Cities and travel companies are also attempting to spread tourist traffic by making off-peak visits more attractive and staggering the arrivals of cruise ships [START_REF] Sheahan | Europe works to cope with overtourism[END_REF]. Airports, especially in Europe, restrict the number of landings and takeoffs during specific periods of time called slot windows (see [START_REF] Daniel | The untolled problems with airport slot constraints[END_REF] for a discussion of this practice). 6 Using data from Madrid and Barcelona, Fageda and Fernandez-Villadangos (2009) report that the market share of low-cost carriers is generally low (3-5 carriers with 3-18% of market share). Legacy carriers themselves sometimes operate only a few flights out of airports where another legacy carrier has a hub. For example, at Hartsfield-Jackson Atlanta International (ATL) American Airlines has a 3% market share while Delta's is 73% (Bureau of Transportation Statistics, 2017). 7 See [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] for a brief review. networks on atomic congestion games. However, most studies have adopted static models that disregard the timing decisions of users despite the fact that congestion delays tend to be highly concentrated at peak times (see, e.g., [START_REF] Naroditskiy | Maximizing social welfare in congestion games via redistribution[END_REF]. The relatively small body of work that does address the temporal dimension of congestion has taken three approaches to incorporate dynamics. One approach has used dynamic stochastic models designed specifically to describe airport congestion (see, e.g., [START_REF] Daniel | Congestion pricing and capacity of large hub airports: A bottleneck model with stochastic queues[END_REF]. A second approach, also directed at studying airport congestion, features deterministic congestion and a sequential decision-making structure in which an airline with market power acts as a Stackelberg leader and schedules its flights before other airlines in a competitive fringe (see [START_REF] Daniel | Distributional consequences of airport congestion pricing[END_REF][START_REF] Silva | Airlines' strategic interactions and airport pricing in a dynamic bottleneck model of congestion[END_REF]. As [START_REF] Daniel | The untolled problems with airport slot constraints[END_REF] discusses, the presence of slot constraints at airports makes the Stackelberg approach relevant. The slots are allocated twice a year with priority for the incumbent airlines; slots allocation for new entrants, which are modeled as followers in this approach, occur only after the incumbents have committed to a slot schedule, and normally come from new airport capacity. In these cases, adopting a sequential decision-making structure seems to be accurate. 
Nevertheless, at most US airports, the capacity is assigned in a first-come, first-served basis, which makes the simultaneous structure, and Nash as an equilibrium concept, more relevant. These two approaches lead to outcomes broadly consistent with those of static models. Two results stand out. First, self-internalization of congestion by large users tends to result in less concentration of traffic at peak times, and consequently lower total costs for users in aggregate. Second, the presence of small users limits the ability of large users to reduce congestion. This is because reductions in the amount of traffic scheduled by large users, either at peak times or overall, are partially offset by increases in traffic by small users. The Stackelberg equilibrium concept adopted in the second approach rests on the assumptions that the leader can schedule its traffic before other agents, and also commit itself to abide by its choices after other agents have made theirs. These assumptions are plausible in some institutional settings (e.g., Stackelberg leadership by legacy airlines at hub airports), but by no means in all settings. The third approach to incorporating trip-timing decisions, which we adopt, instead takes Nash equilibrium as the solution concept so that all users make decisions simultaneously. Our paper follows up on recent work by Verhoef and Silva (2017) and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] who focus on determining under what conditions a Pure Strategy Nash Equilibrium (PSNE) in departure-time decisions exists. These two studies employ different deterministic congestion models that are best suited to describe road traffic congestion. Verhoef and Silva (2017) use the flow congestion model developed by [START_REF] Henderson | Road congestion: a reconsideration of pricing theory[END_REF], and modified by [START_REF] Chu | Endogenous trip scheduling: the henderson approach reformulated and compared with the vickrey approach[END_REF]. In this model, vehicles travel at a constant speed throughout their trips with the speed determined by the density of vehicles prevailing when their trip ends. Verhoef and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] show that, if there are two or more large users and no small users, a PSNE always exists. Self-internalization of congestion by the large users results in less concentration of trips at peak times and, not surprisingly, higher efficiency compared to the equilibrium without large users. However, this result is tempered by two well-known drawbacks of the [START_REF] Lindsey | Congestion modelling[END_REF]. [START_REF] Henderson | Road congestion: a reconsideration of pricing theory[END_REF] originally assumed that vehicle speed is determined by the density of traffic encountered when a vehicle starts its trip. This formulation has the additional disadvantage that a vehicle departing when density is low may overtake a vehicle that departed earlier when density was higher. As [START_REF] Lindsey | Congestion modelling[END_REF] explain, overtaking has no behavioral basis if drivers and vehicles are identical, and it is physically impossible under heavily congested conditions. By contrast, in [START_REF] Chu | Endogenous trip scheduling: the henderson approach reformulated and compared with the vickrey approach[END_REF] reformulated model overtaking does not occur in equilibrium. 
9 A few experimental economics studies have tested the theoretical predictions of the bottleneck model; see [START_REF] Dixit | Understanding transportation systems through the lenses of experimental economics: A review[END_REF] for a review. The studies used a variant of the bottleneck model in which vehicles and departure times are both discrete. In all but one study, players controlled a single vehicle. The exception is [START_REF] Schneider | Against all odds: Nash equilibria in a road pricing experiment[END_REF] who ran two sets of experiments. In the first experiment each player controlled one vehicle, and in the second experiment each player controlled 10 vehicles which were referred to as trucks. Compared to the first experiment, the aggregate departure-time profile in the second experiment was further from the theoretical Nash equilibrium and closer to the system optimum. Schneider and Weimann conclude (p.151) that "players with 10 trucks internalize some of the congestion externality". In this paper we extend [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] by investigating the existence and nature of PSNE in the bottleneck model for a wider range of market structures and under more general assumptions about trip-timing preferences. Unlike both Verhoef and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF], we allow for the presence of small users as well as large users. As in the standard bottleneck model, small users each control a single vehicle and seek to minimize their individual trip cost. Each large user operates a vehicle fleet that comprises a positive fraction or measure of total traffic, and seeks to minimize the aggregate trip costs of its fleet. 10 Each vehicle has trip-timing preferences described by a trip-cost function C(t, a), where t denotes departure time and a denotes arrival time. Trip cost functions can differ for small and large users, and they can also differ for vehicles in a large user's fleet. Our analysis consists of several parts. After introducing the basic model and assumptions in Section 2, in Section 3 we use optimal control theory to derive a large user's optimal fleet departure schedule as a best response to the aggregate departure rate profile of other users. We show that the optimal response can be indeterminate, and the second-order condition for an interior solution is generally violated. Consequently, a candidate PSNE departure schedule may exist in which a large user cannot gain by rescheduling any single vehicle in its fleet, yet it can gain by rescheduling a positive measure of vehicles. These difficulties underlie the non-existence of a PSNE in [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF]. We then show in Section 4 that if vehicles in the large user's fleet have sufficiently diverse trip-timing preferences, a PSNE may exist in which some -or even all -of the large user's vehicles do queue. The fact that a PSNE exists given sufficient user heterogeneity parallels the existence of equilibrium in the Hotelling model of location choice given sufficient preference heterogeneity [START_REF] De Palma | The principle of minimum differentiation holds under sufficient heterogeneity[END_REF]. 
Next, in Section 5 we revisit the case of symmetric large users that Silva et al. (2017) consider, and derive the minimum degree of preference heterogeneity required to support a PSNE. We show that relative to the PSNE in which large users disregard self-imposed congestion, self-internalization results in substantial efficiency gains from reduced queuing delays even when the number of users is fairly large. Then, in Section 6 we modify the example of symmetric large users by assuming that part of the traffic is controlled by a single large user, and the rest by a continuum of small users. We derive conditions for existence of a PSNE and show how the order in which users depart depends on the flexibility implied by the trip-timing preferences of large and small users. We also show that self-internalization of congestion can have no effect on the PSNE at all.

Unfortunately, they do not provide information on how the players distributed their vehicles over departure-time slots. Thus, it is not possible to compare their results with the predictions of our model as far as when large users choose to depart.
10 In game theory, small agents or players are sometimes called "non-atomic" and large agents "atomic". In economics, the corresponding terms are "atomistic" and "non-atomistic". To avoid confusion, we do not use these terms. However, we do refer to the PSNE in which large users do not internalize their self-imposed congestion externalities as an "atomistic" PSNE.

The model

The model is a variant of the classical bottleneck model.11 A fixed measure of users travels from a common origin to a common destination through a single bottleneck with capacity s. If the aggregate departure rate exceeds s, a queue develops. Let R(t) denote cumulative departures by time t, Q(t) the number of vehicles queuing at time t, and t̃ the most recent time at which there was no queue. A user departing at time t incurs a queuing delay q(t) = Q(t)/s, or

q(t) = t̃ − t + s⁻¹ [R(t) − R(t̃)].   (1)

A user departing at time t arrives at time a = t + q(t). The cost of a trip is described by a function C(t, a, k), where k denotes a user's index or type.12 Function C(t, a, k) is assumed to have the following properties:

Assumption 1: C(t, a, k) is differentiable almost everywhere with derivatives C_t < 0, C_a > 0, C_tt ≥ 0, C_tt + C_aa > 0, C_ta = C_at = 0, C_tk ≤ 0, C_ak ≤ 0, C_tkk = 0, and C_akk = 0.

The assumption C_t < 0 implies that a user prefers time spent at the origin to time spent in transit. Similarly, the assumption C_a > 0 implies that a user prefers time spent at the destination to time spent in transit. User types can be defined in various ways. For much of the analysis, type is assumed to denote a user's preferred time to travel if a trip could be made instantaneously (i.e., with a = t). For type k, the preferred time is t*_k = Arg min_t C(t, t, k). Given Assumption 1, t*_k is unique. Types are ordered so that if k > j, t*_k ≥ t*_j.

11 The bottleneck model is reviewed in Arnott et al. (1998), Small and Verhoef (2007), de Palma and Fosgerau (2011), and Small (2015). The exposition in this section draws heavily from Silva et al. (2017). Literal excerpts are not marked as such and are taken to be acknowledged by this footnote.
12 As explained in the Appendix, the trip cost function can be derived from functions specifying the flow of utility or payoff received at the origin, at the destination, and while in transit.
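Returning to the queueing dynamics in Eq. (1), the sketch below is a minimal discrete-time illustration and not part of the model itself: the departure-rate profile, the capacity s and the time grid are arbitrary, hypothetical values. It propagates a point queue and recovers the delay q(t) and arrival time a = t + q(t) that appear throughout the analysis.

```python
import numpy as np

# Hypothetical values: bottleneck capacity and a departure-rate profile
# that exceeds capacity in the middle of the travel period.
s, dt = 1.0, 0.01
t = np.arange(0.0, 10.0, dt)
r = np.where((t > 2.0) & (t < 6.0), 1.5, 0.5)   # aggregate departure rate

R = np.cumsum(r) * dt            # cumulative departures R(t)
Q = np.zeros_like(t)             # vehicles in the queue
for i in range(1, len(t)):
    # point-queue update: inflow r(t)dt, outflow at most s*dt
    Q[i] = max(Q[i - 1] + (r[i] - s) * dt, 0.0)

q = Q / s                        # queuing delay of a user departing at t
a = t + q                        # arrival time, a = t + q(t)
```

While the bottleneck remains busy, the loop reproduces Eq. (1) up to discretization error: q(t) equals t̃ − t + [R(t) − R(t̃)]/s, with t̃ the most recent time at which the queue was empty.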
As explained in the Appendix, Assumption 1 is satisfied for various specifications of the cost function, including the piecewise linear form introduced by Vickrey (1969):

C(t, a, k) = α_k (a − t) + β_k (t*_k − a)   for a < t*_k,
C(t, a, k) = α_k (a − t) + γ_k (a − t*_k)   for a > t*_k.   (2)

In (2), parameter α_k is the unit cost of travel time, β_k < α_k is the unit cost of arriving early, and γ_k is the unit cost of arriving late. The first term in each branch of (2) denotes travel time costs, and the second term denotes schedule delay costs. We refer to this specification of costs as "step preferences".13 Because step preferences have a kink at t*_k, the derivative C_a is discontinuous at a = t*_k. This turns out to affect some of the results in this paper, and makes step preferences an exception to some of the propositions. It is therefore useful to know whether step preferences are reasonably descriptive of reality. Most studies that have used the bottleneck model have adopted step preferences, but this may be driven in part by analytical tractability and convention. Empirical evidence on the shape of the cost function is varied. Small (1982) found that step preferences describe morning commuter behaviour fairly well, but he did find evidence of discrete penalties for arriving late beyond a "margin of safety." Nonconvexities in schedule delay costs have been documented (e.g., Matsumoto, 1988), and there is some empirical evidence that the marginal cost of arriving early can exceed the marginal cost of travel time (Abkowitz, 1981a,b; Hendrickson and Plank, 1984; Tseng and Verhoef, 2008), which violates the assumption β_k < α_k.

The paper features examples with step preferences where the results depend on the relative magnitudes of parameters α, β, and γ. Estimates in the literature differ, but most studies of automobile trips find that β < α < γ. Small (1982, Table 2, Model 1) estimates ratios of β:α:γ = 1:1.64:3.9. These ratios are representative of average estimates in later studies.14 For benchmark values we adopt β:α:γ = 1:2:4.

13 These preferences have also been called "α–β–γ" preferences.
14 Estimates of the ratio γ/β vary widely. It is of the order of 8 in Geneva (Switzerland), and 4 in Brussels (Belgium), where tolerance for late arrival is much larger (see de Palma and co-authors for Geneva, and Khattak and de Palma for Brussels). Tseng et al. (2005) obtain a ratio of 3.11 for the Netherlands. Peer et al. show that estimates derived from travel choices made in the short run can differ significantly from estimates derived from long-run choices when travelers have more flexibility to adjust their schedules. Most studies of trip-timing preferences have considered passenger trips. Many large users transport freight rather than people. Trip-timing preferences for freight transport can be governed by the shipper, the receiver, the transportation service provider, or some combination of agents. There is little empirical evidence for freight transportation on the relative values of α, β, and γ. The values are likely to depend on the type of commodity being transported, the importance of reliability in the supply chain, and other factors. Thus, it is wise to allow for a wide range of possible parameter values.
In the standard bottleneck model, each user controls a single vehicle of measure zero and decides when it departs. A Pure Strategy Nash Equilibrium (PSNE) is a set of departure times for all users such that no user can benefit (i.e., reduce trip cost) by unilaterally changing departure time while taking other users' departure times as given. For brevity, the equilibrium will be called an "atomistic PSNE". If small users are homogeneous (i.e., they all have the same type), then in a PSNE they depart during periods of queuing when the cost of a trip is constant. Their departure rate will be called their atomistic equilibrium departure rate, or "atomistic rate" for short. The atomistic rate for type k is derived from the condition that C(t, a, k) is constant. Using subscripts to denote derivatives, this implies

C_t(t, a, k) + C_a(t, a, k) (1 + dq(t)/dt) = 0.

Given (1), the atomistic rate is

r̂(t, a, k) = − [C_t(t, a, k) / C_a(t, a, k)] s.   (3)

Since C_t < 0 and C_a > 0, r̂(t, a, k) > 0. Using Assumption 1, it is straightforward to establish the following properties of r̂(t, a, k):15

∂r̂(t, a, k)/∂k ≥ 0,   ∂²r̂(t, a, k)/∂k² ≥ 0,   Sgn[∂r̂(t, a, k)/∂a] = −C_aa.   (4)

For given values of t and a, the atomistic rate increases with a user's type, and at an increasing rate. In addition, the atomistic rate is increasing with arrival time if C_aa < 0, and decreasing if C_aa > 0. With step preferences, C_aa = 0 except at t*_k, and the atomistic rate is:

r̂(t, a, k) = [α_k / (α_k − β_k)] s   for a < t*_k,
r̂(t, a, k) = [α_k / (α_k + γ_k)] s   for a > t*_k.   (5)

15 See the Appendix.
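As an illustration of Eqs. (2)–(5), the following sketch constructs the atomistic PSNE departure profile for homogeneous small users with step preferences and checks numerically that the trip cost is constant over the departure period. It is only a check under stated assumptions, not part of the formal analysis: the values of N, s and t* are hypothetical, and the endpoints t_q and t_e are constructed using the standard property that the first and last users face no queue and incur equal costs.

```python
import numpy as np

# Benchmark unit costs (beta:alpha:gamma = 1:2:4) and hypothetical N, s.
alpha, beta, gamma = 2.0, 1.0, 4.0
s, N, t_star = 1.0, 100.0, 0.0

# The bottleneck is busy from t_q to t_e = t_q + N/s; the first and last users
# face no queue and must incur equal schedule delay costs.
t_q = t_star - (gamma / (beta + gamma)) * N / s
t_e = t_star + (beta / (beta + gamma)) * N / s

# Atomistic departure rates from Eq. (5).
r_early = alpha / (alpha - beta) * s     # while arrivals are early
r_late = alpha / (alpha + gamma) * s     # while arrivals are late

# Departure time of the user who arrives exactly on time.
t_switch = ((alpha - beta) * t_star + beta * t_q) / alpha

def queue(t):
    """Queuing delay q(t) implied by the piecewise-constant departure profile."""
    if t <= t_switch:
        return (r_early / s - 1.0) * (t - t_q)
    q_peak = (r_early / s - 1.0) * (t_switch - t_q)
    return max(q_peak + (r_late / s - 1.0) * (t - t_switch), 0.0)

def trip_cost(t):
    a = t + queue(t)
    return alpha * queue(t) + beta * max(t_star - a, 0.0) + gamma * max(a - t_star, 0.0)

costs = [trip_cost(t) for t in np.linspace(t_q, t_e, 201)]
print(min(costs), max(costs))   # both equal beta*gamma/(beta+gamma) * N/s = 80.0
```

The constancy of the cost over [t_q, t_e] is exactly the defining property of the atomistic rate in Eq. (3).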
Silva et al. (2017) consider a variant of the standard bottleneck model in which users are "large". A large user controls a vehicle fleet of positive measure, and internalizes the congestion costs its vehicles impose on each other. A PSNE entails a departure schedule for each user such that no user can reduce the total trip costs of its fleet by unilaterally changing its departure schedule while taking other users' departure schedules as given. We will call this equilibrium the "internalized PSNE". Silva et al. (2017) focus on the case of two large users with step preferences. In the next section we derive the departure schedule of a large user with general trip-timing preferences when other large users and/or small users may be departing too.

Fleet departure scheduling and equilibrium conditions

Optimal departure schedule for a large user

This section uses optimal control theory to derive and characterize the optimal departure schedule of a large user with a fleet of vehicles. Call the large user "user A", and let N_A be the measure of vehicles in its fleet. Vehicle k has a cost function C^A(t, a, k). Vehicles are indexed in order of increasing preferred arrival times so that t*_k ≥ t*_j if k > j. It is assumed that, regardless of the queuing profile, it is optimal for user A to schedule vehicles in order of increasing k. The departure schedule for user A can then be written r_A(t) with the argument k suppressed. If R_A(t) denotes cumulative departures of A, vehicle k = R_A(t) departs at time t. User A chooses r_A(t) to minimize the total trip costs of its fleet while taking as given the aggregate departure rate of other users, r_{−A}(t). Trips are assumed to be splittable: they can be scheduled at disjoint times (e.g., some vehicles can travel early in the morning while others travel at midday). Let t_As and t_Ae denote the first and last departure times chosen by user A. User A's optimization problem can be stated as:

Min_{t_As, t_Ae, r_A(t)} ∫_{t_As}^{t_Ae} r_A(t) C^A(t, t + q(t), R_A(t)) dt,   (6)

subject to the equations of motion

dq(t)/dt⁺ = s⁻¹ (r_{−A}(t) + r_A(t)) − 1   if q(t) > 0 or r_{−A}(t) + r_A(t) > s, and 0 otherwise   (7)

(costate variable λ(t) ≥ 0), and

dR_A(t)/dt = r_A(t)   (costate variable µ(t)),   (8)

and the following constraints:16

r_A(t) ≥ 0   (multiplier ξ(t) ≥ 0),   (9a)
R_A(t_As) = 0,   R_A(t_Ae) = N_A,   (9b)
q(t_As) = q_{−A}(t_As)   (multiplier φ),   (9c)
t_As, t_Ae chosen freely.   (9d)

Costate variable λ(t) for Eq. (7) measures the shadow cost to user A of queuing time. Eq. (8) governs how many vehicles in user A's fleet have left the origin. Costate variable µ(t) measures the shadow cost of increasing the number of vehicles in the fleet that have started their trips. Condition (9a) stipulates that the departure rate cannot be negative. Condition (9b) specifies initial and terminal values for cumulative departures. Condition (9c) describes how queuing time is evolving when departures begin. Finally, (9d) indicates that the choice of departure period is unconstrained.

The Hamiltonian for the optimization problem is

H(t) = r_A(t) C^A(t, t + q(t), R_A(t)) + µ(t) dR_A(t)/dt + λ(t) dq(t)/dt⁺,   (10)

and the Lagrangian is

L(t) = H(t) + r_A(t) ξ(t).   (11)

Costate variable λ(t) for queuing time evolves according to the equation of motion

dλ(t)/dt = −∂H/∂q = −r_A(t) C^A_a(t, t + q(t), R_A(t)) ≤ 0.   (12)

Variable λ(t) decreases as successive vehicles in the fleet depart because fewer vehicles remain that can be delayed by queuing. Costate variable µ(t) for cumulative departures evolves according to the equation of motion

dµ(t)/dt = −∂H/∂R_A = −r_A(t) C^A_k(t, t + q(t), R_A(t)) ≥ 0.   (13)

If vehicles in the fleet are homogeneous, µ is independent of time. With t_Ae chosen freely, the transversality conditions at t_Ae are:

λ(t_Ae) = 0,   (14)
H(t_Ae) = 0.   (15)

According to condition (14), the shadow cost of queuing time drops to zero when the last vehicle departs. Condition (15) dictates that the net flow of cost is zero when the last vehicle departs. Substituting (14) into (10), and applying (15), yields

µ(t_Ae) = −C^A(t_Ae, t_Ae + q(t_Ae), N_A).   (16)

Condition (16) states that the benefit from dispatching the last vehicle in the fleet is the cost of its trip that has now been incurred, and is no longer a pending liability. With t_As chosen freely, a transversality condition also applies at t_As. Following Theorem 7.8.1 in Léonard and Long (1992), the transversality condition is:

H(t_As) − φ [dq(t)/dt]|_{t=t_As} = 0,   (17)

where φ is a multiplier on the constraint (9c). By continuity, φ = λ(t_As). Using (10) and (8), condition (17) reduces to

r_A(t_As) [C^A(t_As, t_As + q(t_As), 0) + µ(t_As)] = 0.   (18)

It remains to determine the optimal path of r_A(t). The optimality conditions governing r_A(t) depend on whether or not there is a queue.
Attention is limited here to the case with a queue.17 If q(t) > 0, the optimal departure rate is governed by the conditions

∂L/∂r_A(t) = C^A(t, t + q(t), R_A(t)) + ξ(t) + µ(t) + λ(t)/s = 0,   (19)
ξ(t) r_A(t) = 0.

If r_A(t) is positive and finite during an open time interval containing t, then ξ(t) = 0 and (19) can be differentiated with respect to t:

d/dt [∂L/∂r_A(t)] = C^A_t(t, t + q(t), R_A(t)) + C^A_a(t, t + q(t), R_A(t)) (1 + dq(t)/dt⁺) + C^A_k(t, t + q(t), R_A(t)) r_A(t) + dµ(t)/dt + (1/s) dλ(t)/dt = 0.

Using Eqs. (7) and (12), this condition simplifies to

C^A_t(t, t + q(t), R_A(t)) + C^A_a(t, t + q(t), R_A(t)) r_{−A}(t)/s = 0.   (20)

The left-hand side of (20) depends on the aggregate departure rate of other users, r_{−A}(t), but not on r_A(t) itself. In general, the derivatives C^A_t(t, t + q(t), R_A(t)) and C^A_a(t, t + q(t), R_A(t)) depend on the value of q(t), and hence on the value of R(t), but not directly on r_A(t). Condition (20) will therefore not, in general, be satisfied regardless of user A's choice of r_A(t). This implies that the optimal departure rate may follow a bang-bang solution between zero flow and a mass departure.18 This is confirmed by inspecting the Hessian matrix of the Hamiltonian with respect to (r_A(t), q(t), R_A(t)):

⎡ 0         C^A_a           C^A_k        ⎤
⎢ C^A_a     r_A(t) C^A_aa   r_A(t) C^A_ak ⎥
⎣ C^A_k     r_A(t) C^A_ak   r_A(t) C^A_kk ⎦,

where all derivatives are evaluated at (t, t + q(t), R_A(t)). Since the Hessian is not positive definite, the second-order sufficient conditions for a local minimum are not satisfied. As we will show, if users are homogeneous the necessary condition (20) cannot describe the optimal schedule unless C^A_aa = 0.

In summary, user A will not, in general, depart at a positive and finite rate when a queue exists. To understand why, consider condition (20). Given C^A_t < 0 and C^A_a > 0, if r_{−A}(t) is "small" the left-hand side of (20) is negative. The net cost of a trip is decreasing over time, and user A is better off scheduling the next vehicle in its fleet later. Conversely, if r_{−A}(t) is "large", the left-hand side of (20) is positive. Trip cost is increasing, and user A should dispatch a mass of vehicles immediately if it has not already done so. In either case, the optimal departure rate is not positive and finite.

In certain cases, described in the next section, condition (20) will be satisfied. The condition can then be written as a formula for the departure rate of other users:

r_{−A}(t) = − [C^A_t(t, t + q(t), R_A(t)) / C^A_a(t, t + q(t), R_A(t))] s ≡ r̂_A(t, t + q(t), R_A(t)).   (21)

Condition (21) has the same functional form as Eq. (3) for the atomistic rate of small users. Thus, with step preferences, the right-hand side exceeds s for early arrival and is less than s for late arrival. Moreover, the condition depends only on the aggregate departure rate of other users, and not on their composition (e.g., whether the other users who are departing are large or small). However, condition (21) is only necessary, not sufficient, to have r_A(t) > 0 because the second-order conditions are not satisfied. This leads to:

Lemma 1. Assume that a queue exists at time t.
A large user will not depart at a positive and finite rate at time t unless the aggregate departure rate of other users equals the large user's atomistic rate given in Eq. ( 21). Lemma 1 requires qualification in the case of step preferences because the atomistic rate is discontinuous at the preferred arrival time. If vehicles in a large user's fleet differ sufficiently in their individual t * k , it is possible to have a PSNE in which the fleet departs at a positive and finite rate with each vehicle arriving exactly on time. The aggregate departure rate of other users falls short of the atomistic rate of each vehicle in the fleet just before it departs, and exceeds it just after it departs. This is illustrated using an example in Section 6. Equilibrium conditions with large users We now explore the implications of Lemma 1 for the existence of a PSNE in which a large user departs when there is a queue and the atomistic rates of all users are continuous. Conditions for a PSNE depend on whether or not small users are present, and the two cases are considered separately below. Multiple large users and no small users Suppose there are m ≥ 2 large users and no small users. User i has an atomistic rate ri (t, t + q (t) , R i (t)). For brevity, we write this as ri (t) with arrival time and the index k for vehicles both suppressed. Suppose that a queue exists at time t, and user i departs at rate r i (t) > 0, i = 1...m. 19 Necessary conditions for a PSNE to exist are r -i (t) = ri (t) , i = 1...m. ( 22 ) This system of m equations has a solution r i (t) = 1 m -1 j =i rj (t) - m -2 m -1 ri (t) = 1 m -1 j rj (t) -ri (t) , i = 1...m. ( 23 ) With m = 2, the solution is r 1 (t) = r2 (t), and r 2 (t) = r1 (t). With m > 2, the solution is feasible only if all departure rates are nonnegative. A necessary and sufficient condition for this to hold at time t is M ax i ri (t) ≤ 1 m -2 j =i rj (t) . (24) Condition ( 24) is satisfied if large users have sufficiently similar atomistic rates. Multiple large users and small users Assume now that, in addition to m ≥ 1 large users, there is a group of homogeneous small users comprising a positive measure of total traffic with an atomistic rate ro (t). Suppose that large user i departs at rate r i (t) > 0, i = 1...m, and small users depart at an aggregate rate r o (t) > 0. If a queue exists at time t, necessary conditions for a PSNE are r -i (t) = ri (t) , i = 1...m, ( 25 ) j r j (t) + r o (t) = ro (t) . ( 26 ) The solution to this system of m + 1 equations is r i (t) = ro (t) -ri (t) , i = 1...m, (27) r o (t) = j rj (t) -(m -1) ro (t) . ( 28 ) The solution is feasible only if all departure rates are nonnegative. With m = 1, the necessary and sufficient condition is r1 (t) < r0 (t). With m > 1, necessary and sufficient 19 If a user does not depart at time t, it can be omitted from the set of m "active" users at t. conditions for nonnegativity are 1 m -1 j rj (t) > ro (t) , ( 29) ri (t) < ro (t) , i = 1...m. ( 30 ) Condition ( 30) requires that all large users have lower atomistic rates than the small users. However, condition [START_REF] Daniel | Distributional consequences of airport congestion pricing[END_REF] dictates that the average atomistic rate for large users be close enough to the atomistic rate of small users. Together, ( 29) and ( 30) impose relatively tight bounds on the ri (t) . Existence of PSNE with queuing by a large user Silva et al. ( 2017) consider two identical large users with homogeneous vehicle fleets and step preferences. 
They show that a PSNE with queuing does not exist. In addition, they show that if γ > α, a PSNE without queuing does not exist either so that no PSNE exists. In this section we build on their results in two directions. First, we prove that if a large user has a homogeneous vehicle fleet, and C A aa = 0 at any time when the large user's vehicles arrive, a PSNE in which the large user queues does not exist for any market structure. Second, we show that if a large user has a heterogeneous vehicle fleet, and the derivative C A ak is sufficiently large in magnitude, a PSNE in which the large user queues is possible. We illustrate the second result in Section 5. Consider a large user, "user A", and a candidate PSNE in which queuing time is q (t) > 0 at time t. (A bar denotes quantities in the candidate PSNE.) User A never departs alone when there is a queue because it can reduce its fleet costs by postponing departures. Thus, if rA (t) is positive and finite, other users must also be departing. The aggregate departure rate of other users must equal user A's atomistic rate as per Eq. ( 22) or (25): r-A (t) = rA t, t + q (t) , RA (t) . In addition, user A must depart at a rate rA (t) consistent with equilibrium for other users as per Eq. ( 23), or Eqs. ( 27) and (28). Figure 1 Cumulative departures of user A, RA (t) = R (t) -R-A (t), are measured by the distance between the two curves. Suppose that user A deviates from the candidate PSNE during the interval (t A , t B ) by dispatching its vehicles slightly later so that section ADB of R (t) shifts rightwards to R (t) ∆C A (k) = C A (t E , t E + q (t E ) , k) -C A (t D , t D + q (t D ) , k) , where q (t D ) is queuing time at t D with the candidate equilibrium departure schedule R (t), and q (t E ) is queuing time at t E with the deviated schedule R (t). The path from point D to point E can be traversed along the dashed blue curve running parallel to R-A (t) between points y and z. Let q (t) denote queuing time along this path. The change in cost can then be written ∆C A (k) = t E t=t D C A t (t, t + q (t) , k) + C A a (t, t + q (t) , k) 1 + dq (t) dt dt. = t E t=t D C A t (t, t + q (t) , k) + C A a (t, t + q (t) , k) r A t, t + q (t) , RA (t) s dt = 1 s t E t=t D C A a (t, t + q (t) , k) r A t, t + q (t) , RA (t) -r A (t, t + q (t) , k) dt = 1 s t E t=t D C A a (t, t + q (t) , k)    r A t, t + q (t) , RA (t) -r A (t, t + q (t) , k) -( r A (t, t + q (t) , k) -r A (t, t + q (t) , k))    dt. ( 31 ) The sign of this expression depends on how r A varies with arrival time and vehicle index. We begin by showing that, if vehicles are homogeneous, ( 31) is negative so that ∆C A (k) < 0 and the candidate is not a PSNE. Homogeneous vehicle fleets If user A has a homogeneous fleet, the first line in braces in (31) is zero. Given C A aa > 0 and q (t) < q (t) for t ∈ (t D , t E ), rA (t, t + q (t) , k) > rA (t, t + q (t) , k) and the second line in braces is negative. Hence ∆C A (k) < 0, and rescheduling the vehicle from D to E reduces its trip cost. Since point D is representative of all points between A and B, all the rescheduled vehicles except those at the endpoints, A and B, experience a reduction in costs. User A therefore gains from the deviation, and the candidate schedule is not a PSNE. In the Appendix we show that if C A aa < 0, user A can benefit by accelerating departures of its fleet. Deviation is therefore beneficial both when C A aa > 0 and when C A aa < 0. This result is formalized in Lemma 2. 
Consider large user A with a homogeneous vehicle fleet. If a queue exists at time t, and C A aa (t, t + q(t)) = 0, user A will not depart at a positive and finite rate at time t. Lemma 2 shows that although the candidate PSNE is robust to deviations in the departure time of a single vehicle, it is not robust to deviations by a positive measure of the fleet. If C A aa > 0, the departure rate of other users must decrease over time in order for user A to maintain a positive and finite departure rate. By delaying departures, user A enables vehicles in its fleet to benefit from shorter queuing delays. Conversely, if C A aa < 0, the departure rate of other users must increase over time in a PSNE, and user A can benefit by accelerating departures of its fleet. Lemma 2 contrasts sharply with the results of Verhoef and Silva (2017) who show that, given a set of large users with homogeneous vehicle fleets, a PSNE always exists in the Henderson-Chu model. As noted in the introduction, in the Henderson-Chu model vehicles that arrive (or depart) at different times do not interact with other. In particular, a cohort of vehicles departing at time t is unaffected by the number or density of vehicles that departed before t. Thus, if a large user increases or decreases the departure rate of its fleet at time t, it does not affect the costs incurred by other vehicles in the fleet that are scheduled after t. Equilibrium is determined on a point-by-point basis, and there is no state variable analogous to the queue in the bottleneck model that creates intertemporal dependence in costs. Heterogeneous vehicle fleets Suppose now that user A has a heterogeneous fleet. By (4), ∂ rA (t, a, k) /∂k ≥ 0 so that the first line in braces in (31) is positive. Expression ( 31) is then positive if the first line outweighs the second line. We show that this is indeed the case under plausible assumptions. Towards this, we introduce the following two-part assumption: Assumption 2: (i) The trip cost function depends only on the difference between actual arrival time and desired arrival time, and thus can be written C A (t, a, k) = C A (t, a -t * k ). (ii) t * k is distributed ) ≤ s ∀ t * k ∈ [t * s , t * e ] ), a PSNE in which user A queues may exist. Theorem 1 identifies necessary conditions such that a large user may queue in a PSNE. In light of Lemma 2 the key requirement is evidently sufficient heterogeneity in the triptiming preferences of vehicles in the large user's fleet. Condition f (t * k ) ≤ s stipulates that the desired arrival rate of vehicles in the fleet never exceeds bottleneck capacity. Put another way, if user A were the only user of the bottleneck, it could schedule its fleet so that every vehicle arrived precisely on time without queuing delay. The assumption f (t * k ) ≤ s is plausible for road transport. Freight shippers such as Fedex or UPS operate large vehicle fleets out of airports and central warehouses, and they can make hundreds of daily trips on highways and connecting roads in an urban area. Nevertheless, deliveries are typically made to geographically dispersed customers throughout the day so that the fleet rarely comprises more than a small fraction of total traffic on a link at any given time. Thus, for any t * k , f (t * k ) is likely to be only a modest fraction of s. In concluding this section it should be emphasized that Theorem 1 only states that a PSNE in which a large user queues may exist. A large user may prefer to avoid queuing by traveling at off-peak times. 
To determine whether this is the case, it is necessary to consider the trip-timing preferences of all users. We do so in Section 5 for the case of large users studied by [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF]. Section 6 examines a variant with both large users and small users. Existence of PSNE and self-internalization: multiple large users In this section we analyze the existence of PSNE with m ≥ 2 symmetric large users. We begin with m = 2: the case considered by [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF]. Consider two symmetric large users, A and B, that each controls N/2 vehicles with step preferences. Such a market setting might arise with two airlines that operate all (or most of) the flights at a congested airport. This section revisits Proposition 1 in [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] which states that a PSNE does not exist with homogeneous vehicles when γ > α. Their proof entails showing that with γ > α, a PSNE without queuing does not exist. The proof that a PSNE with queuing does not exist either follows the general reasoning used to prove Lemma 2 above. Here we relax the assumption that vehicles are homogeneous, and suppose that in each vehicle fleet, ] and height N . The two users schedule vehicles at the same rate. During the initial interval (t s , t q ), both users depart at an aggregate rate of s without creating a queue. t * k is uniformly distributed with a density f (t * k ) = N/ (2∆) Queuing begins at time t q , and ends at t e when the last vehicle in each fleet departs. Queuing time reaches a maximum at t for a vehicle arriving at t * = β β+γ t * s + γ β+γ t * e . Total departures after time t q are shown by the piecewise linear curve ALC. Cumulative departures by user B starting at t q are given by the piecewise linear curve AP E, and cumulative departures by user A are measured by the distance between AP E and ALC. If a PSNE exists, total costs in the internalized PSNE, T C i , are lower than total costs in the atomistic PSNE, T C n . As shown in the Appendix, the total cost saving from internalization with m users is T C n -T C i = (m -1) β (α + γ) + mαγ 2m (m -1) βγ + 2mαγ Ψ • T C nH , where T C nH = βγ β+γ N 2 s denotes total costs in the atomistic PSNE with homogeneous vehicles. The composite parameter Ψ depends on parameters α, β, and γ only through the ratios β/α and γ/α. Given the benchmark ratios of β:α:γ = 1:2:4, Ψ = 7m-3 4m(1+m) , which varies with m as shown in Table 1. With two users (m = 2) the saving is nearly as great as with a single user. Even with 10 users the savings is over 15 percent of the atomistic costs T C nH . These results are similar to those obtained by Verhoef and Silva (2017) with the Henderson-Chu model of congestion and a single desired arrival time, as they also find significant savings from selfinternalization. Moreover, with heterogeneity in t * , total costs in the atomistic PSNE are less than T C nH so that the proportional cost saving from internalization is actually larger than shown in Table 1. The example shows that self-internalization of congestion can boost efficiency appreciably even if no user controls a large fraction of total traffic. 
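The magnitude of these savings can be checked directly. The snippet below is a simple verification and not drawn from the paper's own computations: it evaluates the proportional saving (TC_n − TC_i)/TC_nH — the composite parameter Ψ — from the expression above, and confirms that for the benchmark ratios β:α:γ = 1:2:4 it reduces to (7m − 3)/(4m(1 + m)), reproducing the pattern reported in Table 1.

```python
from fractions import Fraction

def psi(m, alpha, beta, gamma):
    """Proportional cost saving (TC_n - TC_i) / TC_nH with m symmetric large users."""
    num = (m - 1) * beta * (alpha + gamma) + m * alpha * gamma
    den = 2 * m * (m - 1) * beta * gamma + 2 * m * alpha * gamma
    return Fraction(num, den)

# Benchmark ratios beta:alpha:gamma = 1:2:4.
for m in (1, 2, 3, 5, 10):
    value = psi(m, alpha=2, beta=1, gamma=4)
    closed_form = Fraction(7 * m - 3, 4 * m * (m + 1))
    assert value == closed_form
    print(f"m = {m:2d}: saving = {float(value):.3f} of TC_nH")
# m = 2 gives about 0.458 (close to the 0.5 obtained with a single user),
# and m = 10 still gives about 0.152, i.e. over 15 percent of TC_nH.
```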
This is consistent with [START_REF] Brueckner | Airport congestion when carriers have market power[END_REF] who showed, using a Cournot oligopoly model, that internalization of self-imposed delays leads to an equilibrium that is more efficient than the atomistic equilibrium, and correspondingly offers smaller potential efficiency gains from congestion pricing. Existence of PSNE and self-internalization: large and small users In this section we modify the example in Section 5. We now assume that traffic is controlled by one large user, user A, with a vehicle fleet of measure N A , and a group of homogeneous small users with a measure N o . For ease of reference, vehicles in user A's fleet are called "large vehicles" and vehicles driven by small users are called "small vehicles". Large vehicles have the same trip-timing preferences as in Section 5. Their unit costs are denoted by α A , β A , and γ A . Their desired arrival times are uniformly distributed over the interval [t * s , t * e ] with a range of ∆ ≡ t * e -t * s . For future use we define δ ≡ s∆/N A . The existence and nature of PSNE depend on how the trip-timing preferences of small vehicles compare with those of large vehicles. We adopt a specification that allows the preferences to be either the same, or different in a plausible and interesting way. Small vehicles have step preferences with unit costs of α, β, and γ. The cost of late arrival relative to early arrival is assumed to be the same as for large vehicles so that γ/β = γ A /β A . The distribution of desired arrival times is also the same as for large vehicles. 22 Small vehicles and large vehicles are allowed to differ in the values of β/α and β A /α A . The ratio β A /α A measures the cost of schedule delay relative to queuing time delay for large vehicles. It determines their flexibility with respect to arrival time, and hence their willingness to queue to arrive closer to their desired time. If β A /α A is small, large vehicles are flexible in the sense that they are willing to reschedule trips in order to avoid queuing delay. Conversely, if β A /α A is big, large vehicles are inflexible. Ratio β/α has an analogous interpretation for small vehicles. To economize on writing, we use the composite parameter θ ≡ β A /α A β/α to measure the relative flexibility of the two types. We consider two cases. In Case 1, θ ≤ 1 so that large vehicles are (weakly) more flexible than small vehicles. To fix ideas, small vehicles can be thought of as morning commuters with fixed work hours and relatively rigid schedules. Large vehicles are small trucks or vans that can make deliveries within a broad time window during the day. We show below that for a range of parameter values, a PSNE exists in which large vehicles depart at the beginning and end of the travel period without queuing. Small vehicles queue in the middle of the travel period in the same way as if large vehicles were absent. In Case 2, θ > 1 so that large vehicles are less flexible than small vehicles. This would be the case if large vehicles are part of a just-in-time supply chain, or have to deliver products to receivers within narrow time windows. 23 We show that for a range of parameter values a PSNE exists in which large vehicles depart simultaneously with small vehicles and encounter queuing delays. The PSNE is identical to the atomistic PSNE in which user A disregards the congestion externalities that its vehicles impose on each other. Cases 1 and 2 are analyzed in the following two subsections. 
Case 1: Large vehicles more flexible than small vehicles

In Case 1, large vehicles are more flexible than small vehicles. In the atomistic PSNE, large vehicles depart at the beginning and end of the travel period, and small vehicles travel in the middle. A queue exists throughout the travel period, but it rises and falls more slowly while large vehicles are departing than when small vehicles are departing just before and after the peak.^24 One might expect the same departure order to prevail with self-internalization, but with user A restricting its departure rate to match capacity so that queuing does not occur. The candidate PSNE with this pattern is shown in Figure 3. Large vehicles depart during the intervals (t_As, t_os) and (t_oe, t_Ae).^25 Small vehicles depart during the central interval (t_os, t_oe). The departure schedule for small vehicles and the resulting queue are the same as if user A were absent.

23. Another possibility is that large vehicles are commercial aircraft operated by airlines with scheduled service, while small vehicles are private aircraft used mainly for recreational purposes.
24. This departure pattern was studied by [START_REF] Arnott | Schedule delay and departure time decisions with heterogeneous commuters[END_REF] and [START_REF] Arnott | The welfare effects of congestion tolls with heterogeneous commuters[END_REF].

If the candidate departure schedule in Figure 3 is a PSNE, neither small vehicles nor any subset of large vehicles can reduce their travel costs by deviating. The requisite conditions are identified in the two-part assumption:

Assumption 3: (i) θ ≤ 1. (ii) α_A ≥ (β_A + γ_A)(1 − δ).

The following proposition identifies necessary and sufficient conditions for the pattern in Figure 3 to be a PSNE.

Proposition 2. Let Assumption 3 hold. Then the departure pattern in Figure 3 is a PSNE.

The key to the proof of Proposition 2 is to show that user A cannot profitably deviate from the candidate PSNE by rescheduling vehicles departing after t_oe to a mass departure at t_os. Forcing vehicles into the bottleneck as a mass just as small vehicles are beginning to depart allows user A to reduce the total schedule delay costs incurred by its fleet. Doing so at t_os is preferable to later because, with θ ≤ 1, large vehicles have a lesser willingness to queue than small vehicles. Queuing delay is nevertheless unavoidable because vehicles that depart later in the mass have to wait their turn. This trade-off is evident in the condition α_A ≥ (β_A + γ_A)(1 − δ). Moreover, the more dispersed desired arrival times are, the lower the fleet's costs in the candidate PSNE, and hence the less user A stands to gain from rescheduling. If δ > 1, rescheduling vehicles actually increases their schedule delay costs because they arrive too quickly relative to their desired arrival times. Rescheduling then cannot possibly be beneficial.

Given the benchmark parameter ratios β:α:γ = 1:2:4, the condition α_A ≥ (β_A + γ_A)(1 − δ) simplifies to δ ≥ 3/5, or ∆ ≥ (3/5)(N_A/s). In words: the range of desired arrival times for vehicles in the fleet must be at least 60 percent of the aggregate time required for them to traverse the bottleneck. This condition is plausible, at least for road users.

As noted above, the atomistic PSNE features the same order of departures and arrivals as the internalized PSNE, but with queuing by large vehicles as well as small vehicles.
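The existence condition in Assumption 3 is straightforward to check numerically. The sketch below is a hypothetical illustration (parameter values invented for the example); with the benchmark ratios it recovers the simplification δ ≥ 3/5 discussed above.

# Sketch: check Assumption 3 for the no-queuing PSNE of Case 1.
# delta = s * Delta / N_A, where Delta is the range of desired arrival times in
# user A's fleet. Parameter values below are illustrative only.

def assumption_3_holds(alpha_A, beta_A, gamma_A, theta, delta):
    """(i) theta <= 1 and (ii) alpha_A >= (beta_A + gamma_A) * (1 - delta)."""
    return theta <= 1.0 and alpha_A >= (beta_A + gamma_A) * (1.0 - delta)

# Benchmark ratios beta_A:alpha_A:gamma_A = 1:2:4 and theta = 1:
# condition (ii) then reduces to delta >= 3/5.
for delta in (0.5, 0.6, 0.8):
    print(delta, assumption_3_holds(alpha_A=2.0, beta_A=1.0, gamma_A=4.0,
                                    theta=1.0, delta=delta))
# Prints False for delta = 0.5 and True for delta = 0.6 and 0.8.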
It is easy to show that both large vehicles and small vehicles incur lower travel costs with self-internalization. Thus, self-internalization achieves a Pareto improvement.

[START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] show that a PSNE without queuing exists for a symmetric duopoly and homogeneous users if α ≥ γ. We have effectively replaced one of the duopolists with a continuum of small users. The condition for a PSNE here (with δ = 0) is α_A ≥ β_A + γ_A. This is more stringent than for the duopoly with the same unit costs. Hence, counterintuitively, the mixed market with a large user and small users may not have a PSNE even if a PSNE exists for both the less concentrated atomistic market and the more concentrated duopoly. While this nonmonotonic variation in behavior is intriguing, it complicates the analysis of equilibrium with large users.

Case 2: Large vehicles less flexible than small vehicles

In Case 2, θ > 1 so that large vehicles are less flexible than small vehicles. Large vehicles prefer to travel in the middle of the travel period to reduce their schedule delay costs. However, queuing will be inevitable because small vehicles prefer the same range of arrival times. To meet the requirements of Theorem 1 for an internalized PSNE with queuing, it is necessary to assume that ∆ > N_A/s. Given an additional assumption identified in Assumption 4 below, the internalized PSNE turns out to be identical to the atomistic PSNE. All large vehicles thus travel during a queuing period, and depart at the same time as in the atomistic PSNE.^28 Thus, in contrast to Case 1, the large user's incentive to internalize self-congestion has no effect on either its fleet or small users.

The candidate departure schedule in Figure 4 is an internalized PSNE if and only if neither small vehicles nor any subset of large vehicles can reduce their travel costs by deviating. The three requisite conditions are identified in Assumption 4:

Assumption 4: (i) θ > 1. (ii) ∆ > N_A/s. (iii)

N_A/s < (θ − 1) · [βγ/(α(β + γ))] · [(N_A + N_o)/s − ∆].    (32)

Using Assumption 4, the internalized PSNE is stated as:

Proposition 3. Let Assumption 4 hold. Then the departure pattern in Figure 4 is a PSNE. Large vehicles depart during the queuing period and all arrive on time. Small vehicles arrive at a complementary rate so that the bottleneck is fully utilized. The aggregate departure rate and queuing time are the same as if all vehicles were small.

Proof: See the Appendix.

The roles of Conditions (i) and (ii) in Assumption 4 were explained above. Condition (iii) assures that user A's fleet is small enough that it prefers to schedule all its vehicles on time during the queuing period, rather than scheduling some vehicles before queuing begins at t_os.^29

Conclusions

In this paper we have studied trip-timing decisions by large users in the Vickrey bottleneck model of congestion. We believe that the model is representative of many transportation settings, including airlines scheduling flights at airports, rail companies operating on rail networks, and freight shippers using congested roads. We build on previous studies of trip-timing decisions by large users in three ways: (i) we allow for the presence of small users; (ii) we consider general trip-timing preferences; and (iii) we allow for heterogeneity of trip-timing preferences within a large user's fleet as well as between large and small users. Our paper makes two main contributions.
First and foremost, it identifies conditions under which a Nash equilibrium in pure strategies exists in a setting in which large users make trip-timing decisions simultaneously and queue in a dynamic model of congestion with realistic propagation of delays. More specifically, we show that if vehicles in a large user's fleet have sufficiently diverse trip-timing preferences, a PSNE in which the large user queues may exist. We also provide an example in which the conditions for existence of a PSNE become less stringent as the number of large users increases. Second, we illustrate how self-internalization can affect equilibrium travel costs. In two of the three examples presented, self-internalization reduces costs for all users. In the first example with symmetric large users (Section 5), the cost savings are substantial and can be nearly as large as for a monopolistic user that controls all the traffic. In the second example with one large user and a group of small users, all parties also gain if the large user schedules its fleet during the off-peak period without queuing. However, in the third example in which the large user travels during the peak, the equilibrium is identical to the atomistic PSNE so that no one benefits. The three examples illustrate that the effects of self-internalization depend on both market structure and the trip-timing preferences of users. The analysis of this paper can be extended in various directions. One is congestion pricing: either in the form of an optimal fine (i.e., continuously time-varying) toll that eliminates queuing, or a more practically feasible step-tolling scheme. Although the gains from self-internalization can be substantial, there is still scope to improve welfare by implementing congestion pricing. Indeed, this is what Verhoef and Silva (2017) find using the Henderson-Chu model for the case of large users with homogenous trip-timing preferences. A second topic is mergers or other measures to enable users to coordinate their trip-timing decisions gainfully without intervention by an external authority using either tolls or direct traffic control measures. It is not obvious from our preliminary results which users, if any, stand to gain by merging, how a merger would affect other users, and whether there is a case for regulation. A third extension is to explore more complex market structures and different types of user heterogeneity. Ride-sharing companies or so-called Transportation Network Companies (TNCs) have become a major mode of passenger transportation in some cities and evidence is emerging that they are contributing to an increase in vehicle-km and congestion [START_REF] Clewlow | Disruptive transportation: the adoption, utilization, and impacts of ride-hailing in the united states[END_REF]The New York Times, 2017). In Manhattan, the number of TNCs exceeds the number of taxis. Transportation services are offered by six types of operators in all: yellow cabs that must be hailed from the street, for-hire vehicles or black cars that must be booked, and four TNC companies: Uber, Lyft, Via, and Juno [START_REF] Schaller | Empty seats, full streets: Fixing manhattan's traffic problem[END_REF]. 30 The firms differ in their operations and fare structures. Their trip-timing preferences are also dictated by those of their customers. The size of a firm's fleet is not fixed, but varies by time of day and day of week according to when drivers choose to be in service. 
The simple Vickrey model would have to be modified to incorporate these user characteristics. A fourth topic that we are studying is whether self-internalization by a large user can make other users worse off, or even leave the large user itself worse off. Such a result is of policy interest because it suggests that the welfare gains from congestion pricing of roads, airports and other facilities in which large users operate could be larger than previously thought. Figure A.5: Candidate PSNE with C A aa < 0 ∂ 2 r (t, a, k) ∂k 2 = C a (C t C akk -C a C tkk ) + 2C ak (C a C tk -C t C ak ) C 3 a ≥ 0, ∂ r (t, a, k) ∂a = s C 2 a (C t C aa -C a C ta ) = s C 2 a C t C aa s = -C aa , where s = means identical in sign. Appendix A.3. Proof of Lemma 2 with C A aa < 0 Consider Figure A.5, which depicts a candidate PSNE similar to that in Figure 1, but with C A aa < 0 so that curve R-A (t) is convex rather than concave. Suppose that user A deviates from the candidate PSNE during the interval (t A , t B ) by dispatching its vehicles earlier so that section ADB of R (t) shifts leftwards to R (t). Vehicle k = RA (t D ) originally scheduled to depart at point D and time t D is rescheduled earlier to point E and time t E such that distance Ey equals distance Dz. Vehicle k experiences a change in costs of ∆C A (k) = C A (t E , t E + q (t E ) , k) -C A (t D , t D + q (t D ) , k) . Let q (t) denote queuing time along the path from point D to point E shown by the dashed blue curve that runs parallel to R-A (t) between points y and z. The change in cost can be written ∆C A (k) = - t D t=t E C A t (t, t + q (t) , k) + C A a (t, t + q (t) , k) 1 + dq (t) dt dt = - t D t=t E C A t (t, t + q (t) , k) + C A a (t, t + q (t) , k) r A t, t + q (t) , RA (t) s dt = - 1 s t D t=t E C A a (t, t + q (t) , k) r A t, t + q (t) , RA (t) -r A (t, t + q (t) , k) dt. Since q (t) > q (t) for t ∈ (t A , t B ), with C A aa < 0 and for any j, rA (t, t + q (t) , j) < rA (t, t + q (t) , j). If user A's fleet is homogeneous, the expression in braces is negative, ∆C A (k) < 0, and rescheduling the vehicle from D to E reduces its trip cost. Appendix A.4. Proof of Theorem 1 (Section 4) Using Eq. ( 1), the term in braces in (31) can be written Z = RA (t) j=k   ∂ r A (t, t + q (t) , j) ∂j + 1 s ∂ r A t, t + q (t) -j-k s , k ∂a   dj. (A.6) A sufficient condition for Z to be positive is that the integrand be positive for all values of j. Given Assumption 2, there is a one-to-one monotonic correspondence between j and t * j . The integrand in (A.6), z, can therefore be written z = ∂ r A (t, t + q (t) , j) ∂t * j 1 f t * j - 1 s ∂ r A t, t + q (t) -j-k s , k ∂t * j . (A.7) Now ∂ r A (t, t + q (t) , j) ∂t * j - ∂ r A t, t + q (t) -j-k s , k ∂t * j = j n=k ∂ 2 r A t, t + q (t) -n-k s , n ∂ (t * n ) 2 1 f (t * n ) + 1 s ∂ 2 r A t, t + q (t) -n-k s , n ∂a∂t * n dn = j n=k ∂ 2 r A t, t + q (t) -n-k s , n ∂ (t * n ) 2 1 f (t * n ) - 1 s dn (A.8) By (4), the second derivative is positive, and by assumption, f (t * n ) ≤ s for all t * n . Hence (A.8) is positive. Using this result in (A.7) we have z ≥ ∂ r A t, t + q (t) -j-k s , k ∂t * j   1 f t * j - 1 s   > 0. This establishes that Z > 0 in (A.6), and hence that ∆C A (k) > 0. s (t q -t s ) + m m -1 α α -β s t -t q = s (t * -t s ) . (A.12) Eq. (A.9) stipulates that all vehicles complete their trips. Eq. (A.10) states that the first and last vehicles incur the same private cost. Eq. (A.11) stipulates that cumulative departures equal N . Finally, according to eq. 
(A.12) total departures from t s to t equals the number of vehicles that arrive early. Solving (A.9)-(A.12), it is possible to show after considerable algebra that total costs in the candidate PSNE are T C i = (m -1) (2m -1) βγ + mαγ -(m -1) αβ 2mγ (α + (m -1) β) βγ β + γ N 2 s . Total costs in the atomistic PSNE are T C n = βγ β + γ N 2 s . When vehicles differ in their desired arrival times, schedule delay costs are reduced by the same amount in the two PSNE. The departure rate is unchanged in the candidate PSNE with internalization. The difference in total costs is thus the same with and without heterogeneity so that, as stated in the text T C n -T C i = (m -1) β (α + γ) + mαγ 2m (m -1) βγ + 2mαγ βγ β + γ N 2 s . Appendix A.5.3. Proof of Proposition 2 It is necessary to show that neither user A nor a small user can gain by deviating from the candidate PSNE. In all, seven types of deviations need to be considered-Deviation 1. A small user cannot gain by deviating. Small users incur the same cost throughout the candidate departure interval (t os , t oe ). Hence, they cannot gain by retiming their trips within this interval. Rescheduling a trip either before t os or after t oe would clearly increase their cost. Thus, no small user can benefit by deviating. During the no-queuing period, the bottleneck is used to capacity. It is therefore necessary to distinguish between the cost that user A saves by removing a vehicle from the departure schedule (which does not affect the costs of other vehicles in the fleet) and the cost user A incurs by adding a vehicle (which creates a queue unless the vehicle is added at t Ae ). The respective costs are 33 : C - A (t) = β A • (t * -t) , t ∈ [t As , t os ] γ A • (t -t * ) , t ∈ [t oe , t Ae ] , C + A (t) =        β A • (t * -t) + α A -β A s • tos t r A (u) du + α A +γ A s • t Ae toe r A (u) du, t ∈ [t As , t oe ] γ A • (t -t * ) + α A +γ A s • t Ae t r A (u) ∆C A = -C - A (t) + C + A t = -γ A • t -t + α A + γ A s • t t r A (u) du = -γ A • t -t + α A + γ A s • s t -t = α A t -t > 0. Since fleet costs increase, the deviation is not gainful. (ii). Rescheduling late to early: The best time to reschedule a vehicle is t os because this minimizes the vehicle's early-arrival cost as well as the queuing delay imposed on the rest of the fleet. But rescheduling the vehicle to t os is no better (or worse) than rescheduling it to t oe , which is not beneficial as per case (i). (iii). Rescheduling early to late: The best option in this case is to reschedule a vehicle from t As . However, the gain is the same as (or worse than) from rescheduling a vehicle from t Ae , and this is not beneficial as per case (i). Rescheduling early to late therefore cannot be beneficial. (iv). Rescheduling early to early: The best option in this case is to reschedule a vehicle from t As to t os . Again, this is not beneficial for the same reason as in case (iii). Deviation 4. User A cannot gain by rescheduling a single vehicle to a time within the queuing period, (t os , t oe ). For any vehicle in user A's fleet that is scheduled to depart early at t, there is another vehicle scheduled to depart late at t that incurs the same cost (this follows from symmetry of the t * A distribution). Removing either vehicle saves the same cost: C - A (t) = C - A (t ). However, removing the early vehicle and inserting it at any time during the queuing period creates a (small) queue that persists until t Ae . 
Removing the late vehicle creates a queue only until t because the queue disappears during the departure-time slot opened up by the rescheduled vehicle. Rescheduling a late vehicle is therefore preferred. The best choice is to reschedule the first late-arriving vehicle at t oe so that no later vehicles in the fleet are delayed. Rescheduling a vehicle from t > t oe would reduce that vehicle's cost by more, but a queue would persist from t oe until t . The fleet's schedule delay costs would therefore not be reduced, and a greater queuing cost would be incurred as well. Given θ ≤ 1, rescheduling a vehicle from t oe to any time t ∈ (t os , t oe ) will (weakly) increase its cost. So rescheduling it not gainful. But if θ > 1, the vehicle will benefit. Hence the candidate can be a PSNE only if θ ≤ 1 as per Proposition 2. Deviation 5. User A cannot gain by rescheduling a positive measure of its fleet (i.e., a mass of vehicles) to times within the departure period when there is no queue. If user A reschedules a positive measure of vehicles to depart during (t As , t os )∪(t oe , t Ae ), queuing will occur during some nondegenerate time interval. By Lemma 1, user A is willing to depart at a positive and finite rate during early arrivals only if r -A (t) = rA = α A • s/ (α A -β A ) > 0. Since no other users depart at t, r -A (t) = 0 and user A is better off scheduling vehicles later. Similarly, for late arrivals user A is willing to depart at a positive and finite rate only if r -A (t) = α A • s/ (α A + γ A ). Since r -A (t) = 0, user A is again better off scheduling vehicles later. Deviation 6. Any deviation by user A involving multiple mass departures is dominated by a deviation with a single mass departure. Suppose that user A deviates from the candidate PSNE by scheduling multiple mass departures. All vehicles in the fleet are assumed to depart in order of their index, including vehicles within the same mass. (This assures that fleet costs in the deviation cannot be reduced by reordering vehicles.) We show that such a deviation is dominated by a single mass departure. The proof involves establishing three results: (i) Fleet costs can be reduced by rescheduling any vehicles that are not part of a mass, but suffer queuing delay, to a period without queuing. (ii) Fleet costs can be reduced by rescheduling any vehicles in a mass departure after t to a period without queuing. (iii) Any deviation with multiple mass departures launched before t entails higher fleet costs than a deviation with a single mass departure at t os . These three results show that the candidate PSNE need only be tested against a single mass departure launched at t os . Result (i): When a queue exists, user A is willing to depart at a positive and finite rate only if condition (21) is satisfied; i.e. r -A (t) = rA (t). For any vehicle that arrives early this requires r -A (t) = rAE = α • s/ (α -θβ) > 0, and for any vehicle that arrives late, r -A (t) = rAL = α • s/ (α + θγ) > 0. During the departure period (t As , t os ), r -A (t) = 0, so user A is better off scheduling all vehicles in the mass later. During the departure period t os , t , r -A (t) = α • s/ (α -β). Since θ ≤ 1, r -A (t) ≥ rAE > rAL and user A is (weakly) better off scheduling all vehicles in the mass earlier. During the departure period t, t oe , , r -A (t) = α • s/ (α + γ) ≤ rAL < rAE . User A is (weakly) better off scheduling all vehicles later. 
Finally, during the departure period (t oe , t Ae , ), r -A (t) = 0 and user A is again better off scheduling vehicles later. Result (ii): Assume that user A launches the last mass departure after t. We show that user A can reduce its fleet costs by rescheduling vehicles in the mass to a later period in which they avoid queuing delay. This is true whether or not each vehicle in the mass is destined to arrive early or late relative to its individual t * . By induction, it follows that all mass departures launched after t can be gainfully rescheduled. In what follows it is convenient to use the auxiliary variable λ -A t,t ≡ t t r -A (u) du/ (s • (t -t )) which denotes average departures of small users as a fraction of capacity during the period [t , t]. Suppose user A launches the last mass departure at time t L with M vehicles. Assume first that at t L there is a queue with queuing time q (t L ). We show that postponing the mass departure to a later time when a queue still exists reduces user A's fleet costs. By induction, it follows that postponing the mass until the queue disappears is gainful. Let j be the vehicle that departs in position m of the mass, m ∈ [0, M ]. Let D j [•] be the schedule delay cost function of vehicle j, and c (j, t) its trip cost if the mass departs at time t. If the mass departs at time t L , vehicle j incurs a cost of c (j, t L ) = α • q (t L ) + m s + D j t L + q (t L ) + m s . (A.13) If the mass departure is postponed to time t L > t L , and a queue still exists at t L , vehicle j incurs a cost of .14) By Result (i), user A does not depart during (t L , t L ) because a queue persists during this period. Hence .15) Substituting (A.15) into (A.14), and using (A.13), one obtains c j, t L = α • q t L + m s + D j t L + q t L + m s . ( A q t L = q (t L ) + t L t L r -A (u) -s s du = q (t L ) -t L -t L 1 -λ -A t L ,t L . ( A c j, t L -c (j, t L ) = -α • 1 -λ -A t L ,t L t L -t L (A.16) +D j t L + q (t L ) + m s + λ -A t L ,t L t L -t L -D j t L + q (t L ) + m s . The value of λ ) so that λ -A t L ,t L = α α+γ . If t L > t oe , λ -A t L ,t L < α α+γ . Hence, λ -A t L ,t L ≤ α α+γ for all values of t L and the first line of (A.16) is negative. For the second line there are three possibilities to consider according to when vehicle j arrives: (a) early both before and after the mass is postponed, (b) early before postponement and late after, and (c) late both before and after postponement. In case (a), the second line of (A.16) is negative, in case (c) it is positive, and in case (b) the sign is a priori ambiguous. To show that (A.16) is negative it suffices to show this for case (c). The second line is an increasing function of λ -A t L ,t L . Using λ -A t L ,t L ≤ α α+γ and D j [x] = γx for x > 0, (A.16) yields c j, t L -c (j, t L ) ≤ -α • γ α + γ t L -t L + θγ • α α + γ t L -t L < 0. This proves that postponing the mass departure (weakly) reduces the cost for every vehicle in the mass. We conclude that if there is a queue when the last mass departs, user A can (weakly) reduce its fleet costs by postponing the mass departure to the time when the queue just disappears (user A's later vehicles are not affected by postponing the mass). To see this, let j be the index of the vehicle that departs in position m in the mass. In the mass departure, vehicle j incurs a cost of c (j, t L ) = D j t L + m s + α • m s . In the deviation where vehicle j delays departure until t L = t L + m/s, it incurs a cost of c j, t L = D j t L + m s . 
Its cost changes by The remaining vehicles in the first mass also incur lower queuing costs since they no longer queue between t E and t E . Vehicles in the second mass that departs at t E still depart and arrive at the same time because the same number of vehicles depart before them, and the bottleneck operates at capacity throughout. c j, t L -c (j, t L ) = -α • m s < 0. Case 2 : t E < t os < t E . The second mass is scheduled after small users start to depart. If the queue from the first mass disappears before t os , the reasoning for Case 1 applies. If the queue from the first mass does not disappear before t os , the queue will not dissipate until after small users have stopped departing at t oe . However, user A can still reduce its fleet costs by rescheduling some of the M vehicles in the first mass to t os , and rescheduling the remainder to the head of the second mass at t E . Case 3 : t os ≤ t E < t E . The first mass departs when, or after, small users begin to depart. In this case, user A can reduce its fleet costs by rescheduling the second mass to depart immediately after the first mass. To show this, let q (t), t ≥ t E , denote queuing time after the first mass of M vehicles departs. Let j be the index of the vehicle that departs in position m of the second mass, where m ∈ [0, M ]. Vehicle j arrives at time a j = t E + q (t E ) + m/s and incurs a cost of c j, t E = α q t E + m s + D j a j . If the second mass is instead dispatched immediately after the first mass at t E , vehicle j arrives at time a j = t E + q (t E ) + m/s and incurs a cost of c (j, t E ) = α q (t E ) + m s + D j [a j ] . The cost saving is c j, t E -c (j, t E ) = α q t E -q (t E ) + D j a j -D j [a j ] . (A.17) Now a j = a j + t E -t E + q t E -q (t E ) , (A.18) and (A.20) where ∆q -A ≡ α α-β (t E -t E ) is the gross contribution of small users to queuing time during the period (t E , t E ). The weak inequality in (A.20) holds as an equality if vehicle j arrives early when the second mass departs at t E . The inequality is strict if vehicle j arrives late. q t E -q (t E ) = β α -β t E - Since this conclusion holds for all vehicles in the second mass, user A can reduce its costs by merging the later mass with the earlier mass. By induction, all but one of any mass departures launched before t can be eliminated in a way that decreases user A's fleet costs. Using similar logic, it is straightforward to show that user A can do no better than to schedule the single mass at t os rather than later. In summary, results (i)-(iii) show that, of all deviations from the candidate PSNE entailing mass departures, a deviation with a single mass departure launched at t os is the most viable. Deviation 7. User A cannot gain by rescheduling a positive measure of its fleet to times during the queuing period (t os , t oe ) . To prove that Deviation 7 is not gainful, we must determine whether total fleet costs can be reduced by deviating from the candidate PSNE. Since user A has weaker preferences for on-time arrival than small users, user A prefers not to schedule departures in the interior of (t os , t oe ). User A's best deviation is to schedule a mass departure at t os . Let N Am be the measure of vehicles in the mass. If N Am is small, the best choice is to reschedule the first vehicles departing late during the interval (t oe , t oe + N Am /s). (As explained in proving that Deviation 4 is not beneficial, this strategy avoids queuing delay for large vehicles that are not part of the mass.) 
The first of the rescheduled vehicles has a preferred arrival time of t * . In the candidate PSNE, this vehicle incurs a cost The deviation is unprofitable if T C d dev ≥ T C c dev ; that is, if .28) When condition (A.28) is met, user A cannot profit by rescheduling some vehicles from the early-departure interval (t As , t os ) in addition to all large vehicles from the late-departure interval (t oe , t Ae ). To see why, note that the net benefit from rescheduling the vehicle at t As is the same as the net benefit from rescheduling the vehicle at t Ae . The benefit from rescheduling vehicles after t os is lower. C A t α A ≥ (β A + γ A ) (1 -δ) . (A Appendix A.5.4. Proof of Proposition 3 The aggregate departure rate is given by Eq. ( 5) The last large vehicle imposes no delay on others in the fleet, whereas the first large vehicle imposes a delay of 1/s on all the others. The first vehicle can be rescheduled to just before the travel period at a lower cost than the other vehicles. Thus, if deviation from the candidate PSNE is profitable, it must be profitable to reschedule the vehicle departing at t As to t os . It is straightforward to show that user A can retime departures of the remaining large vehicles so that they continue to arrive on time and incur no schedule delay cost. The net gain to the other large vehicles is therefore α A N A /s. The first vehicle incurs a cost of (A.29) in the candidate PSNE, and a cost of (β A β/α) (t * s -t os ) if it rescheduled. The net change in costs for the fleet is r (t ∆T C A = β A -α A β α γ β + γ N A + N o s -∆ -α A N A s = (θ -1) α A βγ α (β + γ) N A + N o s -∆ -α A N A s . Deviation is not profitable if this difference is positive, which is assured by condition (32). depicts a candidate PSNE on the assumption that C A aa > 0. (The case C A aa < 0 is considered below.) Cumulative departures of other users, R-A (t), are shown by the blue curve passing through points y and z. Cumulative total departures, R (t), are shown by the black curve passing through points A, D and B. Figure 1 : 1 Figure 1: Candidate PSNE with C A aa > 0 Figure 2 : 2 Figure 2: PSNE with two large users Figure 3 : 3 Figure 3: PSNE in which large user does not queue (Case 1) Figure 4 : 4 Figure 4: PSNE in which large user does not queue (Case 2). of the rescheduled vehicles is the unweighted mean of eqs. (A.24) and (A.25). Total costs for the rescheduled vehicles are therefore T C d dev = β A t * -t os + α A -β A (1denotes the deviation. Given (A.23) and (A.26), the change in total costs is T C d dev -T C c dev = [α A -(β A + γ A ) (1 - Henderson-Chu model. First, vehicles departing at any given time never interact with vehicles departing at other times. 8 Second, compared to the bottleneck model discussed below, the Henderson-Chu model is less analytically tractable, and for most functional forms it can only be solved numerically. The second paper to adopt Nash equilibrium, by Silva et al. (2017), uses the Vickrey (1969) bottleneck model in which congestion takes the form of queuing behind a bottleneck with a fixed flow capacity. Silva et al. consider two large users controlling identical vehicles with linear trip-timing preferences. In contrast to Verhoef and Silva (2017), Silva et al. find that under plausible parameter assumptions a PSNE in departure times does not exist. They also prove that a PSNE never exists in which large users queue. These results readily generalize to oligopolistic markets with more than two large users. Silva et al. 
also show that more than one PSNE may exist in which no queuing occurs, and that ex ante identical users can incur substantially different equilibrium costs. These results are disturbing given the fundamental importance of existence and uniqueness of equilibrium for equilibrium models. The unease is heightened by the facts that the bottleneck model is widely used, and that when all users are small a unique PSNE with a deterministic and finite departure rate exists under relatively unrestrictive assumptions. 9 8 In essence, this means that every infinitesimal cohort of vehicles travels independently of other cohorts and is unaffected by the volume of traffic that has departed earlier -contrary to what is observed in practice. The Henderson-Chu model is a special case of the Lighthill-Whitham-Richards hydrodynamic model in which shock waves travel at the same speed as vehicles and therefore never influence other vehicles (see Theorem 1. Consider large user A with a heterogeneous vehicle fleet that satisfies Assumption 2. If the density of desired arrival times in user A's fleet never exceeds bottleneck capacity (i.e., f (t * k according to a density function f (t * k ) over a range [t * s , t * e ]. The following result is proved in the Appendix: over the interval [t * s , t * e ] where ∆ ≡ t * e -t * s . It can be shown that introducing heterogeneity in this way does not upset the proof in Silva et al. (2017) that a PSNE without queuing does not exist. However, a PSNE with queuing does exist if the conditions of Theorem 1 are met. Both conditions of Assumption 2 are satisfied. The remaining condition, f (t * candidate PSNE with queuing is shown in Figure 2. 21 The cumulative distribution of desired arrival times for users A and B together is shown by the straight line W with The domain [t * s , t * e k ) ≤ s, is also met if N/ (2∆) ≤ s, or ∆ ≥ N/ (2s). Table 1 : 1 Proportional cost saving from internalization as a function of m 22 Within limits, this assumption can be relaxed. Suppose that t * is uniformly distributed over the interval [t * so , t * eo ] . The existence and nature of PSNE with self-internalization are unaffected if two conditions are satisfied. First, t β β+γ t * so + γ β+γ t * eo = β β+γ t * s + γ β+γ t * * eo -t * so ≤ No/s. This condition assures that small vehicles queue in the PSNE. Second, e . This condition assures that small vehicles and large vehicles adopt the same queuing pattern in the atomistic PSNE. Given this assumption, the atomistic PSNE is as shown in Figure 4. Large vehicles depart during the interval (t As , t Ae ) and arrive at rate N A /∆ over the interval [t * s , t * e ]. Each large vehicle arrives on time. Small vehicles arrive at rate s -N A /∆ during this interval, and at rate s during the rest of the interval [t os , t oe ]. The aggregate departure rate and queuing time are the same as if all vehicles were small. 27 Deviation 2. User A cannot gain by rescheduling vehicles outside the departure period (t As , t Ae ). User A does not queue in the candidate PSNE. Large vehicles therefore do not delay each other. Moreover, the highest costs are borne by the first and last vehicles departing at t As and t Ae , respectively. Rescheduling any vehicles either before t As or after t Ae would increase user A's fleet cost. Deviation 3. User A cannot gain by rescheduling a single vehicle to another time within the departure period when there is no queue; i.e. to any time t ∈ (t As , t os ) ∪ (t oe , t Ae ). 
32 32 Much of the following text is drawn, verbatim, from Silva et al. (2017). du, t ∈ [t oe , t Ae ] Rescheduling late to late: Rescheduling a late vehicle to a later time is never beneficial because the vehicle's trip cost increases, and other vehicles in the fleet do not benefit. Suppose a vehicle is rescheduled earlier from t to t where t oe ≤ t < t. User A's fleet costs change by an amount: . A vehicle can be rescheduled in four ways: (i) late to late, (ii) late to early, (iii) early to late, and (iv) early to early. Consider each possibility in turn. (i). -A t L ,t L depends on the timing of t L and t L . If t L ≤ t oe , small users depart at rate α α+γ • s throughout the interval (t L , t L Every vehicle enjoys a reduction in queuing time cost with no change in schedule delay cost. Hence, in any deviation from the candidate PSNE, fleet costs can be reduced by eliminating the last mass departure. By induction, any mass departure launched after t can be rescheduled without increasing fleet costs.Next, we show that any deviation entailing multiple mass departures before t is dominated by scheduling a single mass departure at t os . Suppose that more than one mass departure is scheduled before t. Assume the first mass is launched at time t E with M vehicles, and the second mass is launched at time t E with M vehicles. There are three cases to consider depending on the timing of t E and t E . Case 1 : t E < t E ≤ t os . Both masses are scheduled before small users start to depart. Since r -A (t) = 0 for t < t E , by Result (i), there is no queue at t E . If the queue from the first mass disappears before t E , as in the proof of Result (ii), user A can reduce its fleet costs simply by rescheduling vehicles in the first mass to depart at a rate of s during Result (iii): t ∈ (t E , t E ). Since user A does not depart in the original deviation until the first queue has dissipated, the rescheduled vehicles in the alternative deviation avoid queuing and arrive at the same time -thereby reducing their queuing delay costs without affecting their schedule delay costs. If the queue from the first mass does not disappear before t E , user A can still reduce its fleet costs by rescheduling s • (t E -t E ) vehicles at a rate s during (t E , t E ), and letting the remaining M -s • (t E -t E ) vehicles join the head of the second mass at t E . The first set of vehicles in the first mass avoids queuing and incur the same schedule delay costs. oe , t oe , t * = γ A t oe -t * .(A.21) The last of the rescheduled vehicles has a preferred arrival time of t * + δN Am /s. It incursThe average cost of the rescheduled vehicles is the unweighted mean of eqs. (A.21) and(A.22). Total costs for the N Am vehicles before they are displaced are thereforeT C c dev = γ A t oe -t * + γ A (1 -δ) N Am 2s N Am , (A.23)where superscript c denotes the candidate PSNE.The first of the rescheduled vehicles departs at t os and incurs a costC A t os , t os , t * = β A t * -t os . (A.24)The last of the rescheduled vehicles incurs a cost of C A t os , t os + N a cost C A t oe + N Am s , t oe + N Am s , t * + δ s N Am = γ A t oe -t * + (1 -δ) N Am s . (A.22) Am s , t * + δ N Am s = β A t * + δ N Am s -(t os + N Am s ) + α A N Am s = β A t * -t os + δ N Am s + (α A -β A ) N Am Clearly, user A cannot reduce the cost for any single vehicle in its fleet by rescheduling it to another time. It is necessary to check that user A cannot reduce its fleet cost by rescheduling a positive measure of vehicles. 
The first and last large vehicles to depart incur the same travel cost of C A (t * s ) = α α-β s for t os < t < α α+γ s for t < t < t oe t . Large vehicles depart at rate r A (t) =            0 α-β α N A ∆ for t As < t < for t < t As α α+γ N A ∆ for t < t < t Ae t 0 for t > t Ae . Critical travel times are t os = t * - γ β + γ N A + N o s , t As = t * s - βγ α (β + γ) N A + N o s -∆ , t = t * - βγ α (β + γ) N A + N o s , t * = β β + γ t * s + γ β + γ t * e , t Ae = t * e - βγ α (β + γ) N A + N o s -∆ , t oe = t * + β β + γ N A + N o s . The nonnegativity constraint on queuing time, q (t) ≥ 0, is guaranteed by (7). The optimality conditions with no queue, which involve multiple cases, are not very instructive. See Leonard and Long (1992, Chapter 8). If vehicles are homogeneous, the order of departure does not matter. The assumption that they depart in the same order is useful for accounting purposes. Figure2is a variant of Figure2in[START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF]. The main difference is that desired arrival times have a nondegenerate distribution rather than being the same for all vehicles. Recall that γA/βA = γ/β. In the candidate PSNE, large vehicles travel in the tails of the departure period. In the system optimum there is no queuing, and the optimal order of departure depends on the ranking of βA and β. If βA < β, large vehicles still travel in the tails, but if βA > β they would travel in the middle. Hence the PSNE may be inefficient not only because queuing occurs, but also because total schedule delay costs are excessive. [START_REF] Newell | The morning commute for nonidentical travelers[END_REF] analyzed a more general version of this arrival pattern in the bottleneck model with small users. See also[START_REF] De Palma | Comparison of morning and evening commutes in the vickrey bottleneck model[END_REF]. Recall Condition (30) which requires that in a PSNE with queuing, all large users have lower atomistic rates than the small users. This condition is satisfied in Case 2 because each large vehicle has a lower atomistic rate than small users after its preferred arrival time. In the candidate PSNE, large vehicles arrive at their individually preferred arrival times because they are less flexible than small vehicles. In the system optimum there is no queuing and, as in Case 1, the optimal order of departure depends on the ranking of βA and β. If βA > β, large vehicles would still be scheduled at their individually preferred arrival times, but if βA < β they would travel in the tails. In addition, in 2013 Street Hail Liveries or green taxies began providing service in Northern Manhattan, theBronx, Brooklyn, Queens, and Staten Island (Taxi and Limousine Commission, 2016). The world's top 5 terminal operators. December 4, https://www.porttechnology.org/news/the worlds top 5 terminal operators. The New York Times (2017). Your uber car creates congestion. should you pay a fee to ride? December 26 (by Winnie Hu), https://www.nytimes.com/2017/12/26/nyregion/ubercar-congestion-pricing-nyc.html?smid tw-nytimes&smtypcur. This formulation of scheduling preferences is due toVickrey (1969Vickrey ( , 1973) ) and has been used in several studies since; see de[START_REF] De Palma | Dynamic traffic modeling[END_REF]. Defining preferences in terms of utility is appropriate for commuting and certain other types of trips. 
For trips involving freight transport, the utility function can be interpreted as profit or some other form of payoff or performance metric. i (t) can be derived by integrating (12) and applying transversality condition (14). $ This research was partially funded by FONDECYT project No 11160294, the Complex Engineering Systems Institute (CONICYT -PIA -FB0816) and the Social Sciences and Humanities Research Council of Canada (Grant 435-2014(Grant 435- -2050)). Tseng, Y., Ubbels, B., and Verhoef, E. (2005). Value of time, schedule delay, and reliabilityestimation results of a stated choice experiment among dutch commuters facing congestion. In Department of Spatial Economics, Free University of Amsterdam. (1973). Pricing, metering, and efficiently using urban transportation facilities. Highway Research Record, 476:36-48. Weisbrod, G. and Fitzroy, S. (2011). Traffic congestion effects on supply chains: Accounting for behavioral elements in planning and economic impact models. In Renko, S., editor, Supply Chain Management-New Perspectives. INTECH Open Access Publisher. where t h and t w are such that all trips take place within the interval [t h , t w ]. Function u h (•) > 0 denotes the flow of utility at the origin (e.g., home), and function u w (•) > 0 denotes utility at the destination (e.g., work). It is assumed that u h (•) and u w (•) are continuously differentiable almost everywhere with derivatives u h ≤ 0 and u w ≥ 0, and ) for some time t * . Utility from time spent traveling is normalized to zero. The cost of a trip is the difference between actual utility and utility from an idealized instantaneous trip at time t Various specification are possible for the flow-of-utility functions. Vickrey (1969) adopted a piecewise constant form: where u h > 0, 0 < u E w < u h , and u L w > u h . The cost function corresponding to (A.2) is: where α = u h , β = u h -u E w , and γ = u L w -u h . Another specification adopted by [START_REF] Fosgerau | The value of travel time variance[END_REF], and called the "slope" model by [START_REF] Börjesson | Valuations of travel time variability in scheduling versus mean-variance models[END_REF], features linear flow-of-utility functions: Preferred travel time is t * = u ho -uwo u h1 +u w1 , and the cost function is where α = u ho -u h1 t * . To assure that the model is well-behaved, departure and arrival times are restricted to values such that u h (t) > 0 and u w (a) > 0. A third specification -used in early studies by Vickrey (1973), [START_REF] Fargier | Effects of the choice of departure time on road traffic congestion: theoretical approach[END_REF], and [START_REF] Hendrickson | Characteristics of travel time and dynamic user equilibrium for travel-to-work[END_REF] -is a variant of (A.4) with u h1 = 0: In (A.5), utility at the origin is constant and schedule delay costs depend on arrival time but not departure time. Cost functions (A.3), (A.4), and (A.5) all satisfy Assumption 1 in the text (with t * in place of k). Appendix A.2. Atomistic departure rates (Section 2) The atomistic rate for a user of type k is given by Eq. (3): Derivatives of specific interest are (with arguments suppressed to economize on notation) Vehicle k departs at time and arrives at time As shown by Silva et al. (2017, Eq. (24c)), If user A deviates from the candidate PSNE so that vehicle k departs at t rather than t k , vehicle k arrives at Vehicle k can benefit from deviation only if a k < t * k : a condition which reduces to ∆ < N A /s. 
Deviation is not profitable if ∆ > N A /s, or equivalently f = N A /∆ < s as per Theorem 1. Appendix A.5.2. Gain from internalization with m users With m large users the aggregate equilibrium departure rate in the candidate PSNE during the period of queuing is given by eq. ( 23): When all vehicles have the same desired arrival time, t * , the critical times t s , t q , t, and t e are determined by the following four equations: t e -t s = N/s, (A.9) β (t * -t s ) = γ (t e -t * ) , (A.10) (A.11)
Celine Hauzy, Florence D. Hulot, Audrey Gins, Michel Loreau

INTRA- AND INTERSPECIFIC DENSITY-DEPENDENT DISPERSAL IN AN AQUATIC PREY-PREDATOR SYSTEM

Dispersal intensity is a key process for the persistence of prey-predator metacommunities. Consequently, knowledge of the ecological mechanisms of dispersal is fundamental to understanding the dynamics of these communities. Dispersal is often considered to occur at a constant per capita rate; however, some experiments demonstrated that dispersal may be a function of local species density. Here we use aquatic experimental microcosms under controlled conditions to explore intra- and interspecific density-dependent dispersal in two protists, a prey Tetrahymena pyriformis and its predator Dileptus sp. We observed intraspecific density-dependent dispersal for the prey and interspecific density-dependent dispersal for both the prey and the predator. Decreased prey density led to an increase in predator dispersal, while prey dispersal increased with predator density. Additional experiments suggest that the prey is able to detect its predator through chemical cues and to modify its dispersal behaviour accordingly. Density-dependent dispersal suggests that regional processes depend on local community dynamics. We discuss the potential consequences of density-dependent dispersal on metacommunity dynamics and stability.

INTRODUCTION

Knowledge of dispersal mechanisms is crucial to understanding the dynamics of spatially structured populations and metacommunities [START_REF] Leibold | The metacommunity concept : a framework for multi-scale community ecology[END_REF]. Such knowledge may also be useful for explaining the response of communities to fragmentation and climate change. Metacommunity dynamics can be influenced by local processes such as intra- and interspecific interactions (Lotka 1925; [START_REF] Rosenzweig | Graphical representation and stability conditions of predator-prey interactions[END_REF][START_REF] Volterra | Variations and fluctuations in the numbers of individuals in animal species living together[END_REF]) and regional processes such as dispersal, which link the dynamics of several local communities [START_REF] Cadotte | Metacommunity influences on community richness at multiple spatial scales: A microcosm experiment[END_REF]. Dispersal is the movement of individuals from one patch (emigration) to another (immigration). Intermediate intensities of dispersal can increase the persistence of prey-predator metacommunities ([START_REF] Crowley | Dispersal and the stability of predator-prey interactions[END_REF]; Holyoak & Lawler 1996a, b; [START_REF] Huffaker | Experimental studies on predation : dispersion factors and predatorprey oscillations[END_REF]; Nachman 1987a; [START_REF] Reeve | Environmental variability, migration, and persistence in host-parasitoid systems[END_REF][START_REF] Zeigler | Persistence and patchiness of predator-prey systems induced by discrete event population exchange mechanisms[END_REF]). Dispersal rate is often considered a constant trait of species, but it may be condition-dependent. In particular, it may depend on the density of species in the local community. Density-dependent dispersal implies a direct interaction between local (population dynamics) and regional (dispersal) processes, which could influence metacommunity dynamics and stability. Many studies have explored dispersal in the context of a single species.
They have shown that dispersal often depends upon a species' own local density [START_REF] Diffendorfer | Testing models of source-sink dynamics and balanced dispersal[END_REF]. We call this effect intraspecific density-dependent dispersal. Dispersal may either increase (positive density-dependent dispersal) or decrease (negative density-dependent dispersal) as population density increases. Positive and negative intraspecific density-dependent dispersal has been observed in mites [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF], insects [START_REF] Fonseca | Density-dependent dispersal of black fly neonates is by flow[END_REF] and vertebrates (French & Travis 2001; [START_REF] Galliard | Mother-offspring interactions affect natal dispersal in a lizard[END_REF][START_REF] Matthysen | Density-dependent dispersal in birds and mammals[END_REF]; see for review [START_REF] Matthysen | Density-dependent dispersal in birds and mammals[END_REF]), but not in protists (Holyoak & Lawler 1996a, b). In a mite prey-predator system, [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] found positive intraspecific density-dependent dispersal in the prey, but not in the predator. Conversely, French & Travis (2001) observed density-independent prey dispersal but density-dependent parasitoid dispersal in a beetle-wasp system. A few studies have experimentally explored how dispersal of one species is affected by the density of another species. We refer to this type of dispersal as interspecific density-dependent dispersal. The presence of a predator or parasitoid has enhanced prey dispersal in some insect communities [START_REF] Holler | Enemy-induced dispersal in a parasitic wasp[END_REF][START_REF] Kratz | Effects of Stoneflies on Local Prey Populations: Mechanisms of Impact Across Prey Density[END_REF][START_REF] Wiskerke | Larval parasitoid uses aggregation pheromone of adult hosts in foraging behavior -a solution to the reliability-detectability problem[END_REF]. By contrast, in aquatic ciliates, dispersal of the prey (Colpidium striatum) was not affected by the presence of the predator (Didinium nasutum) (Holyoak, personal communication; Holyoak & Lawler 1996a, b). However, these studies considered predator presence or absence and not predator density. [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] showed with terrestrial mites that prey emigration had a positive relationship with predator density and that predator emigration had a negative relationship with prey density. Similarly, [START_REF] Kratz | Effects of Stoneflies on Local Prey Populations: Mechanisms of Impact Across Prey Density[END_REF] found that a decrease in prey density enhanced predator emigration in aquatic insect larvae. French & Travis (2001) observed a decrease in parasitoid wasp dispersal as prey density increased but no interspecific density-dependent dispersal for the prey. Thus, overall, dispersal seems to be a function of local densities in several experimental models.
However, only two studies [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]French & Travis 2001) have considered the full set of intra-and interspecific effects of density on dispersal in prey-predator systems, in spite of their great interest in the perspective of metacommunity theory. Interspecific density-dependent dispersal in prey may be considered as a predatorinduced defence [START_REF] Lima | Behaviorial decisions made under the risk of predation : a review and prospectus[END_REF]. Other predator-induced responses include morphological changes in vertebrates [START_REF] Kishida | Flexible architecture of inducible morphological plasticity[END_REF] and invertebrates [START_REF] Kuhlmann | The ecology and evolution of inducible defenses[END_REF][START_REF] Tollrian | Inducible defences in cladocera: constraints, costs, and multipredator environments[END_REF]. Predator-induced dispersal suggests that the prey is able to assess the presence of its predator. Several experiments in aquatic systems showed that prey may detect their predator because of organic compounds they release in the medium, for instance Daphnia [START_REF] Lampert | Chemical induction of colony formation in a green alga (Scenedesmus acutus) by grazers (Daphnia)[END_REF][START_REF] Stibor | Predator-induced phenotypic variation in the pattern of growth and reproduction in Daphnia hyalina (Crustacea; Cladocera)[END_REF] and ciliates [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF][START_REF] Kuhlmann | The ecology and evolution of inducible defenses[END_REF]. By contrast, perception in ciliates may require encounter between individuals: two mechanisms have been reported in ciliates: (1) detection of their predators by direct membrane contact [START_REF] Kuhlmann | Escape response of Euplotes octocarinatus to turbellarian predators[END_REF][START_REF] Kusch | Behavioural and morphological changes in ciliates induced by the predator Amoeba proteus[END_REF]), and (2) detection of local hydrodynamic disturbances created by the motion of cilia [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF]. Consequently, interspecific density-dependent dispersal in ciliates may occur through waterborn chemical cues or may require direct contact. Here we explore intra-and interspecific density-dependent dispersal in freshwater protists. These organisms are often patchily distributed in ponds and lakes at the scale of millimetres or centimetres [START_REF] Arlt | Vertical and horizontal microdistribution of the meifauna in the Greifswalder Bodden[END_REF][START_REF] Smirnov | Spatial distribution of gymnamoebae (Rhizopoda, Lobosea) in brackish-water sediments at the scale of centimeters and millimeters[END_REF][START_REF] Taylor | Microspatial heterogeneity in the distribution of ciliates in a small pond[END_REF][START_REF] Wiackowski | Small-scale distribution of psammophilic ciliates[END_REF]). We use a prey-predator couple, in aquatic experimental microcosms under controlled conditions and investigate the effects of population density on dispersal, and address three questions. First, does a species' own density affect its dispersal (intraspecific density-dependent dispersal)? We test this hypothesis for the prey and the predator separately. 
Second, does prey density affect predator dispersal, and does predator density affect prey dispersal (interspecific density-dependent dispersal)? If prey dispersal is positively related to predator density, our third question investigates the effects of predator organic compounds on prey dispersal. In addition, we explore these effects at low and high initial prey density to assess the interaction between prey and predator densities on prey dispersal.

MATERIALS AND METHODS

Study Organisms

Tetrahymena pyriformis Ehrenberg, a bacterivorous protist, and its protist predator Dileptus sp. were obtained from Carolina Biological Supply (Burlington, NC, USA). Prey and predator were cultured in 50 mL microcosms containing medium inoculated with a mixed bacterial suspension. The medium was prepared by sterilizing mineral water with 0.75 g L-1 of Protozoan Pellet (Carolina Biological Supply). Cultures were maintained at 18.0 ± 0.5 °C under controlled light (14:10 h light:dark cycle). One day after bacterial inoculation, each culture was inoculated with 1 mL of T. pyriformis to give about 240 cells mL-1. Three days later, T. pyriformis cultures reached a stationary phase; they were then used to feed Dileptus sp. The same culturing method was used in all experiments. Under our standard culture conditions, the minimal generation times of T. pyriformis and Dileptus sp. were 8.18 h and c. 24 h, respectively (Hauzy C. & Hulot F.D., unpublished data).

Experimental design

To measure dispersal, we used microcosms made of two 100 mL bottles (55 mm internal diameter) connected by a 10 cm tube (5 mm internal diameter). We defined dispersal as migration from a bottle initially containing organisms (donor patch) to a bottle free of organisms (recipient patch). We conducted six independent experiments according to the following design. The tube of each microcosm was initially clamped and donor patches were assigned randomly. Initial densities in all experiments were adjusted by serial dilution in 1-day-old bacterial culture after counting the 3-day-old T. pyriformis and the 1-day-old Dileptus sp. cultures. Counts were done under a binocular microscope in 10 µL drops for T. pyriformis, and 100 µL drops for Dileptus sp. Several drops were examined until a minimum number of 400 individuals was counted. The donor patch received 50 mL of the experimental treatment culture. The recipient patch received 50 mL of standardized 1-day-old bacterial culture. The experiments were initiated by removing the clamp from the tube. Organisms dispersed freely during a time that was shorter than the generation time of the species studied. Treatments were replicated five times, except experiment 5, which was replicated four times. At the end of the experiment, the content of each bottle was fixed with formaldehyde at a final concentration of 0.2%. Because the recipient patches did not contain high population densities, they were concentrated by centrifugation (5 min, 2000 r.p.m., 425 g). Organisms were counted under a binocular microscope in 10 µL drops for T. pyriformis, and 100 µL drops for Dileptus sp. Several drops were examined in accordance with the following two procedures: (1) in experiments 1-4 and 6 (see below) up to 100 or 400 individuals were counted, respectively, and (2) in experiment 5, individuals were counted in 800 µL.
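As an illustration of how such drop counts translate into the densities reported in the experiments, the short sketch below (with hypothetical numbers, not data from the study) converts counts made in drops of known volume into cells per mL and derives the serial dilution factor needed to reach a target density.

# Sketch: converting counts made in small drops of known volume into a density
# (cells per mL), as done when adjusting initial densities by serial dilution.
# The numbers below are illustrative, not data from the experiments.

def density_cells_per_ml(total_count, n_drops, drop_volume_ul):
    """Density = total cells counted / total volume examined, expressed per mL."""
    total_volume_ml = n_drops * drop_volume_ul / 1000.0   # 1 mL = 1000 uL
    return total_count / total_volume_ml

def dilution_factor(stock_density, target_density):
    """Fold-dilution of the stock culture needed to reach the target density."""
    return stock_density / target_density

# Example: 400 cells counted over eight 10-uL drops of a T. pyriformis culture.
stock = density_cells_per_ml(total_count=400, n_drops=8, drop_volume_ul=10)
print(stock)                                         # 5000 cells per mL
print(dilution_factor(stock, target_density=1270))   # about 3.9-fold dilution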
Dispersal was measured by the dispersal rate per capita and per generation, and was calculated as the ratio of the density of the focal species in the recipient patch at the end of the experiment to its initial density in the donor patch. Initial, not final, density in the donor patch was used to avoid the potentially confounding factor of prey depletion in experiments testing prey dispersal in the presence of the predator (see experiments 4, 5 and 6 below). In experiment 1 we tested the effect of T. pyriformis density on its own dispersal in the absence of Dileptus sp. Density treatments corresponded to cultures with 12 700 cells mL-1, 1270 cells mL-1 and 43.1 cells mL-1. The dispersal time was 4 h. In experiment 2 we tested the effect of Dileptus sp. density on its own dispersal. Treatments corresponded to three levels of density: 61.3 cells mL-1, 30.6 cells mL-1 and 15.3 cells mL-1. T. pyriformis density was adjusted to 3.3 cells mL-1 in all treatments. The dispersal time was 18 h. Interspecific density-dependent dispersal Experiment 3 tested the effect of T. pyriformis density on Dileptus sp. dispersal. A Dileptus sp. culture was mixed 50:50 with a T. pyriformis culture of varying density. We obtained three treatments with the same initial Dileptus sp. density (20.8 cells mL-1) but different initial T. pyriformis densities: 5400 cells mL-1, 540 cells mL-1 and 54.0 cells mL-1. The dispersal time was 18 h. Experiment 4 tested the effect of Dileptus sp. density on T. pyriformis dispersal. Cultures with different Dileptus sp. densities were mixed 50:50 with a T. pyriformis culture. T. pyriformis initial density was 1120 cells mL-1 in all treatments, and Dileptus sp. densities were 37.5 cells mL-1, 18.8 cells mL-1 and 9.4 cells mL-1. The dispersal time was 5 h. Mechanism of detection In order to test whether T. pyriformis is able to detect Dileptus sp. via a chemical signal, we compared prey dispersal rate in the presence of the predator (treatment «with»), in a filtered medium of predator culture (treatment «filtered») and in the absence of predator (treatment «without»). This hypothesis was tested independently for two initial T. pyriformis densities (experiment 5: 550 cells mL-1; experiment 6: 6600 cells mL-1). In the treatment «with», we added the Dileptus sp. culture to the T. pyriformis culture (initial density of Dileptus sp. in experiment 5: 63.5 cells mL-1; in experiment 6: 22.1 cells mL-1). In the treatment «filtered», we replaced the Dileptus sp. culture of the treatment «with» with the same Dileptus sp. culture filtered through a 1.2 µm Whatman GF/C filter permeable to chemical compounds and bacteria. In the treatment «without», the T. pyriformis culture was diluted with a 1-day-old bacterial culture. Each treatment was replicated five and four times in experiments 5 and 6, respectively. The dispersal time in both experiments was 8 h. Statistical Analysis Data were analysed with linear (LM) or linear mixed effects models in R version 2.2.0. For experiments 1-4, data were considered as continuous variables whereas data of experiments 5 and 6 were considered categorical. When homoscedasticity of variances (Bartlett's test) was satisfied (experiments 2, 3, 5 and 6), we used the LM procedure. When variances were heteroscedastic (experiments 1 and 4), we used the Generalized Least Squares procedure of the linear mixed effects model, which accounts for heteroscedasticity. The Generalized Least Squares procedure gave the same qualitative results as the LM procedure.
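To make the dispersal metric and the analysis concrete, the following minimal Python sketch (our own illustration, not the authors' original R code; the replicate counts are hypothetical) computes per-capita dispersal rates from donor and recipient densities, checks homoscedasticity with Bartlett's test and fits a linear model of dispersal rate against initial donor density.

import pandas as pd
import scipy.stats as st
import statsmodels.formula.api as smf

# Hypothetical replicate data: initial density in the donor patch and density
# recovered in the recipient patch at the end of the dispersal period (cells/mL).
data = pd.DataFrame({
    "donor_initial":   [12700, 12700, 12700, 1270, 1270, 1270, 43.1, 43.1, 43.1],
    "recipient_final": [1150,  980,   1240,  28,   31,   22,   0.0,  0.5,  0.3],
})

# Dispersal rate: recipient density at the end of the experiment divided by the
# initial density in the donor patch (per capita and per generation).
data["dispersal_rate"] = data["recipient_final"] / data["donor_initial"]

# Bartlett's test for homoscedasticity across density treatments, then an
# ordinary linear model of dispersal rate against initial density.
groups = [g["dispersal_rate"].values for _, g in data.groupby("donor_initial")]
print(st.bartlett(*groups))
print(smf.ols("dispersal_rate ~ donor_initial", data=data).fit().summary())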
Tukey's post hoc tests were used to determine the differences between treatments and groups of treatments. RESULTS In experiment 1, no T. pyriformis individuals could be detected in the recipient patch for three of five replicates of the low density treatment. T. pyriformis density had a strong significant effect on its own dispersal rate (Fig. 1a; t = 4.17, d.f. = 13, P = 0.001). The treatment with the highest density (12 700 cells mL-1), which corresponded to the beginning of the stationary phase, was significantly different (P < 0.001) from the lower density treatments (1270 and 43.1 cells mL-1). Experiment 2 (Fig. 1d) showed no significant effect of Dileptus sp. density on its per capita dispersal rate (F = 2.45, d.f. = 14, P = 0.141). Interspecific density-dependent dispersal In experiment 3 (Fig. 1c), T. pyriformis density had a strong significant effect on Dileptus sp. dispersal rate (F = 7.07, d.f. = 14, P = 0.019). The average Dileptus sp. dispersal rate was significantly higher at the lowest prey density (54.0 cells mL-1) than at higher prey densities (5400.0 cells mL-1 and 540.0 cells mL-1) (P < 0.0001). In experiment 4 (Fig. 1b), the initial T. pyriformis density (1120 cells mL-1) was chosen such that it does not affect its own dispersal rate (see Results of experiment 1). Dileptus sp. density had a strong significant effect on the dispersal rate of its prey (F = 22.28, d.f. = 14, P < 0.001) and the dispersal rate of T. pyriformis was significantly higher at the two highest Dileptus sp. densities (37.5 cells mL-1 and 18.8 cells mL-1) than at the lowest density (9.8 cells mL-1) (P < 0.0001). Mechanism of detection Experiments 5 and 6 were conducted at a predator density that induces prey dispersal (see Results of experiment 4). When the density of T. pyriformis was low (experiment 5), the differences among treatments on T. pyriformis dispersal rate were significant (Fig. 2a; F = 165.4, d.f. = 12, P < 0.001). Tukey's post hoc test indicated that prey dispersal rate in the treatments «filtered» and «with» were significantly higher than in the treatment «without» (P < 0.001). Prey dispersal rate was also significantly higher in the treatment «filtered» than in the treatment «with» (P < 0.005). When initial T. pyriformis density was high (experiment 6), the effects of the treatments «without», «filtered» and «with» on T. pyriformis dispersal rate were marginally significant (Fig. 2b; F = 3.623, d.f. = 9, P = 0.070). Tukey's post hoc test showed that the prey dispersal rate in the treatment «filtered» was marginally higher than in the treatment «without» (P = 0.060). The results of our study suggest that in aquatic prey-predator systems, the dispersal of a species can be a plastic trait that depends on population densities. We observed intraspecific density dependence in dispersal for the prey T. pyriformis. By contrast, there was no significant intraspecific density dependence in dispersal for the predator Dileptus sp. Interspecific density-dependent dispersal was observed for both the prey and the predator. A decrease in T. pyriformis density led to a significant increase in Dileptus sp. dispersal rate, while T. pyriformis dispersal was higher when Dileptus sp. density was higher.
The two previous studies ([START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]; French & Travis 2001) that have exhaustively explored density-dependent dispersal in a prey-predator system revealed two different patterns (Fig. 3). French & Travis (2001) observed that predator dispersal depended on its own density and on prey density, but prey dispersal was density-independent. By contrast, [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] showed interspecific density-dependent dispersal for both the prey and the predator, and intraspecific density-dependent dispersal for the prey only. Our results follow the same pattern as [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]. Thus, only two patterns of density-dependent dispersal in prey-predator systems have received experimental support. An increase in the prey dispersal rate when predator density increases suggests that the prey is able to detect its predator and avoid it. Studies on ciliates' perception have shown that two different detection mechanisms are possible: recognition through chemical cues released in the medium [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF][START_REF] Kuhlmann | The ecology and evolution of inducible defenses[END_REF][START_REF] Seravin | Feeding Behavior of Unicellular Animals. I. The Main Role of Chemoreception in the Food Choise of Carnivorous Protozoa[END_REF] and recognition that requires direct contact [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF][START_REF] Kuhlmann | Escape response of Euplotes octocarinatus to turbellarian predators[END_REF][START_REF] Kusch | Behavioural and morphological changes in ciliates induced by the predator Amoeba proteus[END_REF]. Our results suggest that the prey is able to detect its predator through chemical cues. At a low initial prey density, prey dispersal was significantly higher when prey was in the presence of predators or in the presence of a filtered medium of predator cultures than in the control. At a high initial prey density, prey dispersal was marginally higher when prey was in the presence of a predator-filtered culture than in the control or in the presence of the predator. The difference in prey dispersal between the predator-filtered culture and the predator culture may be a result of prey depletion by the predator in the latter treatment. Two hypotheses may explain the discrepancy between the experiments at low and high densities. First, at a low initial prey density (550 cells mL-1), there is no effect of prey density on its own dispersal (see experiment 1, Fig. 1a). The dispersal observed in the presence or simulated presence of the predator is only due to the predator. By contrast, at a high initial prey density (6600 cells mL-1), prey density may have an effect on its own dispersal. Therefore, in the absence of the predator, prey dispersal is high and the effect of a predator (whether real or simulated) on dispersal is reduced in comparison with the prey's intraspecific density effect. This result suggests an upper bound on prey dispersal.
Second, the discrepancy between the two experiments might be a consequence of different predator densities in experiments 5 (63.5 cells mL-1) and 6 (22.1 cells mL-1). However, these two densities are both in the range of predator densities that induce prey dispersal (see experiment 4, Fig. 1b). Therefore the latter hypothesis is not supported by our data. Implications for prey-predator metacommunities In a seminal paper, [START_REF] Huffaker | Experimental studies on predation : dispersion factors and predatorprey oscillations[END_REF] showed that prey-predator interactions persist longer in a large fragmented landscape than in a small fragmented landscape or isolated patches. His experiment stimulated theoretical studies that have explicitly addressed the role of spatial heterogeneity in the persistence of prey-predator interactions that are prone to extinction when isolated (de Roos 1991; Hassell 1991; Sabelis 1988, 1991). Several experimental studies showed that individuals' migration between local communities allows regional persistence because of the asynchrony of local dynamics (Holyoak & Lawler 1996a;[START_REF] Janssen | Metapopulation dynamics of a persisting predator-prey system in the laboratory: Time series analysis[END_REF][START_REF] Taylor | Metapopulations, Dispersal, and Predator-Prey Dynamics: An Overview[END_REF][START_REF] Van De Klashorst | A demonstration of asynchronous local cycles in an acarine predator-prey system[END_REF]). Theoretical studies focused on the essential role of dispersal intensity in prey-predator metacommunities ([START_REF] Crowley | Dispersal and the stability of predator-prey interactions[END_REF]; Nachman 1987a, b;[START_REF] Reeve | Environmental variability, migration, and persistence in host-parasitoid systems[END_REF][START_REF] Zeigler | Persistence and patchiness of predator-prey systems induced by discrete event population exchange mechanisms[END_REF]). These models (reviewed in Holyoak & Lawler 1996a, b) predict that an intermediate dispersal level of prey and predator enables metacommunity persistence. A low dispersal rate reduces the probability of recolonization of locally extinct patches and cannot prevent local extinctions, whereas a high dispersal rate tends to synchronize local dynamics [START_REF] Brown | Turnover Rates in Insular Biogeography : Effect of Immigration on Extinction[END_REF][START_REF] Levins | Extinction. In Some mathematical questions in biology[END_REF][START_REF] Yodzis | The Indeterminacy of Ecological Interactions as Perceived through Perturbation Experiments[END_REF]. Experiments have confirmed that moderate dispersal extends the persistence of prey-predator systems (Holyoak & Lawler 1996a, b). However, in these theoretical studies the dispersal ability of species from one patch to another is regarded as an unconditional process described by a single parameter. Our results add to the body of experiments ([START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]; French & Travis 2001; for review see Matthysen 2005) that show that dispersal is density-dependent, and hence that regional processes depend upon local population dynamics.
This strong interaction between local and regional processes is likely to affect the dynamics and stability of communities and metacommunities. Recent models that incorporate density-dependent dispersal behaviour show different community-level effects of dispersal (reviewed in [START_REF] Bowler | Causes consequences of animal dispersal strategies: relating individual behavior to spatial dynamics[END_REF]). Most of these models explored the effects of intraspecific density-dependent dispersal on the stability of single-species metapopulations. Models that incorporate positive density-dependent dispersal behaviour, as we showed here for T. pyriformis, find a stabilizing effect of dispersal on population dynamics, whereas models with simpler dispersal rules do not observe stabilizing effects [START_REF] Janosi | On the Evolution of Density Dependent Dispersal in a Spatially Structured Population Model[END_REF][START_REF] Ruxton | Fitness-dependent dispersal in metapopulations and its consequences for persistence and synchrony[END_REF]; but see [START_REF] Ruxton | Density-dependent migration and stability in a system of linked populations[END_REF]. Other models have shown that the form of the relationship between dispersal and density is important for predicting its consequences for stability [START_REF] Amarasekare | Interactions between local dynamics and dispersal: Insights from single species models[END_REF][START_REF] Ruxton | Density-dependent migration and stability in a system of linked populations[END_REF][START_REF] Ylikarjula | Effects of Patch Number and Dispersal Patterns on Population Dynamics and Synchrony[END_REF]. The effects of interspecific density-dependent dispersal on the stability of prey-predator metacommunities are still unclear. French & Travis (2001) parameterized a model and found no differences in species persistence and community dynamics between a fixed mean dispersal and interspecific density-dependent dispersal for the predator (parasitoid). By contrast, taking into account intra- and interspecific density-dependent dispersal improves the ability of prey-predator metacommunity models to predict metacommunity dynamics in experiments ([START_REF] Bernstein | A Simulation Model for an Acarine Predator-Prey System (Phytoseiulus persimilis-tetranychus urticae)[END_REF][START_REF] Ellner | Habitat structure and population persistence in an experimental community[END_REF]; Nachman 1987a, b). Thus, density-dependent dispersal may be fundamental for our understanding of prey-predator metacommunity dynamics. At present, several questions remain unanswered. Is there an interaction between the effects of intra- and interspecific density-dependent dispersal on prey-predator metacommunities? Do different density-dependent dispersal patterns (Fig. 3) have different effects at the metacommunity level? What are the implications of the interaction between local and regional processes for conservation and biological control? Our microcosm experiments demonstrate that the dispersal of prey and predator protists can depend on both intra- and interspecific density. Our results may be fundamental and general because they were obtained with relatively simple organisms (unicellular eukaryotes). We further show that prey can detect predator presence through organic compounds that the predator releases in the medium.
Therefore chemical signals among organisms may play an important role in species dispersal, and density-dependent dispersal may be a pivotal process in metacommunity dynamics. Understanding and testing the effects of density-dependent dispersal on metacommunity dynamics is a challenge for future studies.
Figure 1. Effects of (a) Tetrahymena pyriformis density and (b) Dileptus sp. density on T. pyriformis dispersal rate, and effects of (c) T. pyriformis density and (d) Dileptus sp. density on Dileptus sp. dispersal (mean ± 1 SE). Letters indicate significant differences in dispersal rate among density treatments.
Figure 2. Tetrahymena pyriformis detects Dileptus sp. presence through chemical cues (mean ± 1 SE). (a) Low initial density of T. pyriformis; (b) high initial density of T. pyriformis. Letters indicate significant differences in dispersal rate among treatments.
Figure 3. Density-dependent dispersal patterns in prey-predator systems. Arrows indicate positive (+) or negative (-) significant effect of density on dispersal observed in (a) French & Travis (2001), and in (b) [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] and the present experiments.
ACKNOWLEDGEMENTS We thank M. Huet for her help in maintaining the protist cultures and T. Tully for advice on the statistical analysis. We thank G. Lacroix, S. Leroux and anonymous reviewers for their remarks, which helped improve the manuscript. C.H. thanks B. Boublil for his constant support.
32,480
[ "1178157", "178300" ]
[ "31573", "31573", "33172" ]
00153592
en
[ "phys" ]
2024/03/05 22:32:13
2007
https://hal.science/hal-00153592v2/file/chpcr.pdf
Sylvain Landron Marie-Bernadette Lepetit The crucial importance of the t2g-eg hybridization in transition metal oxides We studied the influence of the trigonal distortion of the regular octahedron along the (111) direction, found in the CoO2 layers. Under such a distortion the t2g orbitals split into one a1g and two degenerate e′g orbitals. We focused on the relative order of these orbitals. Using quantum chemical calculations of embedded clusters at different levels of theory, we analyzed the influence of the different effects not taken into account in the crystalline field theory; that is, metal-ligand hybridization, long-range crystalline field, screening effects and orbital relaxation. We found that none of them are responsible for the relative order of the t2g orbitals. In fact, the trigonal distortion allows a mixing of the t2g and eg orbitals of the metallic atom. This hybridization is at the origin of the a1g-e′g relative order and of the incorrect prediction of the crystalline field theory. I. INTRODUCTION Since the discovery of super-conductivity in the hydrated Na0.35CoO2-1.3H2O 1 compound and of the very large thermopower in the Na0.7±δCoO2 2 members of the same family, the interest of the community in systems built from CoO2 layers has exploded. The first step in the understanding of the electronic properties of transition metal oxides, such as the CoO2-based compounds, is the analysis of the crystalline field splitting of the d orbitals of the transition metal atom. Indeed, depending on this splitting, the spin state of the atom, the nature of the Fermi level orbitals, and thus the Fermi level properties will differ. The CoO2 layers are built from edge-sharing CoO6 octahedra (see figure 1). In these layers, the first coordination shell of the metal atom differs from the regular octahedron by a trigonal distortion along the three-fold (111) axis (see figure 6). In all known materials (whether cobalt oxides or other metal oxides such as LiVO2, NaTiO2, NaCrO2, etc.), this distortion is in fact a compression. The local symmetry group of the metal atom is lowered from Oh to D3d. The T2g irreducible representation of the Oh group is thus split into one Eg and one A1g representation. The relative energies of the resulting e′g and a1g orbitals (see figure 6) have been a subject of controversy in the recent literature, as far as the low spin Co4+ ion is concerned. At this point let us point out the crucial importance of the knowledge of this energetic order for the understanding of the low energy properties of the CoO2 layers. Indeed, the possible existence of an orbital order, as well as the minimal model pertinent for the description of these systems, depend on this order. Authors such as Maekawa 3, following the crystalline field theory, support the view that the a1g orbital is of lower energy than the two degenerate eg ones, leading to an orbital degeneracy for the Co4+ ion. On the contrary, ab initio calculations, both using periodic density functional methods 4 and local quantum chemical methods for strongly correlated systems 5, yield an a1g orbital of higher energy than the e′g ones, and a non-degenerate Fermi level of the Co4+ ion. Angle Resolved Photoemission Spectroscopy (ARPES) experiments were performed on several CoO2 compounds 6.
This technique probes the Fermi surface and clearly shows that the Fermi surface of the CoO2 layers derives from the a1g orbitals, and not at all from the e′g orbitals (orbitals of Eg symmetry, issued from the former t2g orbitals), supporting the ab initio results. In the present work, we will try to understand the reasons why the crystalline field model is unable to predict the correct energetic order of the t2g orbitals in such trigonal distortions. Several hypotheses can be made to explain the orbital order: the delocalization of the metal 3d orbitals toward the ligands, the fact that the electrostatic potential of the whole crystal differs from the one assumed in the crystalline field model, the correlation effects within the 3d shell, the screening effects, etc. All these hypotheses will be specifically tested on the Co4+ (3d5) ion, which is subject in this work to a more thorough study than other metal fillings. Nevertheless, other metal fillings (3d1 to 3d3, found for instance in vanadium, titanium or chromium oxides) will also be studied. We will see the crucial importance of the band filling on the t2g orbital order. In this work we will focus only on the Oh to D3d trigonal distortion, the subject of the controversy. The next section will present the method used in this work, sections three and four will report and analyze the calculations, and finally the last section will be devoted to the conclusion. II. COMPUTATIONAL METHOD AND DETAILS The energy of the atomic 3d orbitals is an essentially local value, as supposed in the crystalline field model. However, its analysis exhibits some non-local contributions. Indeed, orbital energies can be seen as resulting from the following terms: • the electrostatic potential due to the first coordination shell -- in the present case, the six oxygen atoms of the octahedron, further referred to as nearest neighbor oxygens (NNO) --, • the electrostatic potential due to the rest of the crystal, • the kinetic energy that includes the hybridization of the metal orbitals with nearest neighbor ligands, • the Coulomb and exchange contributions within the 3d shell, • the radial relaxation of the 3d orbitals, • and finally the virtual excitations from the other orbitals that are responsible for the screening effects. All these contributions, except for the electrostatic potential due to the rest of the crystal (nucleus attractions and Coulomb interactions), are essentially local contributions 7 and known to decrease very rapidly with the distance to the metal atom. In fact, they are mostly restricted to the first coordination shell of the cobalt. On the contrary, the Madelung potential retains the resulting non-local contributions from the nucleus attraction and the Coulomb electron-electron repulsion. It is known to be very slowly convergent with the distance. We thus made calculations at different levels, including first all the above effects, and then excluding them one at a time, in order to end up with the sole effects included in the crystalline field model. The calculations will thus be done on CoO6 or Co fragments. Different embeddings and different levels of calculation will be used. The Co-O distance will be fixed to the value of the super-conducting compound, i.e. R(Co-O) = 1.855 Å. The angle θ between the Co-O direction and the z axis (see figure 6) will be varied from 0 to 90°.
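To fix ideas about this geometry, the short Python sketch below (our own illustration, not part of the published computational set-up) generates the Cartesian coordinates of the six oxygen sites, or of the -2 point charges replacing them, for a given Co-O distance and trigonal angle θ, taking the z axis along the three-fold (111) axis with the two oxygen triangles staggered by 60°.

import numpy as np

def ligand_positions(r_mo: float, theta_deg: float) -> np.ndarray:
    """Six ligand positions for a trigonally distorted octahedron.

    The metal sits at the origin, z is the three-fold (111) axis and theta is
    the angle between each metal-ligand bond and z.  theta = arccos(1/sqrt(3))
    (about 54.74 deg) recovers the regular octahedron.
    """
    theta = np.radians(theta_deg)
    positions = []
    for k in range(3):                       # upper oxygen triangle
        phi = 2.0 * np.pi * k / 3.0
        positions.append([r_mo * np.sin(theta) * np.cos(phi),
                          r_mo * np.sin(theta) * np.sin(phi),
                          r_mo * np.cos(theta)])
    for k in range(3):                       # lower triangle, staggered by 60 deg
        phi = 2.0 * np.pi * k / 3.0 + np.pi / 3.0
        positions.append([r_mo * np.sin(theta) * np.cos(phi),
                          r_mo * np.sin(theta) * np.sin(phi),
                          -r_mo * np.cos(theta)])
    return np.array(positions)

# Example: regular octahedron versus the compressed geometry discussed here.
print(ligand_positions(1.855, np.degrees(np.arccos(1.0 / np.sqrt(3.0)))))
print(ligand_positions(1.855, 61.5))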
The calculations will be done at the Complete Active Space Self Consistent Field + Difference Dedicated Configurations Interaction 8,9 (CASSCF+DDCI, see subsection II A) level for the most involved case, using the core pseudopotential and basis set of Barandiaran et al. 10. The fragment used will include all the first coordination oxygens in addition to the cobalt atom. The embedding will be designed so as to properly represent the full Madelung potential of the super-conducting material, and the exclusion effects of the rest of the crystal on the computed fragment electrons (see reference 5 for further details). For the simplest case a minimal basis set derived from the preceding one will be used and only the cobalt atom will be included in the computed fragment. The effect of the crystalline field will be described by -2 point charges located at the positions of the first coordination shell oxygens. The calculations will be done at the CASSCF level only. Between these two extreme cases, several intermediate ones will be considered, in order to check the previously enumerated points. The electrostatic potential due to the cobalt first oxygen neighbors (NNO), as well as the unscreened Coulomb and exchange contributions within the 3d shell, are included in all calculations. The electrostatic potential is treated either through the inclusion of the NNO in the computed fragment or through -2 point charges. The Coulomb and exchange contributions are treated through the CASSCF calculation. The electrostatic contribution of the rest of the crystal is included only in the most involved calculations, using an appropriate embedding of point charges and Total Ions pseudo-Potential 11. The hybridization of the metal 3d orbitals is treated by explicitly including the NNO in the considered fragment (CoO6). The radial relaxation of the 3d orbitals is treated when extended basis sets are used. When a minimal basis set is used, the radial part of the orbitals is frozen as in the high spin state of the isolated Co4+ ion. Finally, the screening effects are treated only when the calculation is performed at the CASSCF+DDCI level. A. The CASSCF and DDCI methods Let us now briefly describe the CASSCF and DDCI ab initio methods. These methods are configuration interaction (CI) methods, that is, exact diagonalization methods within a selected set of Slater determinants. These methods were specifically designed to treat strongly correlated systems, for which there is no qualitative single-determinant description. The CASSCF method treats exactly all correlation and exchange effects within a selected set of orbitals (here the 3d shell of the cobalt atom). The DDCI method treats in addition the excitations responsible for the screening effects on the exchange, repulsion, hopping, etc. integrals. These methods are based on the partitioning of the fragment orbitals into three sets: the occupied orbitals, which are always doubly occupied in all determinants of the Complete Active Space or CAS (here the cobalt inner electrons and the NNO ones); the active orbitals, which can have all possible occupations and spins in the CAS (here the cobalt 3d orbitals); and the virtual orbitals, which are always empty in the CAS. The CASCI method is the exact diagonalization within the above-defined Complete Active Space. The CASSCF method optimizes in addition the fragment orbitals in order to minimize the CASCI wave function energy.
This is a mean-field method for the occupied orbitals, but all the correlation effects within the active orbitals are taken into account. Finally, the DDCI method uses a diagonalization space that includes the CAS and all single- and double-excitations on all determinants of the CAS, except the ones that excite two occupied orbitals into two virtual orbitals. Indeed, such excitations can be shown not to contribute -- at the second order of perturbation -- to the energy differences between states that differ essentially by their CAS wave function. Therefore, they have little importance for the present work. The DDCI method thus accurately treats both the correlation within the CAS and the screening effects. Compared to the very popular density functional methods, the CAS+DDCI method presents the advantage of treating exactly the correlation effects within the 3d shell. This is an important point for strongly correlated materials such as the present ones. Indeed, even if the DFT methods should be exact provided the correct exchange-correlation functional is known, the present functionals work very well for weakly correlated systems but encounter more difficulties with strong correlation effects. For instance, the LDA approximation finds most of the sodium cobaltite compounds ferromagnetic 4, in contradiction with experimental results. LDA+U functionals try to correct these problems by using an ad hoc on-site repulsion, U, within the strongly correlated shells. This correction yields better results; however, it treats the effect of the repulsion within a mean-field approximation, still lacking a proper treatment of the strong correlation. The drawbacks of the CAS+DDCI method compared to the DFT methods are its cost in terms of CPU time and the necessity to work on formally finite and relatively small systems. In the present case, however, this drawback appears to be an advantage since it decouples the local quantities under consideration from the dispersion problem. III. RESULTS AND ANALYSIS Let us first draw the reader's attention to what should be meant by the energy difference between the e′g and a1g orbitals of the Co4+ ion in an effective model. In fact, the pertinent parameters of an effective model should be such that they reproduce the exact energies or, in the present case, the ab initio energies of the different Co4+ atomic states. It follows that, within a Hubbard-type model, the pertinent effective orbital energies should obey the following set of equations:
E(|a_{1g}\rangle) = 4\,\varepsilon(e'_g) + \varepsilon(a_{1g}) + 2U + 8U' - 4J_H
E(|e'_g\rangle) = 3\,\varepsilon(e'_g) + 2\,\varepsilon(a_{1g}) + 2U + 8U' - 4J_H
\Delta E = E(|e'_g\rangle) - E(|a_{1g}\rangle) = \varepsilon(a_{1g}) - \varepsilon(e'_g)
where the schematic picture of the |e′g⟩ and |a1g⟩ states is given in figure 3, ε(e′g) and ε(a1g) are the effective orbital energies of the e′g and a1g atomic orbitals, U is the effective electron-electron repulsion of two electrons in the same cobalt 3d orbital, U′ the effective repulsion of two electrons in different cobalt 3d orbitals, and J_H the atomic Hund's exchange effective integral within the cobalt 3d shell. |e′g⟩ is doubly degenerate, the hole being located either on the e′g1 or on the e′g2 orbital.
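As a quick consistency check of this bookkeeping (our own sketch; the symbols simply mirror the expressions above), one can verify symbolically that the repulsion and exchange terms cancel in the difference, leaving only the effective orbital energies.

import sympy as sp

eps_a1g, eps_eg, U, Up, JH = sp.symbols("eps_a1g eps_egp U Uprime J_H", real=True)

# |a1g> state: configuration (e'g)^4 (a1g)^1 ; |e'g> state: (e'g)^3 (a1g)^2.
# Both states carry the same interaction terms 2U + 8U' - 4J_H in this counting.
E_a1g_state = 4 * eps_eg + 1 * eps_a1g + 2 * U + 8 * Up - 4 * JH
E_eg_state = 3 * eps_eg + 2 * eps_a1g + 2 * U + 8 * Up - 4 * JH

# The state splitting reduces to the difference of effective orbital energies.
print(sp.simplify(E_eg_state - E_a1g_state))   # -> eps_a1g - eps_egp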
A. The reference calculation The reference calculation includes all effects detailed in the preceding section. For the super-conducting compound the effective t2g splitting was reported in reference 5 to be ∆E = ε(a1g) - ε(e′g) = 315 meV. This point corresponds to θ ≃ 61.5° (that is, a value of θ larger than the one of the regular octahedron, θ0 ≃ 54.74°), where the crystalline field theory predicts a reverse order between the t2g orbitals. B. Screening effects The effect of the screening on the t2g orbital splitting can be evaluated by doing a simple CASCI calculation using the same fragment, embedding, basis set and orbitals as the preceding calculation. Without the screening effects, one finds a t2g splitting of ∆E = ε(a1g) - ε(e′g) = 428 meV. Obviously the screening effects cannot be taken as responsible for the qualitative energetic order between the a1g and e′g orbitals. C. Cobalt 3d - oxygen hybridization The effect of the hybridization of the cobalt 3d orbitals with the neighboring oxygen ligands can be evaluated by taking the oxygen atoms out of the quantum cluster and treating them as simple -2 point charges at the atomic locations. The other parameters of the calculation are kept as in the preceding case. The new orbitals are optimized at the average-CASSCF level between the two |e′g⟩ and the |a1g⟩ states. It results in a t2g splitting of ∆E = ε(a1g) - ε(e′g) = 40 meV for the super-conducting compound. Again, the hybridization of the cobalt 3d orbitals with the neighboring oxygens cannot be taken as responsible for the inversion of the splitting between the a1g and e′g orbitals. D. Long-range electrostatic potential The effect of the long-range electrostatic potential can be evaluated by restricting the embedding to the NNO point charges only, that is, to the electrostatic potential considered in the crystalline field method. One finds a t2g splitting of ∆E = ε(a1g) - ε(e′g) = 124 meV. Once again the result is positive and thus the long-range electrostatic potential is not the cause of the crystalline field inversion of the t2g splitting. E. Orbital radial relaxation At this point only a few effects on top of the crystalline field theory are still treated in the calculation. One of them is the radial polarization effect of the 3d orbitals, which allows their adaptation to the different occupations in the specific |a1g⟩ and |e′g⟩ states. This polarization is due to the use of an extended basis set. We thus reduce the basis set to a minimal basis set (only one orbital degree of freedom per (n, l) occupied or partially occupied atomic shell). The minimal basis set was obtained by contraction of the extended one; the radial part of the orbitals was frozen as that of the isolated Co4+ high spin state. This choice was made in order to keep a basis set as close as possible to the extended one, and because only for the isolated atom are all 3d orbitals equivalent and thus have the same radial part. One obtains in this minimal basis set a t2g splitting of ∆E = ε(a1g) - ε(e′g) = 41 meV. At this point we computed the effective orbital energies under the sole crystalline field conditions; however, the result is still the reverse of what is usually admitted within this approximation. Indeed, the Co4+ ion was computed in the sole electrostatic field of the NNO, treated as -2 point charges, and the calculation was done within a minimal basis set, at the average-CASSCF level.
F. Further analysis In order to understand this puzzling result, we plotted the whole curve ∆E(θ) (see figure 4) at this level of calculation and analyzed separately all energetic terms involved in this effective orbital energy difference. One sees in figure 4 that the ∆E(θ) curve is not monotonic, contrary to what is expected from the crystalline field theory. Indeed, while for θ = 0 the relative order between the a1g and e′g orbitals is in agreement with the crystalline field predictions, for θ = 90° the order is reversed. One should also notice that, in addition to the θ0 value of the regular octahedron, there is another value of θ for which the three t2g orbitals are degenerate. In the physically realistic region of the trigonal distortion (around the regular octahedron θ0 value) the relative order between the a1g and e′g orbitals is reversed compared to the crystalline field predictions. Let us now decompose ∆E(θ) into its two-electron part within the 3d shell, ∆E2(θ), and the rest, referred to as the 3d single-electron part, ∆E1(θ). ∆E1 includes the kinetic energy, the electron-nucleus and electron-charge interaction, and the interaction of the 3d electrons with the inner-shell electrons.
FIG. 4: Orbital splitting between the a1g and e′g orbitals when only the nearest-neighbor ligand electrostatic field is included. The dotted red curve corresponds to the single-electron part of the orbital energy difference, ∆E1, that is the kinetic energy (equation (1)), the electron-charge interaction (equation (2)) and the interaction with the core electrons (equation (3)). The dashed green curve corresponds to the two-electron part of the orbital energy difference, ∆E2, that is the repulsion and exchange terms within the 3d shell (equation (4)). The solid vertical line points out the regular octahedron θ value and the dashed vertical line the θ value for the super-conducting compound.
One thus has ∆E = ∆E1 + ∆E2 = ε(a1g) - ε(e′g1) = ε(a1g) - ε(e′g2), with
\Delta E_1 = \langle a_{1g}|-\tfrac{\nabla^2}{2}|a_{1g}\rangle - \langle e'_g|-\tfrac{\nabla^2}{2}|e'_g\rangle \quad (1)
\; + \; \langle a_{1g}|\sum_N \tfrac{-Z_N}{R_N}|a_{1g}\rangle - \langle e'_g|\sum_N \tfrac{-Z_N}{R_N}|e'_g\rangle \quad (2)
\; + \; \sum_{\chi:\,occ}\Big[2\langle a_{1g}\,\chi|\tfrac{1}{r_{12}}|a_{1g}\,\chi\rangle - \langle a_{1g}\,\chi|\tfrac{1}{r_{12}}|\chi\,a_{1g}\rangle\Big] - \sum_{\chi:\,occ}\Big[2\langle e'_g\,\chi|\tfrac{1}{r_{12}}|e'_g\,\chi\rangle - \langle e'_g\,\chi|\tfrac{1}{r_{12}}|\chi\,e'_g\rangle\Big] \quad (3)
and
\Delta E_2 = \langle a_{1g}\,a_{1g}|\tfrac{1}{r_{12}}|a_{1g}\,a_{1g}\rangle - \langle e'_g\,e'_g|\tfrac{1}{r_{12}}|e'_g\,e'_g\rangle + 2\Big[\langle a_{1g}\,e'_g|\tfrac{1}{r_{12}}|a_{1g}\,e'_g\rangle - \langle a_{1g}\,e'_g|\tfrac{1}{r_{12}}|e'_g\,a_{1g}\rangle\Big] \quad (4)
where the equations are given in atomic units. Z_N refers to the nuclear charge of the cobalt atom and to the -2 point charges located at the NNO positions, R_N is the associated electron-charge distance, and the sum on χ runs over all the orbitals of the cobalt inner shells. Let us now examine the dependence on θ of each of the terms of ∆E1 and ∆E2. Kinetic energy: the radial part of each of the 3d orbitals being identical due to the minimal basis set restriction, the kinetic part is identical for all 3d orbitals and thus its contribution to ∆E1 (term labeled (1)) vanishes. Nuclear interaction: obviously this contribution to ∆E1 (term labeled (2)) strongly depends on θ through the position of the -2 charges. Interaction with the inner-shell electrons: this term (term labeled (3)) depends only on the shape of the t2g and inner-shell orbitals. However, the minimal basis set does not leave any degree of freedom for the relaxation of the inner-shell orbitals, whose shapes are thus independent of θ. Similarly, the radial part of the 3d orbitals is totally frozen.
∆E2: finally, the dependence of ∆E2 can only go through the shape of the a1g and e′g orbitals, whose radial part is totally frozen due to the use of a minimal basis set. If one accepts that the a1g and e′g orbitals are issued from the t2g orbitals of the regular octahedron, their angular form is totally given by the symmetry (see eqs. (5) and (6)) and both ∆E2 and the third contribution to ∆E1 should be independent of θ.
e_g: \quad e^{\circ}_{g1} = \tfrac{1}{\sqrt{3}}\,d_{xy} + \tfrac{\sqrt{2}}{\sqrt{3}}\,d_{xz}, \qquad e^{\circ}_{g2} = \tfrac{1}{\sqrt{3}}\,d_{x^2-y^2} + \tfrac{\sqrt{2}}{\sqrt{3}}\,d_{yz} \quad (5)
t_{2g}: \quad a^{\circ}_{1g} = d_{z^2}, \qquad e^{\circ\prime}_{g1} = \tfrac{\sqrt{2}}{\sqrt{3}}\,d_{xy} - \tfrac{1}{\sqrt{3}}\,d_{xz}, \qquad e^{\circ\prime}_{g2} = \tfrac{\sqrt{2}}{\sqrt{3}}\,d_{x^2-y^2} - \tfrac{1}{\sqrt{3}}\,d_{yz} \quad (6)
where the x, y and z coordinates are respectively associated with the a, b and c crystallographic axes. Figure 4 displays both the ∆E1 (dotted red curve) and ∆E2 (dashed green curve) contributions to ∆E. One sees immediately that ∆E2 is not at all independent of θ but rather monotonically increasing with θ. It results that the above hypothesis of a t2g-exclusive origin for the e′g orbitals is not valid. Indeed, away from the θ = θ0 point, the only orbital perfectly defined by the symmetry is the a1g orbital. The e′g and eg orbitals belong to the same irreducible representation (Eg) and can thus mix despite the large t2g-eg energy difference. Naming the corresponding mixing angle α, Figure 5 displays α as a function of θ. One sees that the t2g-eg hybridization angle α is nonzero (except for the regular octahedron) and a monotonic, increasing function of θ. Even if very small (±0.6°), this t2g-eg hybridization has an important energetic effect, since it lowers the e′g orbital energy while increasing the eg one. α is very small but it modulates large energetic factors in ∆E2: the on-site Coulomb repulsions of two electrons in the 3d orbitals. The result is a monotonic increasing variation of ∆E2 as a function of θ. The variation of the ∆E1 term is dominated by its nuclear interaction part and exhibits a monotonic decreasing variation as a function of θ, as expected from the crystalline field theory. The nuclear interaction and the t2g-eg hybridization thus have opposite effects on the a1g-e′g splitting. The failure of the crystalline field theory thus comes from not considering the t2g-eg hybridization. In the calculations presented in figures 4 and 5, the screening effects on the on-site Coulomb repulsions and exchange integrals were not taken into account. Thus, the absolute value of ∆E2 as a function of the hybridization α is very large and α is very small. When the screening effects are properly taken into account, the absolute value of ∆E2 as a function of α is reduced by a factor of about 6, and the t2g-eg hybridization is much larger than the values presented in figure 5. Indeed, in the superconducting compound, for a realistic calculation including all effects, one finds α ≃ 13° (θ = 61.5°). At this point we would like to compare the a1g-e′g splitting found in the present calculations with the one found using DFT methods. Indeed, our splitting (315 meV for the superconducting compound) is larger than the DFT evaluations (always smaller than 150 meV). This point can be easily understood using the single-electron and two-electron part analysis presented above. Indeed, while the single-electron part is perfectly treated in DFT calculations, the two-electron part is treated within the exchange-correlation kernel.
However, these kernels are well known to fail to properly reproduce the strong correlation effects present in the open 3d shells of transition metals. One thus expects that, while the single-electron part of the atomic orbital energies is well treated, the two-electron part is underestimated, resulting in an under-evaluation of the a1g-e′g splitting, as can be clearly seen from figure 4. IV. OTHER CASES We considered up to now a Co4+ ion, that is, five electrons in the 3d shell, and a fixed metal-ligand distance, R(M-O). Let us now examine the effect of the distance R(M-O) and of the band filling on the a1g-e′g splitting. The calculations presented in this section follow the same procedure as in sections III E and III F. For the different fillings, a typical example in the transition metal oxide family was used to define the type of metallic atom and the metal-oxygen distance. Minimal basis sets issued from the full contraction of the basis set given in reference 10 will be used. A. The effect of the Co-O distance Figure 6 displays the a1g-e′g energy splitting as a function of the distortion angle θ and for different distances. The range of variation, from 1.8 Å to 1.95 Å, includes all physically observed distances in CoO2 layers. One sees immediately that, despite the large variation of the metal-ligand distance, the relative order of the a1g and e′g orbitals remains identical. The main effect of R(M-O) is thus to renormalize the amplitude of the splitting, lowering the splitting for larger distances and increasing it for smaller ones. B. 3d1 The simplest filling case corresponds to only one electron in the 3d shell. This is, for instance, the case of the NaTiO2 compound. The calculations were done using the average Ti-O distance found in NaTiO2 12: R(Ti-O) = 2.0749 Å. In this case, ∆E2 = 0 and ∆E(θ) = ∆E1(θ) behaves as pictured in figure 4. The a1g orbital is of lower energy than the e′g ones for θ > θ0 and of higher energy for θ < θ0. This result is in perfect agreement with the crystalline field theory. C. 3d2 A simple example of the 3d2 filling in transition metal oxides is the LiVO2 compound. Indeed, the vanadium atom is in the V3+ ionization state. We thus used a metal-oxygen distance of R(V-O) = 1.9787 Å 13. Figure 7 displays the a1g-e′g splitting as well as its decomposition into single-electron and two-electron parts.
FIG. 7: Orbital splitting between the a1g and e′g orbitals for a 3d2 transition metal. Only the nearest-neighbor ligand electrostatic field is included in the calculation. The dotted red curve corresponds to the single-electron part of the orbital energy difference, ∆E1, that is the kinetic energy (equation (1)), the electron-charge interaction (equation (2)) and the interaction with the core electrons (equation (3)). The dashed green curve corresponds to the two-electron part of the orbital energy difference, ∆E2, that is the repulsion and exchange terms within the 3d shell (equation (4)).
As in the 3d5 case (figure 4), the single-electron and two-electron parts behave in a monotonic way as a function of θ, and in an opposite manner. In the present case, however, the two-electron part always dominates over the one-electron part and the a1g-e′g orbital splitting is always reversed compared to the crystalline field predictions. As for the 3d5 system, there is a slight e′g-eg hybridization that is responsible for the t2g orbital order. D. 3d3 Examples of 3d3 transition metal oxides are easily found among the chromium compounds. Let us take for instance the NaCrO2 system 14. The metal-oxygen distance is thus R(Cr-O) ≃ 1.901 Å. Figure 8 displays the a1g-e′g orbital splitting as well as its decomposition into single- and two-electron parts.
As usual, the single-electron and the two-electron parts are monotonic as a function of θ but with slopes of opposite signs.
FIG. 8: Orbital splitting between the a1g and e′g orbitals for a 3d3 transition metal. Only the nearest-neighbor ligand electrostatic field is included in the calculation. The dotted red curve corresponds to the single-electron part of the orbital energy difference, ∆E1, that is the kinetic energy (equation (1)), the electron-charge interaction (equation (2)) and the interaction with the core electrons (equation (3)). The dashed green curve corresponds to the two-electron part of the orbital energy difference, ∆E2, that is the repulsion and exchange terms within the 3d shell (equation (4)).
This case is quite similar to the 3d5 case since neither the single-electron nor the two-electron part dominates the t2g orbital splitting over the whole range. Indeed, for small values of θ, the crystalline field effect dominates and the a1g orbital is above the e′g ones while, for large values of θ, the two-electron part dominates and the a1g orbital is again above the e′g ones. In a small intermediate region the order is reversed. In the realistic range of θ (θ ≃ θ0) there is a strong competition between the two effects (quasi-degeneracy of the a1g and e′g orbitals) and no simple theoretical prediction can be made. The crystalline field theory is not predictive, but the present calculations cannot be considered as predictive either, since all the neglected effects may reverse the a1g-e′g order. V. DISCUSSION AND CONCLUSION In the present work we studied the validity of the crystalline field theory under the application of a trigonal distortion to the regular octahedron. Under such a distortion, the T2g irreducible representation (irrep) of the Oh group splits into A1g and Eg irreps (T2g → A1g ⊕ Eg), while the eg irrep remains untouched (Eg → Eg). The hybridization between the t2g and eg orbitals thus becomes symmetry-allowed, even if hindered by energetic factors. This hybridization is not taken into account in the crystalline field theory. It is, however, of crucial importance for the relative order between the former t2g orbitals and is the reason for the failure of the crystalline field theory to be predictive. Indeed, due to the t2g-eg orbital hybridization, the two-electron part of the e′g orbital energy becomes dependent on the amplitude of the distortion, with an effect opposite to that of the single-electron part. The relative order of the t2g orbitals thus depends on the competition between these two effects and, as a consequence, on the band filling. In this work we studied the Oh to D3d distortion; however, one can expect similar effects to take place for other distortions of the regular octahedron. The condition for these effects to take place is that the T2g irreducible representation splits into a one-dimensional irrep (A) and the same two-dimensional irrep (E) as the one into which the eg orbitals are transformed: T2g → A ⊕ E, Eg → E. Indeed, under such a distortion, t2g-eg hybridization phenomena are allowed. The distortion should thus transform Oh into sub-groups that keep the C3 (111) symmetry axis: C3, C3v, D3, S6 and D3d.
Examples of such deformations are the elongation of the metal-ligand distance of one of the sets of three symmetry-related ligands, or the rotation of such a set of three ligands around the (111) symmetry axis. For instance, one expects that t2g-eg hybridization will also take place in trigonal prismatic coordination. However, in real systems like the sodium cobaltites, these distortions do not usually appear alone but rather coupled. For instance, in the squeezing of the metal layer between the two oxygen layers observed as a function of the sodium content in NaxCoO2, the Co-O bond length and the three-fold trigonal distortion are coupled. Since this combined distortion belongs to the above-cited class, the t2g-eg hybridization will take place and the relative orbital order between the a1g and e′g orbitals will be qualitatively the same as in figure 4. The bond-length modification at equal distortion angle θ will only change the quantitative value of the orbital splitting, but not its sign: a bond elongation reduces the splitting, a bond compression increases it. One can thus expect in sodium cobaltites that the a1g-e′g orbital energy splitting will decrease with increasing sodium content. The reader should however keep in mind that the effects of this splitting reduction will remain relatively small compared to the band width, as clearly seen in reference 17. In fact, one can expect that a large effect will be the modification of the band dispersion, due not only to the bond-length modification but also to the t2g-eg hybridization.
FIG. 1: Schematic representation of the CoO2 layers.
FIG. 2: Schematic representation of the cobalt 3d splitting. θ represents the angle between the z axis (the three-fold (111) axis of the CoO6 octahedron) and the Co-O direction. θ0 = arccos(1/√3) ≃ 54.74° is the θ angle for the regular octahedron.
FIG. 3: Schematic representation of the Co4+ states of interest. Let us point out that |e′g⟩ is doubly degenerate, the hole being located either on the e′g1 or on the e′g2 orbital.
FIG. 5: t2g-eg hybridization angle under the trigonal distortion.
FIG. 6: a1g-e′g energy splitting as a function of the distortion angle θ for different Co-O distances.
Acknowledgments The authors thank Jean-Pierre Doumerc and Michel Pouchard for helpful discussions and Daniel Maynau for providing us with the CASDI suite of programs. These calculations were done using the CNRS IDRIS computational facilities under project no. 1842.
33,544
[ "829388" ]
[ "2037", "2037" ]
01760174
en
[ "shs" ]
2024/03/05 22:32:13
2006
https://insep.hal.science//hal-01760174/file/143-%20Hausswirth-Supplementation-ScienceSports2006-21-1-8-12.pdf
C Hausswirth C Caillaud R Lepers J Brisswalter email: [email protected] Influence d'une supplémentation en vitamines sur le rendement de la locomotion après une épreuve d'ultratrail Influence of a vitamin supplementation on locomotion gross efficiency after an ultra-trail race Keywords: Gross efficiency, Long-duration exercise, Vitamin, Muscle damage Objectives. - The aim of this work was to study the magnitude of the change in locomotion gross efficiency following an ultra-trail race. The second objective was to study the effect on efficiency of a pre-exercise vitamin supplementation strategy, at doses and with a composition corresponding to the recommended dietary allowances (RDA). Subjects and methods. - Twenty-two endurance-trained subjects performed four efficiency tests before and 24, 48 and 72 hours after an ultra-trail race (3000 m uphill followed by 3000 m downhill), as well as four maximal voluntary force tests at the same time points. The subjects were divided, in a double-blind design, into two experimental groups (with or without a nutritional intake of vitamins and micronutrients, Isoxan Endurance ®). Results. - In both groups, a decrease in locomotion efficiency was observed 24 and 48 hours after the race (between the pre-test and 24 hours after: 20.02 ± 0.2 vs 19.4 ± 0.1%, respectively, p < 0.05), together with a decrease in maximal voluntary force immediately after the event. The decrease in efficiency 24 hours after the race was significantly smaller in the group receiving the nutritional intake. Conclusion. - The results of this study confirm the decrease in efficiency following long-duration exercise classically reported in the literature. In our study, the vitamin and micronutrient intake was associated with a smaller post-exercise decrease in efficiency and in maximal voluntary force, suggesting a possible effect of this intake on muscle function. Further work should test the effect of this type of intake on the attenuation of muscle function impairment, particularly following eccentric exercise. Introduction The last decade has seen the emergence and development, among participants of various training levels, of very long-duration endurance activities (longer than five hours) over varied terrain and elevation profiles ("ultra-trails"). In this context, as for any long-duration activity, the athlete's ability to spend the least energy for a given power output (efficiency) is a determinant of sport performance [START_REF] Prampero | Energetics of muscular exercise[END_REF][START_REF] Hausswirth | Le coût énergétique de la course de durée prolongée : étude des paramètres d'influence[END_REF]. The change in locomotion efficiency with exercise duration has been well described in the literature. For exercises lasting longer than one hour, with the appearance of central and peripheral fatigue, a decrease in efficiency is systematically reported [e.g., 8].
Several factors have been put forward as responsible for this alteration, such as changes in the mobilisation of energy substrates, thermal stress and the regulation of body electrolytes, impairment of muscle function related to the workload (notably of eccentric type), or changes in the locomotor pattern. The large energy expenditure (above 3000 kcal/day) during this type of event means that the athlete must combine with his preparation a strategy of exogenous energy intake and control the macro- and micronutrient composition of this intake [START_REF] Bigard | Nutrition du sportif[END_REF]. Moreover, in trail-type events the variations in slope and in the nature of the terrain increase the proportion of eccentric contractions and the risk of muscle micro-lesions. In this context, it is now well established in endurance events that the increase in oxygen consumption and in muscle damage translates into oxidative stress that is harmful to the organism, particularly in poorly trained subjects. Recent work has therefore examined the influence of certain vitamins on this oxidative stress; the results seem to suggest a positive influence of several vitamins (E and C) on antioxidant capacity and a possible effect of such supplementation on muscle damage during eccentric work [START_REF] Maxwell | Changes in plasma antioxydant status during eccentric exercise and the effect of vitamin supplementation[END_REF]. In this context, the first objective of this work was to observe the magnitude of the change in locomotion efficiency during this particular type of ultra-trail event. The second objective was to study a possible beneficial effect on this change in efficiency of a pre-exercise vitamin supplementation strategy, at doses and with a composition corresponding to the recommended dietary allowances (RDA) for the athletic population [START_REF] Martin | Apports nutritionnels conseillés pour la population française[END_REF]. Methods Subjects Twenty-two endurance-trained subjects took part in this work (age: 40 ± 1.9 years, height: 177 ± 1.3 cm, body mass: 70.4 ± 1 kg). Over the two months preceding the tests, their training volume averaged 76 km per week. All subjects were accustomed to laboratory cycle-ergometer tests. They gave written informed consent after being informed in detail of the experimental procedures, and the study was approved by the ethics committee for the protection of individuals (Saint-Germain-en-Laye, France). Experimental protocol 2.2.1. Maximal incremental test The first test performed by all subjects was a maximal incremental test for the determination of maximal oxygen uptake (VO2max), carried out on a cycle ergometer one month before the ultra-trail race. After a six-minute warm-up at 100 W, the mechanical intensity was increased by 30 W per minute until the subject could no longer maintain the imposed power. The criteria for the attainment of VO2max were as follows: a plateau in VO2 despite the increase in power, a heart rate (HR) above 90% of the theoretical maximal HR, and a respiratory exchange ratio (RER) above 1.15 [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF].
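As an illustration of how such attainment criteria can be applied (our own sketch, not the authors' procedure; the age-predicted maximal heart rate formula is an assumption), a simple check might look as follows in Python.

def vo2max_criteria_met(vo2_plateau: bool, hr_bpm: float, rer: float, age_years: float) -> bool:
    """Return True when the three attainment criteria described above are met.

    The theoretical maximal heart rate is approximated here by 220 - age, a
    common rule of thumb that is not necessarily the formula used in the
    original study.
    """
    hr_max_theoretical = 220.0 - age_years
    return vo2_plateau and hr_bpm > 0.90 * hr_max_theoretical and rer > 1.15

# Hypothetical example: 40-year-old subject, HR 175 bpm, RER 1.18, VO2 plateau observed.
print(vo2max_criteria_met(True, 175.0, 1.18, 40.0))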
From the values of ventilation (VE), oxygen uptake (VO2) and carbon dioxide output (VCO2), the ventilatory threshold (VT) was determined according to the method described by [START_REF] Wasserman | Anaerobic threshold and respiratory gas exchange during exercise[END_REF]. During this first session the subjects were also familiarised with a test assessing the maximal isometric voluntary force of the lower limbs (MVIF). For this test, the knee flexion angle was set at 100 degrees. Each maximal contraction was held for two to three seconds.
Supplementation protocol
After the first test, the subjects were divided into two groups of identical aerobic fitness, and the vitamin and micronutrient supplementation (Isoxan Endurance®, NHS, Rungis, France) was randomised in a double-blind design, with a supplemented group (Iso) and a placebo group (Pla). Treatment started 21 days before the race and ended two days after the race. The composition and doses of Isoxan Endurance® complied with the recommended dietary allowances for athletes.
Submaximal tests of locomotion gross efficiency
Locomotion gross efficiency was assessed during a six-minute cycling exercise on the ergometer at 100 W (below the ventilatory threshold for all subjects), followed by ten minutes at the intensity corresponding to the ventilatory threshold. These tests were carried out in four experimental sessions: before (pre-exercise), then 24, 48 and 72 hours after the race (post-24, post-48, post-72). Ten minutes after each session, subjects performed an MVIF test.
Race description
The race took place at La Plagne on a course totalling 3000 m of positive elevation gain followed by 3000 m of negative elevation change. The mean race time was 6 h 34 min ± 49 min, i.e. a mean speed of 8.4 km/h. Gross mechanical efficiency of cycling (as a percentage) was calculated as the ratio of the mechanical work performed per minute to the metabolic energy expended per minute [START_REF] Chavarren | Cycling efficiency and pedalling frequency in road cyclists[END_REF] (a simple numerical sketch of this calculation is given below).
Equipment and measurements
Pedalling cadence
All cycling tests took place on an SRM electromagnetically braked cycle ergometer (Jülich, Welldorf, Germany). The ergometer could be adjusted precisely to the subjects' anthropometric characteristics through horizontal and vertical adjustment of the saddle and handlebars. Its operating mode allowed a constant power output to be produced regardless of the pedalling cadence naturally adopted by the subjects [START_REF] Jones | The dynamic calibration of bicycle power measuring cranks[END_REF][START_REF] Jones | Experimental human muscle damge: morphological changes in relation with other indices of damage[END_REF]. Pedalling cadence (rev/min) was recorded continuously throughout the tests.
Statistical analysis
For each variable, the mean and standard deviation were calculated. The effect of measurement period and of supplementation group on all measured variables was analysed using a two-factor analysis of variance (MANOVA). (Table 1 note: differences from pre-race values are significant at p < 0.05.)
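As flagged in the race-description paragraph above, gross efficiency is simply the ratio of mechanical to metabolic power. The short sketch below illustrates the arithmetic only; the energy equivalent of oxygen is approximated with a Garby–Åstrup-type equation (an assumption of this sketch, not the equation reported by Chavarren & Calbet), and the example VO2 value is illustrative rather than taken from Table 1.

```python
def gross_efficiency(power_w, vo2_l_min, rer=0.85):
    """Gross efficiency (%) = mechanical power / metabolic power x 100.

    Metabolic power is estimated from oxygen uptake using an RER-dependent
    energy equivalent of O2 (~20-21 kJ per litre of O2).
    """
    energy_equivalent_kj_per_l = 4.94 * rer + 16.04  # Garby-Astrup-type approximation (assumed)
    metabolic_power_w = vo2_l_min * energy_equivalent_kj_per_l * 1000.0 / 60.0
    return 100.0 * power_w / metabolic_power_w

# Illustrative input: 100 W ridden at ~1.45 L/min of O2 gives roughly 20% gross efficiency,
# of the same order as the pre-test value reported in the abstract.
print(round(gross_efficiency(100.0, 1.45, rer=0.88), 1))
```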
For this analysis, values were expressed relative to the value recorded at pre-exercise. Differences between experimental conditions were then determined with a Newman-Keuls post-hoc test. The significance level was set at p < 0.05.
Results
Locomotion gross efficiency
The gross efficiency, ventilation and pedalling cadence values are presented in Table 1. In all subjects, a decrease in locomotion gross efficiency and an increase in ventilation were observed 24 and 48 hours after the race. In contrast, no significant difference was observed between the gross efficiency values measured pre-exercise and 72 hours after the race (Fig. 1). Finally, a significant decrease in pedalling cadence was observed 24 hours after the race. When the two experimental groups are compared, the decrease in gross efficiency (delta efficiency) was significantly smaller in the supplemented group (Iso) than in the placebo group (Pla) 24 hours after the race (Fig. 2).
Maximal voluntary force
Maximal voluntary force decreased significantly after the race in both groups (Iso and Pla, respectively: -36.5 ± 3% and -36.9 ± 2%). A significant correlation was observed between the decrease in gross efficiency and that in maximal isometric force (r = 0.978, p < 0.05). The MVIF values of the Iso group returned to resting values more quickly than those of the Pla group.
Discussion
The first important result of this study is the impairment of locomotion gross efficiency observed 24 and 48 hours after a long-duration ultra-trail event. These results are in line with those classically reported in the literature over the past ten years [START_REF] Brisswalter | In: Énergie et performance[END_REF]. Several explanatory factors have been put forward for this change: a modification of substrate utilisation with increased metabolism of lipid substrates; the effect of thermal stress and the associated dehydration; and an impairment of contractile properties, particularly during exercise involving a large amount of eccentric work. In our study, with a mean duration of 6 h 34 min ± 49 min, half of the event was downhill; we might therefore have expected a greater impairment of efficiency than in shorter events run on flat terrain. Paradoxically, we observed a smaller impairment (about 3%) than those reported in the literature (5-7%) (for a review, see [START_REF] Hausswirth | Le coût énergétique de la course de durée prolongée : étude des paramètres d'influence[END_REF]). Several methodological factors may explain this difference, in particular the exercise intensity, which here corresponded to about 40% of VO2max, and the timing of the first efficiency measurement, made 24 hours after the race, whereas in other studies it is made immediately afterwards. Moreover, in this study we observed no change in the respiratory exchange ratio between the pre-exercise and post-race tests. We may therefore hypothesise that the decrease in efficiency observed here is mainly related to a residual impairment of the contractile properties of the muscle, which in our study had disappeared 72 hours after the event.
Performing an exercise test immediately after a race of this type remains difficult or impossible to study under real race conditions; future work should therefore attempt to analyse the effects of the changes in contractile properties following eccentric work on efficiency immediately after exercise. The second interesting result of this work is the significant beneficial effect of vitamin and micronutrient supplementation on the impairment of gross efficiency and of maximal voluntary force after the event. To our knowledge, no study has examined the effects of vitamin and micronutrient intake on the metabolic aspects of locomotion, most work having studied the effects of such intake on muscle function [e.g. 13]. To date, the results remain unclear. Nevertheless, impairment of the muscle fibre during eccentric exercise, associated with a loss of force, is classically reported in the literature [e.g. 7]. The decrease in force can reach 50%, and the return to normal values can take several days after exercise [START_REF] Mackey | Skeletal muscle collagen contents in humans following high force eccentric contractions[END_REF]. Several explanatory factors appear to be involved in this muscle impairment during prolonged exercise, notably the production of free radicals (oxidative stress) linked, on the one hand, to high oxygen consumption and, on the other, to the muscle micro-lesions induced by exercise, particularly eccentric exercise [for a review, 1]. In this context, a beneficial action of vitamin intake (notably C and E) on this oxidative stress has been proposed. Although the experimental results attempting to validate these hypotheses are still contradictory, and although the real-race conditions of our study limited us to a descriptive approach, we may hypothesise that, in the supplemented group, the smaller impairment of muscle function also made it possible to minimise the decrease in locomotion gross efficiency.
Conclusion
The results of this study confirm and extend the findings reported in the literature on the decrease in gross efficiency after long-duration exercise. An interesting result of this work is the significant relationship between the decrease in maximal voluntary force observed after the event and that in locomotion gross efficiency. Within this descriptive study, we observed an effect of vitamin and micronutrient intake on this relationship. Further work on the nature of this effect, in particular taking into account a possible effect on oxidative stress, is needed to clarify the value of vitamin intake for physiological adaptation in this type of event.
2.3.1. Measurement of ventilatory and gas exchange parameters
Heart rate was recorded continuously during the race with a heart rate monitor (Polar Vantage, Finland). During the cycle-ergometer tests, oxygen uptake (VO2), heart rate (HR) and respiratory parameters (ventilation: VE; respiratory frequency: RF) were recorded continuously with a Cosmed K4b2 telemetric analysis system (Rome, Italy), validated by McLaughlin et al. (2001) [9].
For each parameter, mean and standard deviation values were calculated between the third and the tenth minute of exercise.
Fig. 1. Change in locomotion gross efficiency after the ultra-trail race.
Fig. 2. Comparison of the change in gross efficiency (delta efficiency) between the two experimental groups (Iso vs Pla).
Acknowledgements
This study was supported by the NHS laboratories (Rungis, France). We also thank Drs P. Le Van, J.M. Vallier and E. Joussellin, as well as C. Bernard, for their help in carrying out this project.
18,054
[ "1012603" ]
[ "441096", "410122", "452825", "303091" ]
01760198
en
[ "shs" ]
2024/03/05 22:32:13
2001
https://insep.hal.science//hal-01760198/file/145-%20Effect%20of%20pedalling%20rates.pdf
R. Lepers, G.Y. Millet, N.A. Maffiuletti, C. Hausswirth, J. Brisswalter. Effect of pedalling rates on physiological response during endurance cycling Keywords: Cadence, Oxygen uptake, Triathletes, Fatigue This study was undertaken to examine the effect of different pedalling cadences upon various physiological responses during endurance cycling exercise. Eight well-trained triathletes cycled three times for 30 min each at an intensity corresponding to 80% of their maximal aerobic power output. The first test was performed at a freely chosen cadence (FCC); the two others at FCC-20% and FCC+20%, which corresponded approximately to the range of cadences habitually used by road racing cyclists. The mean (SD) FCC, FCC-20% and FCC+20% were equal to 86 (4), 69 (3) and 103 (5) rpm respectively. Heart rate (HR), oxygen uptake (VO2), minute ventilation (VE) and respiratory exchange ratio (R) were analysed during three periods: between the 4th and 5th, 14th and 15th, and 29th and 30th min. A significant effect of time (P < 0.01) was found at the three cadences for HR and VO2. VE and R were significantly (P < 0.05) greater at FCC+20% compared to FCC-20% at the 5th and 15th min but not at the 30th min. Nevertheless, no significant effect of cadence was observed in HR and VO2. These results suggest that, during high intensity exercise such as that encountered during a time-trial race, well-trained triathletes can easily adapt to the changes in cadence allowed by the classical gear ratios used in practice. Introduction During training or racing, experienced cyclists or triathletes usually select a relatively high pedalling cadence, close to 80-90 rpm. The reasons behind the choice of such a cadence are still controversial and are certainly multi-factorial. Several assumptions relating to neuromuscular, biomechanical or physiological parameters have previously been proposed. The concept of a most economical cadence is generally supported by experiments where cadences have been varied from the lowest to the highest rates and a parabolic oxygen uptake (VO2)-cadence relationship has been obtained. Nevertheless, in reality, extreme cadences such as 50 or 110 rpm are very rarely used by road cyclists or triathletes. Simple observations have shown, for example, that on a flat road at 40 km·h-1 cadences ranged from 67 rpm with a 53:11 gear ratio (GR) to 103 rpm with a 53:17 GR. During an uphill climb at 20 km·h-1, cadences ranged from 70 rpm with a 39:17 GR to 103 rpm with a 39:25 GR. Thus, the range of cadences adopted by cyclists using these common GR may vary from 70 to 100 rpm, which corresponds to approximately 85 rpm ± 20% (see the short calculation sketched below). The effect of exercise duration upon cycling cadence has not been well studied. The freely chosen cadence (FCC) has seemed to be relatively stable during high intensity cycling exercise of 30 min duration [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF] but the FCC was found to decrease during 2 h of cycling at submaximal intensity [START_REF] Lepers | Evidence of neuromuscular fatigue after prolonged cycling exercise[END_REF]. In a non-fatiguing situation, the FCC is known to be higher than the most economical cadence. However, a shift in the energetically optimal rate during exercise towards the FCC has recently been reported by [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF].
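The cadence figures quoted above for the common gear ratios follow directly from road speed, gear ratio and wheel roll-out. A minimal sketch, assuming a roll-out of roughly 2.1 m per wheel revolution (a value not stated in the text):

```python
def cadence_rpm(speed_kmh, chainring, sprocket, wheel_rollout_m=2.10):
    """Pedalling cadence (rpm) implied by road speed and gear ratio.

    Distance covered per crank revolution = (chainring / sprocket) x wheel roll-out.
    """
    metres_per_min = speed_kmh * 1000.0 / 60.0
    metres_per_rev = (chainring / sprocket) * wheel_rollout_m
    return metres_per_min / metres_per_rev

# Flat road at 40 km/h: about 66 rpm on 53:11 vs about 102 rpm on 53:17
print(round(cadence_rpm(40, 53, 11)), round(cadence_rpm(40, 53, 17)))
```

With this assumed roll-out the sketch reproduces the quoted 67 and 103 rpm figures to within a couple of rpm; the exact values depend only on the wheel circumference chosen.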
These observations on FCC stability and the energetically optimal rate raise the question of why road racing cyclists choose a particular GR, and thus a particular pedalling rate, and of the physiological consequences of this choice as exercise duration increases. Therefore, the purpose of this study was to investigate whether the use of cadences 20% lower or higher than the freely chosen one during high intensity endurance exercise induced different changes in metabolic parameters as fatigue occurred. Methods Subjects Eight well-trained male triathletes volunteered to participate in this study. The physical characteristics of the subjects are given in Table 1. They were informed in detail of the experiment and gave written consent prior to all tests. Experimental procedures Each subject completed four tests during a 3-week period. Each session was separated by at least 72 h. All experiments were conducted using an electromagnetically braked cycle ergometer (Type Excalibur, Lode, Groningen, The Netherlands) of which the seat and handlebars are fully adjustable to the subject's dimensions. The ergometer was also equipped with racing pedals and toe clips allowing the subjects to wear cycling shoes. The first session was used to determine the maximal oxygen uptake (VO2max) of the subjects. The VO2max test began with a warm-up at 100 W lasting 6 min, after which the power output was increased by 25 W every 2 min until the subjects were exhausted. The three other sessions were composed of a 10 min warm-up ride followed by a 30 min submaximal test at 80% of the highest power sustained for 2 min (Pmax). The first of these three sessions was performed at the FCC, which corresponded to the cadence that the subjects spontaneously adopted within the first 5 min. During the last 25 min of this test, subjects were asked to maintain a similar cadence. For the two other tests, subjects rode in a random order at FCC-20% or FCC+20%. The heart rate (HR) was monitored continuously, and gas exchanges were collected at three periods: between the 4th-5th (period 1), the 14th-15th (period 2), and 29th-30th (period 3) min. The HR, VO2, minute ventilation (VE) and respiratory exchange ratio (R) for these three periods were analysed. Statistical analysis A two-way ANOVA (time × cadence) was performed using HR, VO2, VE and R as dependent variables. When a significance of P < 0.05 was obtained using the ANOVA, Tukey post-hoc multiple comparisons were made to determine differences either among pedal rates or among periods (an illustrative sketch of this design is given at the end of the Results). Results The mean (SD) FCC was 86 (4) rpm; therefore FCC-20% and FCC+20% corresponded to 69 (3) and 103 (5) rpm, respectively (Table 1). A significant time effect (P < 0.01) was found at the three cadences in HR and VE (Table 2). The rise in VO2 between the 5th and the 30th min corresponded to 11.0 (7.4)%, 10.3 (6.9)% and 9.9 (3.7)% at FCC-20%, FCC and FCC+20%, respectively. Between the 5th and the 30th min, VE increased by 35.4 (17.4)%, 28.7 (10.9)% and 21.2 (5.2)% at FCC-20%, FCC and FCC+20%, respectively. No significant differences appeared among the three cadences. A significant effect of cadence was found in VE and R in the first part of the exercise (Table 2). Post-hoc tests showed that VE was significantly greater at FCC+20% compared to FCC-20% at the 5th and 15th min but not at the 30th min. Similarly, R was significantly greater at FCC+20% in comparison to FCC-20% and FCC at the 5th and 15th min but not at the 30th min. In VO2 and HR, no significant effect of cadence was observed.
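The two-way repeated-measures design described in the Methods (time × cadence, followed by post-hoc comparisons) can be sketched as follows. The data generated below are synthetic and for illustration only (the real values are in Tables 1 and 2), and Tukey's HSD is used as a generic stand-in for the post-hoc procedure reported in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic long-format data: 8 subjects x 3 cadences x 3 periods, one row per cell.
rng = np.random.default_rng(0)
rows = [{"subject": s, "cadence": c, "period": p,
         "VO2": 2.8 + 0.03 * p_i + rng.normal(0, 0.1)}
        for s in range(8)
        for c in ("FCC-20", "FCC", "FCC+20")
        for p_i, p in enumerate(("5min", "15min", "30min"))]
df = pd.DataFrame(rows)

# Two-way (time x cadence) repeated-measures ANOVA on VO2
print(AnovaRM(df, depvar="VO2", subject="subject",
              within=["period", "cadence"]).fit())

# Post-hoc comparisons among cadences (Tukey HSD as an illustrative substitute)
print(pairwise_tukeyhsd(df["VO2"], df["cadence"], alpha=0.05))
```

AnovaRM expects a balanced table with exactly one observation per subject, period and cadence, which matches the design of eight triathletes tested at three periods under three cadences.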
Discussion The main finding of this study was the absence of significant differences in physiological parameters among the three different pedalling rates (FCC, FCC-20% and FCC+20%) despite a significant effect of exercise duration. The increases in HR, VO2 and VE at the end of 30 min of cycling at 80% of Pmax observed in this study were similar to previous observations made in well-trained triathletes [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF]. Several hypotheses have been proposed to explain the so-called drift in VO2 at high power outputs, such as an additional oxygen cost of higher rates of VE, increasing muscle and body temperatures and/or changes in muscle activity patterns and/or in fibre type recruitment (for a review, see [START_REF] Whipp | The slow component of 0 2 uptake kinetics during heavy exercise[END_REF]). [START_REF] Barstow | Influence of muscle fiber type and pedal frequency on oxygen uptake kinetics of heavy exercise[END_REF] examined the physiological responses of subjects to intense exercise (halfway between the estimated blood lactate threshold and VO2max) lasting 8 min for a range of pedalling frequencies between 45 and 90 rpm. Their results showed that the slow component of VO2 was significantly affected by fibre type distribution but not by pedalling rate. Similarly, [START_REF] Billat | The role of cadence on the slow VO 2 component in cycling and running exercise[END_REF] have shown that for high intensity exercise (95% VO2max), a pedalling rate 10% lower than the freely chosen one induced the same VO2 slow component. In the present study, in the range of pedalling rates used habitually by road cyclists (from 70 to 100 rpm), no significant effects of cadence were found upon the rises in VO2 during 30 min of endurance exercise. Also, our data are quite different from those of [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF] who examined cadences between 50 and 110 rpm. However, such a discrepancy could be explained by the relatively small range of cadences used in the present study. The only difference observed between cadences in this study occurred in VE and R in the first part of the exercise. High pedalling rates induced greater VE at the 5th and 15th min of exercise, which was associated with higher R values (> 1.0). These data suggest a higher contribution of anaerobic metabolism to power production in the first 15 min at FCC+20%. Moreover, they corroborate those of [START_REF] Zoladz | Human muscle power generating capability during cycling at different pedalling rates[END_REF] who showed that beyond 100 rpm there is a decrease in the external power that can be delivered at a given VO2, with an associated earlier onset of metabolic acidosis. Importantly, this could be disadvantageous for maintaining high intensity exercise. However, in the present study, such a specificity at the highest pedalling rates did not affect the continuation of the exercise, since similar values of VE and R were found at the end of the exercise at all three cadences. The mean cadence spontaneously adopted by the triathletes in this study [86 (4) rpm] corroborated previous results obtained from trained cyclists or triathletes [START_REF] Brisswalter | Energetically optimal cadence vs.
freely-chosen cadence during cycling : effect of exercise duration[END_REF][START_REF] Lepers | Evidence of neuromuscular fatigue after prolonged cycling exercise[END_REF]. Although it has been shown that pedalling rates could affect: 1. the maximal power during a 10 s sprint [START_REF] Zoladz | Human muscle power generating capability during cycling at different pedalling rates[END_REF] 2. the power generating capabilities following high intensity cycling exercise close to 90% VO2max [START_REF] Beelen | Effect of prior exercise at different pedalling frequencies on maximal power in humans[END_REF] the reasons behind the choice of a particular cadence during endurance cycling, and of the corresponding GR, by cyclists remain unclear. We recently showed that cycling exercise at different pedalling rates induced changes in the neural and contractile properties of the quadriceps muscle, but no significant effects of cadence were found when considering a range of FCC ± 20% (Lepers et al., in press). Moreover, in the present study the FCC did not appear to be more energetically optimal than FCC-20% or FCC+20%, either at the beginning or at the end of the exercise. [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF] have recently shown that the theoretical energetically optimal pedalling rate, corresponding to the lowest point of the parabolic VO2-cadence relationship, shifted progressively over the duration of exercise towards a higher pedalling rate (from 70 to 86 rpm), closer to the freely chosen one. Therefore, minimisation of energy cost does not seem to be a relevant parameter for the choice of cadence, at least in a non-fatigued state. Actually, the choice of cadence adopted by cyclists during endurance exercise seems to depend upon factors other than metabolic cost. Biomechanical and neuromuscular hypotheses have already been proposed to explain the choice of pedalling rate during short-term high intensity exercise. However, such interesting hypotheses need to be explored during prolonged exercise. In conclusion, the results of the present study showed that, for high intensity endurance exercise corresponding, for example, to a time-trial race, the use of cadences in a range corresponding to the classical GR induced similar physiological effects. These data suggest that well-trained triathletes can easily adapt to the changes in cadence used habitually during racing. Further investigations are necessary to target the mechanisms involved in the choice of pedalling rate during prolonged cycling.
13,144
[ "1012603" ]
[ "452825", "452825", "452825", "441096", "303091" ]
01760210
en
[ "shs" ]
2024/03/05 22:32:13
2003
https://insep.hal.science//hal-01760210/file/146-%20Influence%20of%20drafting%20%20during%20swimming.pdf
A. Delextrat, V. Tricot, C. Hausswirth, T. Bernard, F. Vercruyssen, J. Brisswalter. Influence of drafting during swimming on ratings of perceived exertion during a swim-to-cycle transition in well-trained triathletes Numerous physiological and psychological factors have been suggested to account for successful performance during endurance events. Among psychological parameters, the perception of workload represents a major determinant of performance [START_REF] Russell | On the current status of rated perceived exertion[END_REF], as a positive or negative evaluation of the exercise's constraints could lead to either the continuation or the cessation of the competitive task. The technique most commonly used for the measurement of perceived exertion is the Rating Scale of Perceived Exertion (RPE, from 6 to 20) described by [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]. This parameter is classically suggested to be a good indicator of physical stress and provides a complementary tool for training prescription, especially during long duration exercises where fatigue is likely to occur (Williams & Eston, 1989). In this context, most studies have been conducted during continuous unimodal events. During the last decade, several multimodal events involving successive locomotion modes, such as triathlon, have attracted increasing attention from scientists. Further, the introduction of drafting, i.e., swimming or cycling directly behind a competitor, has considerably modified race strategy, as triathletes attempt to swim or cycle as fast as possible to stay in the leading group. During these events it has been shown that the succession of different exercises or different drafting strategies leads to specific physiological adaptations when compared with a unimodal sport (Hausswirth, Vallier, Lehenaff, Brisswalter, Smith, Millet & Dreano, 2001). However, to our knowledge, relatively little information is available on the effect of successive exercises on RPE scores. Therefore, the aim of the present study was to investigate whether RPE responses measured during a cycling session are affected by a prior swimming bout performed at a competition pace. Eight well-trained triathletes competing at national level were tested during four experimental sessions. The first test was always a 750-m swim performed alone at a competition pace (A: Swimming Alone). It was used to set the swimming intensity for each subject. During the three other tests, presented in counterbalanced order, subjects undertook a 15-min ride on a bicycle ergometer at 75% of maximal aerobic power (MAP) and at a freely chosen cadence (FCC). This test was preceded by either a 750-m swim performed alone at the pace adopted during Swimming Alone (SAC trial), a 750-m swim in drafting position at the pace adopted during Swimming Alone (SDC trial), or a cycling warm-up (same duration as the swimming tests) at a power representing 30% of maximal aerobic power (MAP, C trial). The subjects were asked to rate their perceived exertion (RPE 6-20 scale, Borg, 1970) immediately after the cessation of the swimming and cycling bouts. Moreover, blood lactate concentration was assessed immediately after swimming and after 3 and 15 min of cycling, and oxygen uptake (VO2) was collected continuously during cycling.
The RPE responses and physiological parameters measured during the cycling trials are presented in Table 1. Analysis showed that prior swimming alone led to significantly higher VO2, blood lactate and RPE values during subsequent cycling when compared with cycling alone (p < .05). Further, swimming in a drafting position yielded significantly lower blood lactate concentrations and RPE values during subsequent cycling in comparison with swimming alone (p < .05). The main result was that RPE during cycling at a constant power output is significantly higher after a swimming bout. The parallel changes in RPE and in the physiological parameters confirm the hypothesis that RPE is a good indicator of exercise metabolic load (Williams & Eston, 1989), even during multimodal events, and could therefore be a useful tool in triathletes' training, especially to prescribe exercise intensity during combined swimming and cycling exercises. Moreover, the lower RPE responses obtained during the cycling session when preceded by a swimming bout performed in a drafting position, in comparison with an isolated swimming bout, indicate that drafting strategies during competition lead, on the one hand, to a significant improvement in the energy cost of locomotion (i.e., lower VO2 and lactate values) and, on the other hand, to a lower perceived workload. Therefore, we suggest that drafting during swimming improves both physiological and psychological factors of triathlon performance. Further studies are still needed to validate this hypothesis during a triathlon competition.
5,105
[ "1012603", "19845" ]
[ "303091", "303091", "441096", "303091", "303091", "303091" ]
01738307
en
[ "shs" ]
2024/03/05 22:32:13
2018
https://audencia.hal.science/hal-01738307/file/Radu%20Lefebvre%20%26%20al%2C%20IJEBR%2C%202017.pdf
Étienne St-Jean, Miruna Radu-Lefebvre, Cynthia Mathieu. Can Less be More? Mentoring Functions, Learning Goal Orientation, and Novice Entrepreneurs' Self-Efficacy Purpose One of the main goals of entrepreneurial mentoring programs is to strengthen the mentees' self-efficacy. However, the conditions in which entrepreneurial self-efficacy is developed through mentoring are not yet fully explored. This article tests the combined effects of mentees' learning goal orientation and perceived similarity with the mentor, and demonstrates the role of these two variables in mentoring relationships. Design The current study is based on a sample of three hundred and sixty (360) novice Canadian entrepreneurs who completed an online questionnaire. We used a cross-sectional design. Findings Findings indicate that the development of entrepreneurial self-efficacy (ESE) is optimal when mentees present low levels of learning goal orientation (LGO) and perceive a high degree of similarity between their mentor and themselves. Mentees with high LGO showed a decrease in ESE as the mentoring they received deepened. Limitation This study investigated a formal mentoring program with volunteer (unpaid) mentors. Generalization to informal mentoring relationships needs to be tested. Practical implication/value The study shows that, in order to effectively develop self-efficacy in a mentoring situation, learning goal orientation (LGO) should be taken into account. Mentors can be trained to modify mentees' LGO in order to increase their impact on this mindset and on mentees' entrepreneurial self-efficacy. Originality/value This is the first empirical study that demonstrates the effects of mentoring on entrepreneurial self-efficacy and reveals a triple moderating effect of LGO and perceived similarity in mentoring relationships. Introduction In recent decades, countries all over the world have implemented support programs contributing to the development of entrepreneurial activity as part of the entrepreneurial ecosystem [START_REF] Spigel | The Relational Organization of Entrepreneurial Ecosystems[END_REF]. Among these initiatives, the mentoring of novice entrepreneurs was emphasized as highly beneficial for enhancing entrepreneurial self-efficacy (ESE) and entrepreneurial skills (e.g. [START_REF] Crompton | The effect of business coaching and mentoring on small-to-medium enterprise performance and growth[END_REF][START_REF] Gravells | Mentoring start-up entrepreneurs in the East Midlands -Troubleshooters and trusted friends[END_REF][START_REF] Radu Lefebvre | How to Do Things with Words": The Discursive Dimension of Experiential Learning in Entrepreneurial Mentoring Dyads[END_REF][START_REF] St-Jean | The Effect of Mentor Intervention Style in Novice Entrepreneur Mentoring Relationships[END_REF]. Extensive empirical research [START_REF] Ozgen | Social sources of information in opportunity recognition: Effects of mentors, industry networks, and professional forums[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF][START_REF] Ucbasaran | Opportunity identification and pursuit: does an entrepreneur's human capital matter?[END_REF] confirmed the positive impact of mentoring relationships on both mentees' cognitions (improving opportunity identification, clarifying business vision) and emotions (reducing stress and feelings of being isolated, establishing more ambitious goals). However, there is limited knowledge of how mentoring relationships produce these outcomes.
We thus know little about the individual and relational variables moderating the impact of mentoring relationships. This article makes a theoretical and practical contribution to our understanding of how, and under what conditions, mentor input (mentor functions), along with a mentee variable (mentee's learning goal orientation; LGO) and a mentoring relationship variable (perceived similarity with the mentor) combine to develop novice entrepreneurs' ESE. This, in turn, will enable entrepreneurial support programs to better match and support mentoring dyads. Despite their potential effects on mentees' ESE [START_REF] Egan | The Impact of Learning Goal Orientation Similarity on Formal Mentoring Relationship Outcomes[END_REF][START_REF] Mitchell | My mentor, my self: Antecedents and outcomes of perceived similarity in mentoring relationships[END_REF], research dedicated to the study of ESE development while simultaneously taking into account mentor functions, perceived similarity with the mentor, and mentees' LGO is scarce. Studies based on goal orientation theory [START_REF] Dweck | Mindset: The new psychology of success[END_REF][START_REF] Dweck | A social-cognitive approach to motivation and personality[END_REF], social learning theory (Bandura, 1986[START_REF] Bandura | Self-efficacy : the exercise of control[END_REF] and social comparison theory [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF] generated consistent evidence related to the development of ESE through supportive relationships such as mentoring. Goal orientation theory emphasizes the role of LGO in producing positive effects on mentees' ESE [START_REF] Godshalk | Aiming for career success: The role of learning goal orientation in mentoring relationships[END_REF][START_REF] Kim | Learning goal orientation, formal mentoring, and leadership competence in HRD: A conceptual model[END_REF], whereas social learning theory and social comparison theory focus on the importance of perceived similarity in producing positive ESE outcomes at the mentee level [START_REF] Ensher | Effects of Race, Gender, Perceived Similarity, and Contact on Mentor Relationships[END_REF][START_REF] Mitchell | My mentor, my self: Antecedents and outcomes of perceived similarity in mentoring relationships[END_REF]. The present article builds on these three streams of literature to test the combined effects of perceived similarity with the mentor and mentees' LGO on mentees' ESE. Moreover, we build on previous mentoring research in entrepreneurship that has established that the input mentors bring in mentoring relationships can be effectively operationalized as a set of mentoring functions. These mentoring functions can be related to career development whereas others are more focused on the mentees' attitude change and skills development [START_REF] St-Jean | Mentor functions for novice entrepreneurs[END_REF][START_REF] St-Jean | The Effect of Mentor Intervention Style in Novice Entrepreneur Mentoring Relationships[END_REF]. The aim of the present study is to demonstrate that the impact of mentoring functions on mentees' ESE is moderated by the mentee's LGO and perceived similarity with the mentor. The reason for combining these three streams of literature to test our moderating model is that together they contribute to our understanding of the impact of mentoring relationships on novice entrepreneurs. 
First, the social comparison perspective within mentoring relationships is considered by testing the moderating effect of perceived similarity with the mentor on mentees' ESE development. Second, goal orientation is taken into account as part of novice entrepreneurs' psychological disposition upon entering a mentoring relationship, and how these relationships can have an impact on their ESE. Third, we highlight the potential combined effect of mentees' LGO and perceived similarity with the mentor in explaining the conditions in which mentees' ESE could develop to allow them to reach their full potential. The article is structured as follows: first, we present the theoretical background and the main hypotheses. Then we focus on our empirical study and the methods used to test the hypotheses. Based on a sample of 360 entrepreneurs supported by a mentoring program in Canada, the study shows that mentoring functions foster ESE under certain conditions, which supports the hypotheses concerning the moderating role of mentees' LGO and perceived similarity with the mentor. We demonstrate that high perceived similarity with the mentor increases mentees' ESE and we show that mentoring functions increase mentees' ESE, particularly when mentees have low levels of LGO. We discuss these findings and highlight their theoretical and practical implications for entrepreneurial research and policy. Theoretical background This section first presents the notion of ESE and its relevance in the context of mentoring for entrepreneurs. We then focus on the issue of the mentor's input and show the importance of mentor functions and mentees' perceived similarity with the mentor for mentees' ESE development. Mentees' LGO is also introduced and we highlight its direct and moderating effects on mentees' ESE enhancement. Finally, the combined effect of mentees' LGO, mentor functions and perceived similarity with the mentor is examined to explore how these variables may influence the development of mentees' ESE as a result of involvement in mentoring relationships. ESE refers to the subjective perception of one's ability to successfully accomplish a specific task or behavior [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF]. According to Bandura (1997, p. 77), ESE beliefs are constructed through four main sources of information: 1/ enactive mastery experiences that serve as indicators of capability; 2/ vicarious experiences that alter efficacy beliefs through transmission of competencies and comparison with the attainments of others; 3/ verbal persuasion and allied types of social influence that may persuade the individuals that they possess certain capabilities; and 4/ physiological and affective states from which people partly judge their capability, strength, and vulnerability to dysfunction. Although mentoring may not support ESE development through enactive mastery experiences, indirect evidence obtained from previous studies (ref. [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF] suggests that mentoring can develop ESE through the three other processes (vicarious learning, verbal persuasion, physiological and emotional states). 
Mentors may act as role models in a vicarious learning relationship which consists in facilitating mentees' self-evaluation and development of entrepreneurial and business skills through social comparison and imitative behavioral strategies [START_REF] Barnir | Mediation and Moderated Mediation in the Relationship Among Role Models, Self-Efficacy, Entrepreneurial Career Intention, and Gender[END_REF][START_REF] Johannisson | University training for entrepreneurship: a Swedish approach[END_REF][START_REF] Scherer | Role Model Performance Effects on Development of Entrepreneurial Career Preference[END_REF]. Indeed, vicarious learning from mentors was identified as the most significant contribution to mentoring relationships, regardless of the context being studied [START_REF] Barrett | Small business learning through mentoring: evaluating a project[END_REF][START_REF] Crocitto | Global mentoring as a means of career development and knowledge creation: A learning-based framework and agenda for future research[END_REF][START_REF] D'abate | Mentoring as a learning tool: enhancing the effectiveness of an undergraduate business mentoring program[END_REF][START_REF] Gordon | Coaching the mentor: Facilitating reflection and change[END_REF][START_REF] Hezlett | Protégés' learning in mentoring relationships: A review of the literature and an exploratory case study[END_REF][START_REF] Lankau | An investigation of personal learning in mentoring relationships: content, antecedents, and consequences[END_REF][START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF]. Furthermore, mentors may use verbal persuasion strategies to help mentees explore and sometimes change their attitudes and beliefs [START_REF] Marlow | Analyzing the influence of gender upon high-technology venturing within the context of business incubation[END_REF][START_REF] Radu Lefebvre | How to Do Things with Words": The Discursive Dimension of Experiential Learning in Entrepreneurial Mentoring Dyads[END_REF][START_REF] St-Jean | The Effect of Mentor Intervention Style in Novice Entrepreneur Mentoring Relationships[END_REF]. Finally, mentors may influence mentees' emotional states by reducing their levels of stress related to perceived uncertainty and future challenges [START_REF] Kram | Mentoring as an antidote to stress during corporate trauma[END_REF][START_REF] Sosik | Leadership styles, mentoring functions received, and jobrelated stress: a conceptual model and preliminary study[END_REF]. It is, however, important to note that not all mentors are equally invested in mentoring relationships; some may only provide marginal mentoring [START_REF] Ragins | Marginal mentoring: The effects of type of mentor, quality of relationship, and program design on work and career attitudes[END_REF] or worse, harmful mentoring experiences [START_REF] Eby | Protégés' negative mentoring experiences: Construct development and nomological validation[END_REF][START_REF] Eby | The Protege's Perspective Regarding Negative Mentoring Experiences: The Development of a Taxonomy[END_REF][START_REF] Simon | A typology of negative mentoring experiences: A multidimensional scaling study[END_REF]. 
The quality and depth of mentoring relationships can be assessed by mentor functions [START_REF] Kram | Mentoring at work : Developmental relationships in organizational life Scott Foresman[END_REF] that allow mentees to benefit from the mentoring relationship in various ways, particularly in terms of positive changes regarding their ESE [START_REF] Day | The relationship between career motivation and self-efficacy with protégé career success[END_REF][START_REF] Powers | An Exploratory, Randomized Study of the Impact of Mentoring on the Self-Efficacy and Community-Based Knowledge of Adolescents with Severe Physical Challenges[END_REF][START_REF] Wanberg | Mentoring research: A review and dynamic process model[END_REF]. Mentor functions studied in large organizations, as well as in entrepreneurship, refer to three categories of support a mentee can receive: psychological, career-related, and role modeling [START_REF] Bouquillon | It's only a phase': examining trust, identification and mentoring functions received accross the mentoring phases[END_REF][START_REF] Pellegrini | Construct equivalence across groups: an unexplored issue in mentoring research[END_REF][START_REF] St-Jean | Mentor functions for novice entrepreneurs[END_REF][START_REF] Waters | The role of formal mentoring on business success and self-esteem in participants of a new business start-up program[END_REF]. Mentor functions can act as an indicator of the quality of the mentoring provided or received [START_REF] Hayes | Mentoring and nurse practitioner student self-efficacy[END_REF]. These functions influence the mentoring process, more specifically the development of mentees' ESE; prior research has demonstrated that higher levels of psychological support improve mentees' ESE [START_REF] Kram | Mentoring at work : Developmental relationships in organizational life Scott Foresman[END_REF]. As a result of their focus on providing challenging tasks to the mentee or on guiding them throughout the decision-making process, career-related functions also play a significant role in the development of mentees' ESE [START_REF] Kram | Mentoring at work : Developmental relationships in organizational life Scott Foresman[END_REF][START_REF] St-Jean | Mentor functions for novice entrepreneurs[END_REF]. To sum up, there is consistent evidence that mentor functions have a direct impact on mentees' ESE. Our goal is to demonstrate the contribution of two moderating variables that may enhance or diminish the impact of mentoring functions on mentees' ESE development: perceived similarity with the mentor and mentees' LGO, as indicated in Figure 1. The role of perceived similarity with the mentor in supporting mentees' ESE development The notion of "perceived similarity" was introduced by [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF], who stressed that when individuals evaluate their own opinions and abilities, there is a tendency to look to external sources of information such as role models. Social comparison theory [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF] complements Bandura's social cognitive learning theory in suggesting that the greater the perceived similarity to the role model, the greater the impact of that role model on the observer's ESE [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF]. Social comparison theory highlights that the observer's identification with the role model is crucial for maintaining the social comparison process.
Perceived similarity regarding age, gender, background [START_REF] Wheeler | Self-Schema matching and attitude change: Situational and dispositional determinants of message elaboration[END_REF], values and goals [START_REF] Filstad | How newcomers use role models in organizational socialization[END_REF] reinforces identification to the role model. Individuals tend to compare themselves with people they perceive as similar to themselves, and avoid comparing themselves with people perceived as too different [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF]. Mentoring relationships with low levels of perceived similarity are thus likely to reduce the social comparison process and generate a negative impact on vicarious learning; this decrease in vicarious learning would negatively impact the observer's ESE. To generate positive outcomes as role models, one condition seems essential: mentors of entrepreneurs must be perceived as similar by their mentees [START_REF] Elam | Gender and entrepreneurship: A multilevel theory and analysis[END_REF][START_REF] Terjesen | The role of developmental relationships in the transition to entrepreneurship: A qualitative study and agenda for future research[END_REF][START_REF] Wilson | An analysis of the role of gender and self-efficacy in developing female entrepreneurial interest and behavior[END_REF]. In three recent meta-analyses in mentoring contexts, [START_REF] Eby | An interdisciplinary meta-analysis of the potential antecedents, correlates, and consequences of protégé perceptions of mentoring[END_REF], [START_REF] Ghosh | Antecedents of mentoring support: a meta-analysis of individual, relational, and structural or organizational factors[END_REF] and [START_REF] Ghosh | Career benefits associated with mentoring for mentors: A metaanalysis[END_REF] demonstrated that perceived similarity with mentors is correlated to positive mentoring outcomes. The process through which perceived similarity influences mentoring outcomes was characterized by [START_REF] Mitchell | My mentor, my self: Antecedents and outcomes of perceived similarity in mentoring relationships[END_REF] as "relational identification" in work relationships (cf. the theory of relational identification; [START_REF] Sluss | Relational identity and identification: Defining ourselves through work relationships[END_REF]. Prior empirical research has shown that entrepreneurs tend to choose role models of the same gender. This tendency is stronger for women entrepreneurs [START_REF] Murrell | The gendered nature of role model status: an empirical study[END_REF], who start a business in what is still perceived as a male dominated social milieu [START_REF] Wilson | Gender, Entrepreneurial Self-Efficacy, and Entrepreneurial Career Intentions: Implications for Entrepreneurship Education[END_REF]. Interestingly, mentoring research has emphasized that perceived similarity is more important than actual similarity [START_REF] Ensher | Effects of Perceived Attitudinal and Demographic Similarity on Protégés' Support and Satisfaction Gained From Their Mentoring Relationships[END_REF]. When identification is effective, mentors share their values and attitudes, and they may model desired entrepreneurial behaviors or attitudes. 
Comparing oneself to a mentor is an upward social comparison that can stimulate mentees' motivation to engage in a learning process when perceived similarity with the mentor is high [START_REF] Schunk | Developing children's self-efficacy and skills: The roles of social comparative information and goal setting[END_REF]. On the other hand, upward social comparisons can also reduce mentees' ESE if the mentor's level of proficiency seems unattainable and perceived similarity is low [START_REF] Lockwood | Superstars and me: Predicting the impact of role models on the self[END_REF]. As a consequence, a high level of perceived similarity will facilitate upward social comparison with the mentor and enable mentees to improve their ESE through the mentor function received. These considerations suggest the following hypothesis: Hypothesis 1: The mentee's perceived similarity with the mentor has a positive moderating effect on the relation between mentor functions and the mentee's ESE. Mentees' LGO Learning goal orientation (LGO) (also known as mastery goal-orientation) is a relatively stable psychological disposition that individuals develop through their interpersonal relationships [START_REF] Dweck | Motivational processes affection learning[END_REF]. Individuals with a high LGO tend to perceive their abilities as malleable and subject to change [START_REF] Dupeyrat | Implicit theories of intelligence, goal orientation, cognitive engagement, and achievement: A test of Dweck's model with returning to school adults[END_REF]. These individuals will therefore approach the tasks at hand with self-confidence, and with the intention of developing new skills. They will consequently value hard work and self-improvement and will be constantly looking for new challenges to enhance their skills [START_REF] Dweck | A social-cognitive approach to motivation and personality[END_REF]. By doing so, they engage in new activities, regardless of their difficulty [START_REF] Button | Goal Orientation in Organizational Research: A Conceptual and Empirical Foundation[END_REF]. Conversely, individuals with low levels of LGO tend to see their intelligence and their skills as 'stable' and 'unchangeable', and they tend to have a lower level of ESE than those who perceive their skills as malleable [START_REF] Ames | Classrooms: Goals, structures, and student motivation[END_REF]. Their approach towards, and expectations of, a mentoring relationship will undoubtedly differ from mentees with high levels of LGO. LGO does not seem to be related to short-term or long-term goal setting [START_REF] Harackiewicz | Short-term and long-term consequences of achievement goals: Predicting interest and performance over time[END_REF]; however, individuals with low LGO and high LGO use different strategies to reach their goals. For instance, given that LGO is related to self-regulated learning, low LGO individuals rely more heavily on external support than individuals with high LGO, who will mobilize external sources of information to learn but will behave more autonomously [START_REF] Wolters | The relation between goal orientation and students' motivational beliefs and self-regulated learning[END_REF]. The notions of 'goal orientation' and 'goal setting' are distinct [START_REF] Phillips | Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process[END_REF]. LGO plays a crucial role in understanding how mentees perceive their ability to master a number of skills. 
From a learning perspective, prior research has shown that mentees enter mentoring relationships either with a desire to grow and improve their current skills [START_REF] Barrett | Small business learning through mentoring: evaluating a project[END_REF][START_REF] Benton | Mentoring women in acquiring small business management skills -Gaining the benefits and avoiding the pitfalls[END_REF] or to receive advice and suggestions on how to improve their entrepreneurial project [START_REF] Gaskill | Qualitative Investigation into Developmental Relationships for Small Business Apparel Retailers: Networks, Mentors and Role Models[END_REF][START_REF] Gibson | Developing the professional self-concept: Role model construals in early, middle, and late career stages[END_REF] without having to change their current skills. LGO may be related to these mentoring outcomes from the mentees' perspective and thus depend on their motivation to grow/learn or to receive advice/help from their mentors. High LGO mentees could exhibit the first category of motivations whereas low LGO mentees may prefer the second types of motivations. In a study that investigated children's behavior after a failure in school, [START_REF] Diener | An Analysis of Learned Helplessness: Continuous Changes in Performance, Strategy, and Achievement Cognitions Following Failure[END_REF] found that learning-oriented children make fewer attributions and focus on remedies for failure, while helpless children (i.e., low LGO) focus on the cause of failure. In school, students who adopt a high LGO engage in more self-regulated learning than the others [START_REF] Ames | Classrooms: Goals, structures, and student motivation[END_REF][START_REF] Pintrich | The role of expectancy and self-efficacy beliefs[END_REF]. Furthermore, a high LGO mindset, also called a growth mindset [START_REF] Dweck | Mindset: The new psychology of success[END_REF], is demonstrated to be related to high intrinsic motivation [START_REF] Haimovitz | Dangerous mindsets: How beliefs about intelligence predict motivational change[END_REF], goal achievement [START_REF] Burnette | Mind-sets matter: A meta-analytic review of implicit theories and self-regulation[END_REF] and ESE [START_REF] Ames | Classrooms: Goals, structures, and student motivation[END_REF]. Therefore, we assume that mentees with a high level of LGO will also have a high level of ESE, based on the influence the former has on the latter. These considerations lead us to the following hypothesis: Hypothesis 2: Mentee's LGO is positively related to his/her ESE. As we mentioned earlier, mentees can enter mentoring relationships harboring different motivations: to learn and to improve their skills or to receive advice and suggestions on how to manage their business. Who would benefit most from mentoring relationships with regard to ESE development? There is evidence that LGO is associated with feedback seeking behaviors [START_REF] Tuckey | The influence of motives and goal orientation on feedback seeking[END_REF][START_REF] Vandewalle | A goal orientation model of feedback-seeking behavior[END_REF][START_REF] Vandewalle | A test of the influence of goal orientation on the feedback-seeking process[END_REF]; entrepreneurs with high LGO should thus be attracted to mentoring, as it procures feedback in a career setting where there are no hierarchical superiors for assessing one's skills and performance. 
Additionally, entrepreneurs with high LGO should be stimulated by mentoring relationships and consider their mentors as a potential learning source [START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF] to develop their intelligence and skills [START_REF] Ames | Achievement goals in the classroom: Students' learning strategies and motivation processes[END_REF]. On the other hand, low LGO entrepreneurs would prefer situations in which they can perform well (performance goal orientation) [START_REF] Dweck | Mindset: The new psychology of success[END_REF]. Given that they perceive their intelligence as fixed in time, when facing a difficult task or receiving a bad performance, they will seek help or try to avoid the task at hand rather than try to learn new skills that could allow them to face a similar challenge in the future. As previously mentioned, individuals with high LGO tend to exhibit a higher level of ESE. Despite the fact that mentoring can be a source of learning for them, it is unlikely that they will significantly improve their ESE. As mentioned by [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF], vicarious experience (i.e., observing someone similar to oneself succeeding in a particular task will improve the observer's beliefs that he/she can also master the task) as well as verbal persuasion allow individuals to adjust their ESE to a more realistic level, either upward or downward. Thus, considering the high level of ESE of mentees with high LGO, it is highly probable that, at best, they will maintain their high ESE, or experience a decrease in ESE to a more realistic level. The picture is quite different for low LGO mentees. They believe their intelligence to be stable and immovable. When facing a difficult task or receiving negative performance feedback, they will either seek help to accomplish the task or try to avoid it in the future [START_REF] Dweck | Mindset: The new psychology of success[END_REF]. Novice entrepreneurs, despite feeling incompetent at performing certain tasks, are often required to complete these tasks because they often do not have the resources to hire qualified individuals to help them. Under these conditions, external support may become the preferred way to overcome this personal limitation as it may help them feel more effective in their management decisions. Given that low LGO entrepreneurs do not believe their intelligence is malleable, they are not likely to work on developing new skills to face challenging situations. Consequently, mentoring can help them feel more confident about their efficacy in managing their business (i.e., ESE). However, the increase of their ESE is dependent on the mentor functions received, and therefore it may only last as long as they stay in the mentoring relationship. To sum up, mentoring may have less of an effect on high LGO novice entrepreneurs' ESE. For these entrepreneurs, mentoring may represent a source of learning (along with formal education, entrepreneurs' clubs, media, learning through action, etc.). Mentoring will thus keep their ESE high or slightly readjust it to a more realistic level. On the other hand, low LGO novice entrepreneurs may view mentoring as a significant source of help to overcome their perceived inability to deal with career-related goals and tasks. 
With the support of a mentor, the latter type of mentee should consequently perceive themselves as more suited to accomplish the tasks related to their entrepreneurial career, and thus experience an improvement in their ESE. These considerations suggest the following hypothesis:

Hypothesis 3: Mentee's LGO has a negative moderating effect on the relationship between the mentor functions and the mentee's ESE, such that the relationship would be stronger for low LGO mentees.

As previously mentioned, low LGO mentees do not think that they are able to significantly improve their abilities. Thus, they will seek advice, support and help from mentors to compensate for their perceived weaknesses. Given that mentoring offers an opportunity to compare oneself with others, and because low LGO mentees may not believe they can change their abilities, perceived similarity with the mentor may act as a moderator of the relationship between mentor functions and mentees' ESE. Indeed, mentees would probably be more willing to accept advice and support from a mentor if they perceive the mentor as highly similar to themselves, causing in turn the mentor functions to improve ESE to a greater extent. Furthermore, through social comparison processes [START_REF] Corcoran | Social comparison: Motives, standards, and mechanisms[END_REF][START_REF] Festinger | A Theory of Social Comparison Processes[END_REF], the more the mentor exerts his/her functions, the more adapted the mentee will feel toward his/her entrepreneurial career, which, in turn, will have a positive influence on his/her ESE. However, when the mentee perceives himself/herself as not being very similar to the mentor, social comparison processes will stop [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF]. Therefore, mentor functions would have less effect in improving the mentee's ESE, as the mentee would feel less adapted to an entrepreneurial career [START_REF] Lockwood | Superstars and me: Predicting the impact of role models on the self[END_REF]. This suggests the following hypothesis:

Hypothesis 4: The impact of the mentor functions on the mentee's ESE is enhanced when the mentor is perceived as highly similar and when the mentee's LGO is low.

Methodology

We conducted a study of mentoring relationships within Réseau M, a mentoring network launched in 2000 by the Fondation de l'entrepreneurship, an organization dedicated to Quebec's economic development. Réseau M provides mentoring support to novice entrepreneurs through a network of 70 mentoring cells implemented across the province of Quebec (Canada). These cells are generally supported by various economic development organizations such as local development centres (LDCs), Community Future Development Corporations (CFDCs), and local chambers of commerce. These organizations ensure the program's local and regional development, while subscribing to the mentoring model provided by the Fondation de l'entrepreneurship. Local organizations have cell coordinators in charge of recruiting mentors, organizing their training, promoting the program to novice entrepreneurs, and pairing and guiding mentor-mentee dyads. Before the first pairing, every mentor receives a mandatory three hour training session on the mission of mentoring and the main guidelines to follow. Novice entrepreneurs benefit from mentor support for a minimal cost: a few hundred dollars per year, and in some cases, for free. The program is available to every novice entrepreneur who wants to be supported by a mentor. Mentees seek career-related support (e.g. advice, a sounding board for decision-making, expertise) as well as psychological support (e.g. to ease loneliness, to be reassured or encouraged) from their mentors. Each mentor acts as a volunteer to help novice entrepreneurs in their entrepreneurial journey. Most of them are experienced entrepreneurs who are retired and want to stay active by supporting those less experienced; a few of them are still working in the business world (e.g. bankers, practitioners). To ensure the coordination of the mentoring cells, the Fondation organizes workshops dedicated to the development of mentor-mentee relationships. Réseau M provides a Code of Ethics and a standard mentoring contract signed by mentors and mentees at the beginning of their interaction.
Sample

The sample for this study was composed of mentored entrepreneurs from Réseau M of the Fondation de l'entrepreneurship who had attended at least three meetings with their mentor or were still in a mentoring relationship, and whose email addresses were valid at the time of the survey. In 2008, mentees were invited to participate in the study by email, and two follow-ups were conducted with non-respondents, resulting in a total of 360 respondents (a response rate of 36.9%). Given that the Fondation was not able at that time to provide information concerning the demographic characteristics of the population, we compared early respondents (who answered the first time) and later respondents (who answered after follow-ups), as suggested by [START_REF] Armstrong | Estimating nonresponse bias in mail surveys[END_REF]. There are no significant differences between the two groups in terms of demographic variables, business-related variables, or the variables measured in the study. The respondents are thus representative of the studied population. Table 1 shows the characteristics of the sample.

Measures

Entrepreneurial self-efficacy (ESE). To gain better insight into the dimensions of ESE, we combined the scales developed by [START_REF] Anna | Women business owners in traditional and non-traditional industries[END_REF] and De Noble et al. (1999). This allowed us to measure several perceived abilities such as: defining strategic objectives (3 items), coping with unexpected challenges (3 items) [START_REF] De Noble | Entrepreneurial self-efficacy: The development of a measure and its relationship to entrepreneurial action[END_REF], recognizing opportunities (3 items), engaging in action planning (3 items), supervising human resources (3 items), and managing finance issues (3 items) [START_REF] Anna | Women business owners in traditional and non-traditional industries[END_REF]. These items are similar to those suggested by other authors [START_REF] Mcgee | Entrepreneurial Self Efficacy: Refining the Measure[END_REF]. Seven-point Likert scales were used. Cronbach's alpha was 0.936, well above the commonly accepted threshold [START_REF] Cronbach | Coefficient alpha and the internal structure of tests[END_REF]. A mean score of all the items was calculated.

Mentor functions. The measure of mentor functions was developed by St-Jean (2011) and includes 9 items assessed on a seven-point Likert scale. This scale provides an assessment of the depth of mentoring provided. Cronbach's alpha was 0.898. A mean score of all the items was calculated.

Perceived similarity. We used the measure developed by [START_REF] Allen | Relationship effectiveness for mentors: Factors associated with learning and quality[END_REF], which includes similarity in values, interests, and personality, complemented by the similarity in worldview suggested by [START_REF] Ensher | Effects of Race, Gender, Perceived Similarity, and Contact on Mentor Relationships[END_REF]. Seven-point Likert scales were used and Cronbach's alpha was 0.897. A mean of all the items was calculated.

Learning goal orientation (LGO). The study used the measure developed by [START_REF] Button | Goal Orientation in Organizational Research: A Conceptual and Empirical Foundation[END_REF], which includes 8 items. Seven-point Likert scales were used. Cronbach's alpha was 0.927. A mean score of all the items was calculated.
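Each scale score is the mean of its Likert items, and reliability is summarized with Cronbach's alpha. The short sketch below illustrates how such a coefficient and the corresponding scale score can be obtained from raw item responses; it is an illustrative snippet rather than the analysis code used in the study, and the item matrix is simulated.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated example: 314 respondents answering the 8 LGO items on a 1-7 scale.
rng = np.random.default_rng(0)
lgo_items = rng.integers(1, 8, size=(314, 8))
alpha = cronbach_alpha(lgo_items)
scale_score = lgo_items.mean(axis=1)              # mean item score used in the analyses
print(f"alpha = {alpha:.3f}")
```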
Control variables. Certain exogenous variables may impact ESE, such as the respondents' gender [START_REF] Mueller | Gender-role orientation as a determinant of entrepreneurial self-efficacy[END_REF][START_REF] Wilson | An analysis of the role of gender and self-efficacy in developing female entrepreneurial interest and behavior[END_REF], age [START_REF] Maurer | Career-relevant learning and development, worker age, and beliefs about selfefficacy for development[END_REF], education level, and management experience. They were all included in the analysis. The research was conducted in French; all items were therefore translated into English and proofread by a professional translator to ensure the validity of the measures.

Common method bias

Using self-reported data and measuring both predictors and dependent variables may result in common method variance (CMV) [START_REF] Lindell | Accounting for common method variance in crosssectional research designs[END_REF][START_REF] Podsakoff | Common method biases in behavioral research: A critical review of the literature and recommended remedies[END_REF]. To reduce the possibility of CMV, we first ensured confidentiality for each respondent in order to reduce social desirability, respondent leniency, and the adoption of perceptions consistent with the researchers' objectives [START_REF] Podsakoff | Common method biases in behavioral research: A critical review of the literature and recommended remedies[END_REF]. We also performed Harman's single factor test as a post hoc check. This procedure involved conducting an unrotated exploratory factor analysis on all of the items collected for this study. Results indicate that the data converge into four factors, with the first factor explaining 26.87% of the variance. Furthermore, the data show negative or no correlation between the main variables (Table 2 shows no significant correlation between LGO and perceived similarity or mentor functions), which is unlikely to appear in data contaminated with CMV. Moreover, when the variables are too complex to be anticipated by the respondent, as observed in this study, the potential effects of social desirability, and therefore CMV, are reduced [START_REF] Podsakoff | Common method biases in behavioral research: A critical review of the literature and recommended remedies[END_REF]. Given that personality is usually measured through self-report instruments, the fact that we used a self-report questionnaire for LGO does not constitute a limitation of the current study [START_REF] Spector | Method variance in organizational research -Truth or urban legend?[END_REF]. We thus believe that the risk of CMV in the data used for the present study is relatively low.

Data analysis

A hierarchical regression analysis of ESE was conducted to test the hypotheses. We started by entering the control variables, then added the main effects of mentees' LGO, perceived similarity with the mentor, and mentor functions. We then entered the two-way interactions between the independent variables and ended with the three-way interaction. To compute the interaction terms while limiting collinearity, the relevant variables were mean-centred before being multiplied. After removing surveys with missing answers, the remaining sample comprised 314 respondents.
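As a minimal sketch of this modeling sequence (not the authors' code), the hierarchy of models could be fitted as follows; the data file and column names are hypothetical, and predictors are mean-centred before the product terms are built.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file with one row per mentee.
df = pd.read_csv("mentees.csv")

# Mean-centre the predictors before building product terms to limit collinearity.
for var in ["mentor_functions", "similarity", "lgo"]:
    df[var + "_c"] = df[var] - df[var].mean()

controls = "gender + age + education + mgmt_experience"
m1 = smf.ols(f"ese ~ {controls}", data=df).fit()
m2 = smf.ols(f"ese ~ {controls} + mentor_functions_c + similarity_c + lgo_c", data=df).fit()
m3 = smf.ols(f"ese ~ {controls} + mentor_functions_c * similarity_c"
             f" + mentor_functions_c * lgo_c", data=df).fit()
m4 = smf.ols(f"ese ~ {controls} + mentor_functions_c * similarity_c * lgo_c", data=df).fit()

for name, m in [("Model 1", m1), ("Model 2", m2), ("Model 3", m3), ("Model 4", m4)]:
    print(name, "adj. R2 =", round(m.rsquared_adj, 3))
```

In a formula, `a * b` expands to both main effects plus their product, so each successive model nests the previous one and the change in adjusted R2 isolates the contribution of the interaction terms.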
Results

Means, standard deviations and correlations between the variables are shown in Table 2. Table 3 reports the hierarchical regression of ESE. As expected, Model 1 takes into account the control variables (R²=0.069), Model 2 adds the main effects (R²=0.175), Model 3 takes into consideration the moderators (R²=0.268), and Model 4 adds the three-way interaction between the independent variables (R²=0.284). The hypotheses were supported in Model 4. Model 4 shows that age has a negative effect on ESE, whereas the level of education and prior management experience produced a positive impact on ESE (p=0.073). LGO is related to the ESE level (β=0.344, p=0.000), which confirms H2. The moderating effects of LGO (H3) and perceived similarity (H1) on ESE are also confirmed (β=-0.357, p=0.000 and β=0.205, p=0.008, respectively). Finally, the three independent variables jointly influence ESE, which confirms H4 (β=-0.160, p=0.023). Overall, the two-way and three-way interactions explain an additional 9.9% of the variance of ESE (Δ adj. R²=0.099).

Figure 2 shows that perceived similarity positively moderates the relation between mentor functions and ESE. When mentees perceive little similarity with their mentor, there is no shift in their ESE; in dyads where mentees perceive their mentor as highly similar, an increase in mentor functions increases mentees' ESE as well. Figure 4 illustrates the three-way interaction between the variables. When a mentee has a high LGO, the mentor functions lower his/her ESE, whatever the level of perceived similarity. For mentees with low LGO, mentor functions increase their ESE level. This effect is strongest when mentees perceive their mentors as similar, which indicates that mentoring relationships are most effective at enhancing mentees' ESE when mentees have a low LGO and a high level of perceived similarity with their mentor.

Implications

The present research results show the positive effects of mentor functions on mentees' ESE when perceived similarity with the mentor is high. This suggests that entrepreneurial role models may play a similar role in improving ESE as found with other types of support relationships, such as entrepreneur-in-residence programs and business incubators [START_REF] Christina | The Role of Entrepreneur in Residence towards the Students' Entrepreneurial Performance: A Study of Entrepreneurship Learning Process at Ciputra University, Indonesia[END_REF][START_REF] George | What is (the point of) an entrepreneur in residence? The Lancaster University experience, with some worldwide comparisons[END_REF], peer learning networks [START_REF] Kempster | Learning to lead in the entrepreneurial context[END_REF][START_REF] Kutzhanova | Skill-based development of entrepreneurs and the role of personal and peer group coaching in enterprise development[END_REF] and, more generally, in the context of public support for entrepreneurs [START_REF] Delanoë | From intention to start-up: the effect of professional support[END_REF][START_REF] Robinson | Supporting black businesses: narratives of support providers in London[END_REF]. Findings suggest that high and low LGO mentees do not share the same motivations when entering mentoring relationships. Mentees with low levels of LGO are looking for advice and approval relative to their entrepreneurial skills (reassurance motivation) because external feedback may enable them to go beyond their perceived abilities (guidance motivation). On the other hand, mentees with high LGO levels are probably looking for a mentoring relationship that may enable them to improve their skills by learning from their mentor's experience, a support relationship that may stimulate them in terms of new ideas and practices (motivation to be challenged).
The present research also demonstrates that low LGO mentees benefit most from mentors' help in improving their ESE. High LGO mentees experienced a higher ESE when mentor functions were lower; conversely, when mentor functions were fully exercised, these mentees' ESE had a tendency to decrease to the same ESE level as that of low LGO mentees. In other words, in an intense mentoring context (high mentor functions), mentees reported a similar level of ESE, regardless of their LGO levels. At first glance, one would be tempted to prevent high LGO novice entrepreneurs from being accompanied by a mentor, as it seems to lead to a reduction in their level of ESE. However, previous studies have demonstrated that some entrepreneurs are overly optimistic, and this has a negative effect on the survival of their business [START_REF] Lowe | Overoptimism and the performance of entrepreneurial firms[END_REF]. Moreover, [START_REF] Hmieleski | When does entrepreneurial self-efficacy enhance versus reduce firm performance?[END_REF] demonstrated that a high ESE has a negative effect on business performance when the entrepreneurs' optimism is high. In this perspective, mentoring could be useful for these entrepreneurs because it brings ESE to a level closer to the reality of the entrepreneurs' abilities, which could reduce errors committed due to overconfidence in their skills. Finally, our findings suggest that the positive effect of mentoring on mentees' ESE may be limited to the duration of the mentoring relationship for low LGO novice entrepreneurs. In other words, as long as low LGO mentees are involved in a mentoring relationship, they will probably feel more self-confident. However, once the mentoring relationship ends, they may experience a decrease in their ESE because of their need for constant external reassurance and support. This suggests that LGO is an important personal variable to consider in researching entrepreneurship support outcomes. In this regard, [START_REF] Dweck | Motivational effects on attention, cognition, and performance[END_REF] demonstrated that it is possible to develop specific training and support that effectively enhances the participants' LGO, which, in turn, has an important effect on their motivational processes, attention, cognition, and performance. Thus, an important practical implication of our findings is that mentors could learn how to counsel novice entrepreneurs with low levels of ESE and LGO, and help them not only improve their ESE level but also their LGO, thus securing an enduring increase in their ESE once the mentoring relationship ends. Discussion The present study has three main theoretical contributions. First, we demonstrate that the impact of mentors on mentees' ESE is moderated by the perceived similarity with the mentor, as previously assessed in entrepreneurial education contexts [START_REF] Laviolette | The impact of story bound entrepreneurial role models on self-efficacy and entrepreneurial intention[END_REF][START_REF] Lockwood | Superstars and me: Predicting the impact of role models on the self[END_REF][START_REF] Schunk | Developing children's self-efficacy and skills: The roles of social comparative information and goal setting[END_REF]. 
Prior research has stressed the positive effect of mentoring on mentees' ESE [START_REF] Gravells | Mentoring start-up entrepreneurs in the East Midlands -Troubleshooters and trusted friends[END_REF][START_REF] Kent | An evaluation of mentoring for SME retailers[END_REF][START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF] and the fact that mentors act as role models [START_REF] Barnir | Mediation and Moderated Mediation in the Relationship Among Role Models, Self-Efficacy, Entrepreneurial Career Intention, and Gender[END_REF]. We introduce the notion of upward comparison with the mentor to explain the importance of mentees' perceived similarity with the mentor, based on social comparison theory [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF][START_REF] Gibson | Role models in career development: New directions for theory and research[END_REF]. Second, our study demonstrates the importance of mentees' LGO in entrepreneurial mentoring relationships, because of its relationship with mentees' ESE. Prior research based on goal-orientation theory documented the relationship between LGO and ESE in other contexts [START_REF] Phillips | Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process[END_REF]. Our findings suggest that there is a strong relationship between LGO and the need for feedback [START_REF] Tuckey | The influence of motives and goal orientation on feedback seeking[END_REF][START_REF] Vandewalle | A goal orientation model of feedback-seeking behavior[END_REF][START_REF] Vandewalle | A test of the influence of goal orientation on the feedback-seeking process[END_REF], as the mean score for the level of mentees' LGO in our study is 6.24 (out of 7). However, another explanation for this high level of LGO may be that entrepreneurship, being a career with many challenges and difficulties [START_REF] Aspray | Positive illusions, motivations, management style, stress and psychological traits : A review of research literature on women's entrepreneurship in the information technology field[END_REF][START_REF] Grant | On being entrepreneurial: the highs and lows of entrepreneurship[END_REF], attracts individuals interested in learning and with a desire to improve their abilities. This latter explanation is probably more plausible, as previous research on LGO in a mentoring context found a mean mentee LGO score of 4.35 (out of 7) [START_REF] Egan | The Impact of Learning Goal Orientation Similarity on Formal Mentoring Relationship Outcomes[END_REF], and a study measuring the impact of LGO on entrepreneurial intentions found an LGO score of 5.198 (out of 7) [START_REF] De Clercq | The roles of learning orientation and passion for work in the formation of entrepreneurial intention[END_REF]. Additionally, prior research has shown that a high level of LGO combined with a high level of ESE is likely to lead to the choice of entrepreneurship as a career [START_REF] Culbertson | Enhancing entrepreneurship: The role of goal orientation and self-efficacy[END_REF]. In fact, a recent study indicated that LGO strengthens the relationship between ESE and entrepreneurial intention [START_REF] De Clercq | The roles of learning orientation and passion for work in the formation of entrepreneurial intention[END_REF].
Thus, LGO may be an important mindset that attracts and retains individuals in an entrepreneurial career, which suggests new research directions. Finally, the third contribution of the present study is that it provides evidence concerning the combined effects of mentor functions, mentees' LGO and perceived similarity with the mentor on mentees' ESE. We confirmed the fourth hypothesis relative to the positive impact of the mentor functions on the mentee's ESE when the mentor is perceived as highly similar and when the mentee's LGO is low. The research model explains 15.1% of the variance when considering main effects only (adj. R 2 ). Adding the interaction effects explains an additional 9.9% of the variance, for an R 2 final adjustment of 0.25. Findings confirm previous research relative to the positive correlation between the mentees' LGO, level of education, prior management experience, and ESE [START_REF] Bell | Goal orientation and ability: Interactive effects on selfefficacy, performance, and knowledge[END_REF][START_REF] Phillips | Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process[END_REF]. We found that a low level of LGO combined with a high level of perceived similarity significantly contributed to reinforcing novice entrepreneurs' ESE in a mentoring context. Our study has, however, several limitations. First, although LGO is highlighted as an important moderator to consider in the study of mentoring for entrepreneurs, we cannot confirm without a doubt that low/high LGO mentees have different motivations for entering a mentoring relationship. Our reasoning was guided by the theoretical framework of LGO and social comparison theory; however, further investigation of the reasons underlying the need for a mentor could bring additional confirmation of the underlying processes at play. Second, the present research assessed the impact of mentoring on mentees' ESE. However, not every entrepreneur has the desire to improve his/her ESE and novice entrepreneurs may seek mentoring for other cognitive or affective reasons. Thus, our final sample may include mentees who did not seek ESE development. Nevertheless, the reader should keep in mind that many other outcomes could be reached through mentoring and, as such, focusing on ESE development, despite highlighting specific processes at play, suggests a limited view of the potential effects of mentoring on the entrepreneurial process. The role of mentoring in improving opportunity identification, reducing loneliness and stress of novice entrepreneurs, or developing better managerial skills are also important research questions to be further explored. Third, we measured ESE development within a formal mentoring program. Given that mentors are trained and aware of the many aspects that could foster or hinder the effectiveness of mentoring, our findings cannot be extended to informal mentoring settings. Indeed, because informal mentors are generally well-known by their mentees before the beginning of the mentoring relationship, the former may be selected based on perceived similarity with the latter. Thus, our findings are most relevant for formal mentoring programs. Fourth, the study was not longitudinal, making it difficult to assess the mentoring effects on the development of mentees' ESE over time. Longitudinal research is thus necessary to better evaluate the contribution of personal and relational mentoring variables in terms of impact on mentees' ESE. 
Conclusion

Over the past decades, many mentoring programs have been launched in developed countries, and there is evidence that they may produce a wide range of outcomes [START_REF] Wanberg | Mentoring research: A review and dynamic process model[END_REF]. Prior research has also emphasized mentoring's contribution to novice entrepreneurs' personal development [START_REF] Edwards | Promoting entrepreneurship at the University of Glamorgan through formal and informal learning[END_REF][START_REF] Kent | An evaluation of mentoring for SME retailers[END_REF][START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF][START_REF] Sullivan | Turning experience into learning[END_REF] and to business success in terms of startup launching, fundraising and business growth [START_REF] Mcadam | Building futures or stealing secrets? Entrepreneurial cooperation and conflict within business incubators[END_REF][START_REF] Radu Lefebvre | How to Do Things with Words": The Discursive Dimension of Experiential Learning in Entrepreneurial Mentoring Dyads[END_REF][START_REF] Styles | Using SME intelligence in mentoring science and technology students[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF]. These programs invest time and energy in identifying mentees and mentors potentially interested in developing mentoring relationships. However, little attention is paid to the matching of mentors and mentees in terms of perceived similarity, or to the training that could be offered to mentors. The present research demonstrates that role-model identification needs to be secured by mentoring programs so as to ensure that novice entrepreneurs perceive their mentor as someone who is relevant, inspiring, and accessible. Mentoring programs could consider the similarity of mentors and mentees before making proposals concerning the composition of mentoring dyads. Mentors could also be informed of the importance of perceived similarity in mentoring relationships. Moreover, the predominance of male mentors may become an issue as more women entrepreneurs enter the market. Research indicates that gender matching of mentors and mentees is especially important for women [START_REF] Quimby | The influence of role models on women's career choices[END_REF]. Social identity theory [START_REF] Tajfel | Differentiation between social groups: Studies in the social psychology of intergroup relations[END_REF] and the similarity-attraction paradigm [START_REF] Byrne | The attraction paradigm[END_REF] predict more perceived similarity and identification in same-gender relationships. Another practical implication of these findings is that more attention should be paid to the matching of mentoring dyads in terms of learning motivations and learning orientation. Complementary mentoring relationships may thus develop with the help of a program manager, who could assist mentors in identifying mentees' learning needs so as to ensure more effective mentoring relationships with regard to their potential impact on mentees' ESE. Training should be provided to mentors to help them identify their mentees' needs and personal profiles more accurately and to adapt the delivery of mentor functions to those needs and motivations. Given that LGO can be enhanced through training, mentors may play a significant role in developing mentees' LGO and, by the same token, in fostering mentees' ESE.
Figure 1. Tested theoretical model

Figure 2. Moderating effect of perceived similarity on the interaction between mentor functions and ESE

Figure 3. Moderating effects of LGO on the interaction between mentor functions and ESE

Figure 4. Three-way interaction between mentor functions, LGO, and perceived similarity for the development of ESE

Table 1. Sample characteristics

Mentoring relationship characteristics
Male mentees: 162 (51.6%); Female mentees: 152 (48.4%)
Paired with male mentors: 275 (81.4%); Paired with female mentors: 63 (18.6%)
Mean mentoring relationship length: 16.07 months (SD=14.4)
Mean meeting length: 68.52 minutes (SD=14.4)
Median meeting frequency: Each month

Mentee characteristics
Mean age: 39.8 years old (SD=8.97)
Mentees with university degree: 173 (55%)
Experience in industry before startup: less than 5 years: 61.6%
Experience in entrepreneurship: less than 5 years: 82.9%

Firm characteristics
Mean number of employees: 4.48 (SD=9.69)
Annual turnover: less than $100,000 CAD: 62.8%
Annual gross profit: less than $25,000 CAD: 68.1%
Sector: Professional services 23.0%; Manufacturing 14.4%; Retailing 11.9%; Others 50.7%

Table 2. Means, Standard Deviations and Correlations of Variables

Variable | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7
1. Gender | 0.48 | 0.50 | 1.00 | | | | | |
2. Age | 39.81 | 8.97 | -0.01 | 1.00 | | | | |
3. Education | 2.53 | 0.94 | 0.12* | 0.08 | 1.00 | | | |
4. Managerial experience | 2.29 | 1.56 | -0.13* | 0.25* | -0.09 | 1.00 | | |
5. LGO | 6.24 | 0.88 | 0.12* | -0.05 | -0.02 | 0.04 | 1.00 | |
6. Perceived similarity | 4.71 | 1.40 | 0.01 | -0.14* | -0.09 | -0.01 | -0.00 | 1.00 |
7. Mentor functions | 5.39 | 1.15 | 0.06 | -0.14* | -0.00 | -0.03 | 0.01 | 0.61* | 1.00
8. Entrepreneurial self-efficacy (ESE, dependent variable) | 5.89 | 0.76 | 0.01 | -0.21* | 0.05 | 0.08 | 0.33* | 0.16* | 0.16*
* = p≤0.05

Table 3. Entrepreneurial Self-Efficacy Hierarchical Regression (standardized β for Models 1 to 4)
Dr Vercruyssen, email: [email protected]

Objectives: To investigate the effect of cadence selection during the final minutes of cycling on metabolic responses, stride pattern, and subsequent running time to fatigue.

Methods: Eight triathletes performed, in a laboratory setting, two incremental tests (running and cycling) to determine peak oxygen uptake (VO2PEAK) and the lactate threshold (LT), and three cycle-run combinations. During the cycle-run sessions, subjects completed a 30 minute cycling bout (90% of LT) at (a) the freely chosen cadence (FCC, 94 (5) rpm), (b) the FCC during the first 20 minutes and FCC-20% during the last 10 minutes (FCC-20%, 74 (3) rpm), or (c) the FCC during the first 20 minutes and FCC+20% during the last 10 minutes (FCC+20%, 109 (5) rpm). After each cycling bout, running time to fatigue (Tmax) was determined at 85% of maximal velocity.

Results: A significant increase in Tmax was found after FCC-20% (894 (199) seconds) compared with FCC and FCC+20% (651 (212) and 624 (214) seconds respectively). VO2, ventilation, heart rate, and blood lactate concentrations were significantly reduced after 30 minutes of cycling at FCC-20% compared with FCC+20%. A significant increase in VO2 was reported between the 3rd and 10th minute of all Tmax sessions, without any significant differences between sessions. Stride pattern and metabolic variables were not significantly different between Tmax sessions.

Conclusions: The increase in Tmax after FCC-20% may be associated with the lower metabolic load during the final minutes of cycling compared with the other sessions. However, the lack of significant differences in metabolic responses and stride pattern between the run sessions suggests that other mechanisms, such as changes in muscular activity, probably contribute to the effects of cadence variation on Tmax.

During triathlon racing (swim/cycle/run), the most critical and strategic aspect affecting overall performance is the change from cycling to running. [START_REF] Bentley | Specific aspects of contemporary triathlon: implications for physiological analysis and performance[END_REF][START_REF] Gottshall | The acute effects of prior cycling cadence on running performance and kinematics[END_REF][START_REF] Hausswirth | Relationships between mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF][START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF][START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] These studies have attempted to identify aspects of cycling that may improve running performance in triathletes. Drafting has been shown to be a beneficial cycling strategy which results in an improved subsequent running performance in elite triathletes. [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] More recently, the selection of cycling cadence during a cycle-run combination has been identified by researchers as an important variable that may affect overall performance. 1 2 5 6 Cadence selection has been reported to influence metabolic responses, kinematic variables, and performance during a cycle-run session.
However, the extent to which cadence selection affects subsequent maximal running performance during a cycle-run combination remains unclear. In a laboratory setting, Vercruyssen et al [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF] have shown that the adoption of a low cadence (73 rpm), corresponding to the energetically optimal cadence, reduced oxygen uptake (VO2) during a cycle-run session compared with the selection of higher cadences (80-90 rpm). These authors suggested that the choice of a low cadence (<80 rpm) before the cycle-run transition may be advantageous for the subsequent run. However, during field based investigations, Gottshall and Palmer 2 found an improved 3200 m track running performance after 30 minutes of cycling conducted at a high cadence (>100 rpm) compared with lower cadences (70-90 rpm) for a group of triathletes. It was suggested that the selection of a high cadence improved running performance through increased stride rate and running speed during the subsequent run. In contrast, Bernard et al [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] showed no effect of cycling cadence (60-100 rpm) and stride rate on a subsequent 3000 m running performance. These conflicting results indicate the difficulty of predicting the optimal cadence selection for a cycle-run session in trained triathletes. In most of the above experiments, the triathletes were required to cycle at either an imposed cadence (range 60-110 rpm) or a freely chosen cadence (range 80-90 rpm) which remained constant for the entire 30 minutes of the cycle bout. This lack of cadence variation does not reproduce race situations, during which the cadence may vary considerably, especially before the cycle-run transition. [START_REF] Bentley | Specific aspects of contemporary triathlon: implications for physiological analysis and performance[END_REF] Many triathletes attempt to optimise the change from cycling to running by selecting high cadences (>100 rpm) during the final kilometres of cycling. 1 2 6 Another strategy, however, may be the selection of a low cadence (<75 rpm) before the cycle-run transition, in order to conserve energy for the subsequent run. 4 5 To our knowledge, no data are available on cadence changes during the last few minutes before the cycle-run transition and their effects on subsequent running performance. Therefore the aim of this investigation was to examine, in a laboratory setting, the effect of cadence variations during the final 10 minutes of cycling on metabolic responses, stride pattern, and subsequent running time to fatigue in triathletes.

METHODS

Participants
Eight experienced male triathletes currently in training volunteered to take part in this experiment. All had regularly competed in triathlon racing at either sprint (0.75 km swim/20 km cycle/5 km run) or Olympic distances (1.5 km swim/40 km cycle/10 km run).

Maximal tests
Two incremental tests were used to determine peak oxygen uptake (VO2PEAK), maximal power output (Pmax), maximal running speed (Vmax), and lactate threshold (LT). Subjects performed cycling bouts on a racing bicycle mounted on a stationary turbo-trainer system. Variations in power output were measured using a "professional" SRM crankset system (Schoberer Rad Messtechnick, Fuchsend, Germany) previously validated in a protocol comparison using a motor driven friction brake.
[START_REF] Jones | The dynamic calibration of bicycle power measuring cranks[END_REF] Running bouts were performed on a motorised treadmill situated next to the cycle turbo-trainer. For cycling, the test bout began at an initial workload of 100 W for three minutes, after which the power output was increased by 40 W every three minutes until exhaustion. For the treadmill test, the initial running speed was fixed at 9 kph, with an increase in velocity of 1.5 kph every three minutes. For both cycling and running tests, there was a one minute rest period between each increment for the sampling of capillary blood (35 μl) from a hyperaemic earlobe. Blood samples were collected to determine plasma lactate concentration ([La-]) using a blood gas analyser (ABL 625; Radiometer Medical A/S, Copenhagen, Denmark). During these tests, VO2, minute ventilation (VE), and respiratory exchange ratio were continuously recorded every 15 seconds using Ametek gas analysers (SOV S-3A and COV CD3A; Pittsburgh, Pennsylvania, USA). The four highest consecutive VO2 values were summed to determine VO2PEAK. [START_REF] Bishop | The relationship between plasma lactate parameters, Wpeak and endurance cycling performance[END_REF] Pmax and Vmax were calculated as the average power output and running speed in the last three minutes completed before exhaustion. Heart rate (HR) was monitored every 10 seconds during each experimental session using an electronic HR device with a chest electrode (Polar Vantage NV; Polar Electro Oy, Kempele, Finland). The LT, calculated by the modified Dmax method, was determined as the point on the polynomial regression curve fitted to the lactate-workload data that yielded the maximal perpendicular distance to the straight line formed by the lactate inflection point (first increase in lactate concentration above the resting level) and the final lactate point. [START_REF] Bishop | The relationship between plasma lactate parameters, Wpeak and endurance cycling performance[END_REF]
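To make the Dmax construction concrete, the sketch below computes the threshold from a set of lactate-power points. It is an illustrative implementation under stated assumptions (a third order polynomial fit and an inflection point chosen by the analyst), not the processing used in the study, and the sample data are invented.

```python
import numpy as np

def modified_dmax(power, lactate, inflection_idx, degree=3):
    """Lactate threshold by the modified Dmax method.

    The threshold is the point on a polynomial fitted to lactate v power that lies
    furthest (perpendicular distance) from the straight line joining the lactate
    inflection point (first rise above resting values) and the final lactate point.
    """
    power, lactate = np.asarray(power, float), np.asarray(lactate, float)
    coeffs = np.polyfit(power, lactate, degree)            # polynomial lactate curve
    x = np.linspace(power[inflection_idx], power[-1], 500)  # search between the two anchor points
    y = np.polyval(coeffs, x)

    # Line through the inflection point and the last measured point.
    x1, y1 = power[inflection_idx], lactate[inflection_idx]
    x2, y2 = power[-1], lactate[-1]
    # Perpendicular distance from each curve point to that line.
    dist = np.abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / np.hypot(y2 - y1, x2 - x1)
    return x[np.argmax(dist)]                               # power output at LT (W)

# Invented example: lactate (mmol/l) at the end of each 40 W increment.
power = [100, 140, 180, 220, 260, 300, 340]
lactate = [1.1, 1.2, 1.5, 2.1, 3.4, 5.6, 9.0]
print(f"LT ~ {modified_dmax(power, lactate, inflection_idx=2):.0f} W")
```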
Cycle-run combinations
All triathletes completed, in random order, three cycle-run sessions, each composed of 30 minutes of cycling on a cycle turbo-trainer and a subsequent run to fatigue. A fan was placed in front of the subject during these experimental sessions. Before each experimental condition, subjects performed 15 minutes of warm up comprising 13 minutes at a low power output (100-130 W) and the last two minutes at the individual workload required during the cycle bout of the cycle-run sessions. After two minutes of rest, each triathlete completed a cycle bout at (a) the freely chosen cadence (FCC), (b) the FCC during the first 20 minutes and FCC-20% during the last 10 minutes (FCC-20%), or (c) the FCC during the first 20 minutes and FCC+20% during the last 10 minutes (FCC+20%). The FCC±20% range has previously been used during a 30 minute cycling exercise in triathletes. 9 10 Cycling bouts were performed at a power output corresponding to 90% of LT (266 (28) W), an intensity close to that reported in previous studies of the relation between cycling cadence and running performance. 5 6 FCC-20% was chosen to replicate cadence values close to the energetically optimal cadence previously noted in triathletes, [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF] and FCC+20% allowed us to reproduce cadence values close to those reported during cycling strategies before running. 1 2 6 Cadence and power output were monitored using the SRM power meter during all cycling bouts. No feedback was given to the subjects on their FCC over the three conditions. After each cycling bout, running time to fatigue (Tmax) was determined on the treadmill at a running speed corresponding to 85% of Vmax (>LT) for each athlete (16.7 (0.7) kph). 11 12 On the basis of previous experiments and the completion of pilot tests, this running intensity was chosen to induce fatigue in less than 20 minutes. All subjects were given verbal encouragement throughout each trial. Tmax was taken as the time at which the subject's feet left the treadmill as he placed his hands on the guardrails. The transition time between cycling and running was fixed at 45 seconds to reproduce the racing context. 1 6

Measurement of metabolic variables
VO2, VE, and HR were monitored and analysed during the following intervals: 3rd-5th minute of the cycling bout (3-5 min), 20th-22nd minute (20-22 min), 28th-30th minute (28-30 min), and every minute during the running sessions. Five blood samples were collected at the following times: before the warm up, at 5, 20, and 30 minutes during cycling, and at the end of Tmax.

Measurement of kinematic variables
Power output and cycling cadence were continuously recorded during the cycling bouts. For each running session, a 50 Hz digital camera was mounted on a tripod 4 m away from the motorised treadmill. The treadmill speed and the period between two successive ground contacts of the same foot were then determined using a kinematic video analysis system (SiliconCoach Pro Version 6, Dunedin, New Zealand). From these values, stride pattern characteristics, that is, stride rate (Hz) and stride length (m), were calculated every 30 seconds during the first five minutes and the last two minutes of the Tmax sessions (a short computational sketch is given after the statistical analysis paragraph).

Statistical analysis
All data are expressed as mean (SD). A two way analysis of variance with repeated measures was performed to analyse the effects of cadence selection (FCC, FCC-20%, FCC+20%) and time during the cycle-run sessions, using VO2, VE, HR, [La-], stride rate, stride length, cadence, and power output as dependent variables. A Tukey post hoc test was used to determine any differences between the cycle-run combinations. Differences in Tmax between the three experimental conditions were analysed by one way analysis of variance. A paired t test was used to analyse differences in VO2PEAK, HRpeak, and VO2 at LT between the two maximal tests. Statistical significance was set at p<0.05.
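As an illustration of how the stride variables follow from the video measurements, the sketch below converts ground contact times of one foot into stride rate and stride length at a given treadmill speed. The contact times are invented and this is not the SiliconCoach processing itself.

```python
import numpy as np

def stride_pattern(contact_times_s, treadmill_speed_kph):
    """Stride rate (Hz) and stride length (m) from successive ground contacts of the same foot."""
    contact_times_s = np.asarray(contact_times_s, float)
    stride_periods = np.diff(contact_times_s)     # time between two contacts of the same foot (s)
    stride_rate = 1.0 / stride_periods.mean()     # strides per second (Hz)
    speed_ms = treadmill_speed_kph / 3.6          # km/h -> m/s
    stride_length = speed_ms / stride_rate        # metres covered per stride
    return stride_rate, stride_length

# Invented example at the mean running speed used in the study (16.7 kph).
contacts = [0.00, 0.67, 1.34, 2.01, 2.68]         # seconds, same-foot contacts
rate, length = stride_pattern(contacts, 16.7)
print(f"stride rate = {rate:.2f} Hz, stride length = {length:.2f} m")
```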
RESULTS

Maximal tests
No significant differences in VO2PEAK were observed between the cycling and running tests (table 1). However, HRpeak and VO2 at LT were significantly higher during running than during the maximal cycling bout (+2.9% and +15.8% respectively).

Cycling bouts of cycle-run sessions
No significant variation in FCC was observed during the first 20 minutes of the three cycling bouts (table 2). In addition, mean power output values were not significantly different between the cycling bouts (264 (30), 263 (28), and 261 (29) W respectively for FCC, FCC-20%, and FCC+20%). These data show that subjects adhered to the experimental design with respect to the required power output-cadence combination. A significant effect of exercise duration (between the 3-5 and 28-30 min intervals) was observed on VO2, VE, and HR during the FCC and FCC+20% bouts, whereas no significant variation in these metabolic variables was identified with exercise duration during the FCC-20% condition (table 3). Moreover, mean VO2, VE, and HR were significantly lower at FCC-20% than at FCC+20% during the 28-30 min interval (-5.3%, -18.2%, and -6.8% respectively). [La-] was significantly higher during the 28-30 min interval at FCC+20% compared with FCC (+31.2%) or FCC-20% (+55.5%).

Running bouts of cycle-run sessions
A significant increase in Tmax was observed only after the FCC-20% modality when compared with both the FCC+20% and FCC conditions (+43.3% and +37.3% respectively; fig 1). Tmax values were 624 (214), 651 (212), and 894 (199) seconds after the FCC+20%, FCC, and FCC-20% modalities respectively. A significant increase in ΔVO2 (that is, between the 3rd and 10th minute) was found during the Tmax completed after FCC (+6.1%), FCC+20% (+6.7%), and FCC-20% (+6.5%) (table 4). However, mean VO2, VE, HR, and [La-] were not significantly different between the three Tmax sessions (table 4). No significant difference in stride pattern was observed during the Tmax sessions whatever the prior cadence selection (fig 2). Mean stride rate (Hz) and stride length (m) were 1.49 (0.01) and 3.13 (0.02), 1.48 (0.01) and 3.13 (0.03), and 1.49 (0.01) and 3.15 (0.02) during the Tmax sessions subsequent to the FCC, FCC-20%, and FCC+20% bouts respectively.

DISCUSSION

The main findings of this investigation show a significant increase in Tmax when the final 10 minutes of cycling are performed at FCC-20% (894 seconds) compared with FCC (651 seconds) and FCC+20% (624 seconds). Several hypotheses are proposed to explain the differences in Tmax reported during the various cycle-run combinations for this group of triathletes. A number of studies have analysed the characteristics of cycle-run sessions in triathletes, with particular focus on physiological and biomechanical aspects of the subsequent run. [START_REF] Bentley | Specific aspects of contemporary triathlon: implications for physiological analysis and performance[END_REF] For instance, during a running session after cycling, a substantial increase in energy cost, VE, and HR, and differences in muscle blood flow have been observed compared with an isolated run. 1 3 5 6 Moreover, variations in running kinematics such as stride rate, segmental angular position, and joint angle have been shown after a cycle bout. 3 5 These running alterations, which have been linked to the effects of exercise duration and the cycle-run transition, were reported during treadmill sessions conducted at a submaximal intensity rather than during a high intensity running bout. In this study we investigated these effects at a high intensity, close to the running speed previously observed during a short cycle-run combination in triathletes. [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF]

Metabolic hypotheses
The Tmax values of this investigation are comparable to those previously reported during an exhaustive isolated run performed at an intensity corresponding to 85-90% of VO2MAX.
[START_REF] Avogadro | Changes in mechanical work during severe exhausting running[END_REF][START_REF] Billat | The VO2 slow component for severe exercise depends on type of exercise and is not correlated with time to fatigue[END_REF][START_REF] Candau | Energy cost and running mechanics during a treadmill run to voluntary exhaustion in humans[END_REF] It has previously been reported that metabolic and muscular factors are potential determinants of middle distance running performance and/or exhaustive treadmill sessions in trained subjects. [START_REF] Borrani | Is the VO2 slow component dependent on progressive recruitment of fast-twitch fibers in trained runners?[END_REF][START_REF] Brandon | Physiological factors associated with middle distance running performance[END_REF][START_REF] Paavolainen | Neuromuscular characteristics and muscle power as determinants of 5-km running performance[END_REF][START_REF] Paavolainen | Neuromuscular characteristics and fatigue during 10 km running[END_REF][START_REF] Prampero | Factors limiting maximal performance in humans[END_REF] With respect to metabolic factors, the improvement in T max observed after FCC220% may be related to changes in energy contribution. In support of this hypothesis, it has been reported that the determinants of maximal performances in middle distance running may be linked to the energy requirement for a given distance and the maximal rate of metabolic energy output from the integrative contribution of aerobic and anaerobic systems. 15 18 During submaximal and maximal running, the VO 2 variation has been reported to reflect the relative contribution from the aerobic and anaerobic sources. [START_REF] Brandon | Physiological factors associated with middle distance running performance[END_REF] In the context of a cycle-run session, Bernard et al [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] have reported that triathletes were able to sustain a higher fraction of VO 2 MAX during a 3000 m track run performed after cycling at 60 rpm than during cycling at 80 and 100 rpm. These authors suggested that a greater contribution of the aerobic component, during running after the choice of a low cadence, may delay fatigue for longer running distances. In this investigation, the analysis of VO 2 may also provide information on possible changes in aerobic contribution during high intensity running. Given the range of T max values, the metabolic variables were analysed during the first 10 minutes of each running session, corresponding approximately to the mean T max values reported after the FCC and FCC+20% modalities (fig 1). The evaluation of this time interval indicates no significant differences in VO 2 between the T max sessions, suggesting that the determination of T max in this study was not affected by changes in metabolic energy from the aerobic or anaerobic systems. There was, however, a significant increase in VO 2 between the 3rd and 10th minute (6.1-6.7%) during the three T max sessions, regardless of the prior experimental condition (table 4). During exercise lasting less than 15 minutes, the continual rise in VO 2 beyond the 3rd minute has been termed the VO 2 slow component (VO 2SC ). 5 11 19 20 The occurrence of a VO 2SC is classically observed during heavy running and cycling exercises associated with a sustained lactic acidosis-that is, above the LT. 
19 21 22 Postulated mechanisms responsible for this VO2SC include rising muscle temperature (Q10 effect), cardiac and ventilatory muscle work, lactate kinetics, catecholamines, and the recruitment of less efficient type II muscle fibres. [START_REF] Poole | Determinants of oxygen uptake[END_REF] Within this framework, Yano et al [START_REF] Yano | Relationship between the slow component of oxygen uptake and the potential reduction in maximal power output during constant-load exercise[END_REF] suggested that muscular fatigue may be one of the factors that produce the development of a VO2SC during high intensity cycling exercise. However, several investigators have examined the influence of prior exercise on the VO2 response during subsequent exercise. [START_REF] Burnley | Effects of prior heavy exercise on phase II pulmonary oxygen uptake kinetics during heavy exercise[END_REF][START_REF] Gerbino | Effects of prior exercise on pulmonary gasexchange kinetics during high-intensity exercise in humans[END_REF][START_REF] Scheuermann | The slow component of O2 uptake is not accompanied by changes in muscle EMG during repeated bouts of heavy exercise in humans[END_REF] Burnley et al [START_REF] Burnley | Effects of prior heavy exercise on phase II pulmonary oxygen uptake kinetics during heavy exercise[END_REF] showed that the magnitude of VO2 kinetics during heavy exercise was affected only by a prior bout of heavy exercise. On the basis of similar results, it has been suggested that muscle perfusion and/or O2 off loading at the muscle may be improved during the second of two successive bouts of heavy exercise. 25 26 In addition, changes in the VO2 response may be accentuated by the manipulation of cadence during an isolated cycling bout. [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] Gotshall et al [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] showed an increase in muscle blood flow and a decrease in systemic vascular resistance with increasing cadence (from 70 to 110 rpm). These previous experimental designs, based on the characteristics of combined and isolated exercises, are similar to the current one and suggest that cadence selection may affect blood flow and hence the VO2 response during a subsequent run. For instance, the increased muscle blood flow at high cycling cadence [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] during a prior cycle bout could attenuate the magnitude of the VO2SC during subsequent running. In contrast with these earlier studies, the VO2SC values in this investigation were not significantly different between the Tmax sessions during the first 10 minutes of exercise. This was observed despite differences in metabolic load and cadence selection during the previous cycling bouts. These results indicate that the adoption of FCC-20% is associated with a reduction in metabolic load as cycling progresses, but does not affect the VO2SC during the subsequent run. For instance, the selection of FCC-20% was associated with a significant reduction in VO2 (-5.3%), VE (-18.2%), HR (-6.8%), and [La-] (-55.5%) during the final 10 minutes of cycling compared with FCC+20%, without any significant change in the VO2SC during subsequent running between the two conditions.
This suggests that the chosen cadences do not affect the VO 2 responses during the subsequent run and also that the occurrence of a VO 2SC does not contribute to the differences in T max found in this study. This is consistent with previous research on trained subjects. [START_REF] Billat | The VO2 slow component for severe exercise depends on type of exercise and is not correlated with time to fatigue[END_REF] Muscular and stride pattern hypotheses Although we conducted no specific analysis of muscular parameters, an attractive hypothesis to explain the differences in T max between conditions is that they are due to differences in the muscular activity or fatigue state during cycle-run sessions. Muscular contractions differ during cycling and running. Cycling is characterised by longer phases of concentric muscular contraction, whereas running involves successive phases of eccentric-concentric muscular action. [START_REF] Bijker | Differences in leg muscle activity during running and cycling in humans[END_REF] Muscle activity during different modes of contraction can be assessed from the variation in the electromyographic signal. In integrated electromyography based investigations, it has been shown that muscles such as the gastrocnemius, soleus, and vastus lateralis are substantially activated during running. 14 17 28 Any alterations in the contractile capability of these muscles may have affected the ability to complete a longer T max during the cycle-run sessions in this study. Furthermore, many studies have reported substantial changes in muscular activity during isolated cycling exercises, especially when cadence is increased or decreased. [START_REF] Ericson | Muscular activity during ergometer cycling[END_REF][START_REF] Marsh | The relationship between cadence and lower extremity EMG in cyclists and noncyclists[END_REF][START_REF] Neptune | The effect of pedaling rate on coordination in cycling[END_REF][START_REF] Takaishi | Optimal pedaling rate estimated from neuromuscular fatigue for cyclists[END_REF] With respect to the cycle-run combination, the manipulation of cadence may accentuate modifications in muscular activity during cycling and influence the level of fatigue during a subsequent run. Marsh and Martin [START_REF] Marsh | The relationship between cadence and lower extremity EMG in cyclists and noncyclists[END_REF] showed a linear increase in electromyographic activity of the gastrocnemius and vastus lateralis muscles when cadences increased from 50 to 110 rpm. Although activity of the gastrocnemius muscle has been shown to increase considerably more than the soleus muscle as cadence is increased, 30 31 Ericson et al [START_REF] Ericson | Muscular activity during ergometer cycling[END_REF] have also reported a significant increase in soleus muscle activity with the selection of high cadences. These results from isolated cycling exercises conducted in a state of non-fatigue suggest that, during the last 10 minutes of the cycling bout of our study, there was greater recruitment of the vastus lateralis, gastrocnemius, and soleus muscles after cycling at higher cadences. This may have resulted in an increase in fatigue of these muscles, which are substantially activated during subsequent running.
In contrast, the lower activity of the vastus lateralis, gastrocnemius, and soleus muscles after the FCC-20% condition may have reduced the fatigue experienced during cycling and resulted in improved utilisation of these muscles during the subsequent run. This may have contributed to the observed increase in T max for this condition. Nevertheless, Lepers et al 10 suggested that the neuromuscular fatigue observed after 30 minutes of cycling was attributable to both central and peripheral factors but was not influenced by the pedalling rate in the range FCC±20%. In this earlier study, the selected power outputs (>300 W) for all cadence conditions were significantly higher than those used in our experiment (260-265 W). The choice of high power outputs during cycling 10 may result in attenuation of the differentiated effects of extreme pedalling cadences on the development of specific neuromuscular fatigue. Further research is required to analyse the relation between various pedalling strategies and muscular recruitment patterns specific to a short cycle-run session (<1 hour). The analysis of movement patterns during the cycle-run sessions also indicates that possible changes in muscle activity may be associated with modifications in kinematic variables. [START_REF] Hausswirth | Relationships between mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF] Hausswirth et al 3 reported significant variations in stride rate-stride length combination during a run session subsequent to a cycling bout compared with an isolated run. These modifications were attributed to local muscle fatigue from the preceding cycle. In the present study, the absence of significant differences in stride pattern during running (fig 2), regardless of the prior cadence selection, indicates that there is no relation between stride pattern and running time to fatigue. These results are consistent with previous results from a laboratory setting where the running speed was fixed on a treadmill after various cadence selections. [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF] In contrast, in field based investigations, in which the running speed and stride pattern were freely selected by the athletes, Gotshall and Palmer 2 found that cycling at 109 rpm, compared with 71 and 90 rpm, during a 30 minute cycle session resulted in an increased stride rate and running speed during a 3200 m track session. However, these results are in contrast with those of Bernard et al [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] indicating an effect of the prior cadence on stride pattern only during the first 500 m and not during the overall 3000 m run. The relation between stride pattern, cycling cadence, and running performance is not clear. Further investigation is required to elucidate the mechanisms that affect running performance during a cycle-run session. In conclusion, this study shows that the choice of a low cadence during the final minutes of cycling improves subsequent running time to fatigue. The findings suggest that metabolic responses related to VO 2 do not explain the differences in running time to fatigue. However, the effect of cadence selection during the final minutes of cycling on muscular activity requires further investigation. From a practical standpoint, the strategy to adopt a low cadence before running, resulting in a lower metabolic load, may be beneficial during a sprint distance triathlon.
What is already known on this topic: Various characteristics of cycle-run sessions in triathletes have been studied, with particular focus on physiological and biomechanical aspects during the subsequent run. During a running session after cycling, a substantial increase in energy cost, minute ventilation, and heart rate, and differences in muscle blood flow have been observed compared with an isolated run. Moreover, variations in running kinematics such as stride rate, segmental angular position, and joint angle have been shown after a cycle bout.
What this study adds: This study shows that the choice of a low cadence during the final minutes of cycling improves subsequent running time to fatigue.
Figure 1 Running time to fatigue after the selection of various cycling cadences. Values are expressed as mean (SD). *Significantly different from the other sessions, p<0.05.
Figure 2 Variations in stride rate during the running time to fatigue after the selection of various cycling cadences. T, stride rate obtained at T max; T-1, stride rate obtained at T max - 1 min; T-2, stride rate obtained at T max - 2 min.
Table 2 Cadence and power output values during the three cycling bouts (FCC-20%, FCC, FCC+20%) at different time periods: 3-5, 20-22, 28-30 min. Values are mean (SD). *Significantly different from the first 20 minutes, p<0.05. †Significantly different from the other conditions at the same time period, p<0.05.
Table 3 Variations in mean oxygen uptake (VO 2 ), minute ventilation (VE), heart rate (HR), and blood lactate concentration ([La-]) during the three cycling bouts, at different time periods: 3-5, 20-22, 28-30 min. Values are expressed as mean (SD). *Significantly different from the 3-5 min interval, p<0.05. †Significantly different from the 20-22 min interval, p<0.05. ‡Significantly different from FCC-20% at the same period, p<0.05. §Significantly different from FCC+20% at the same period, p<0.05.
Table 4 Variations in mean oxygen uptake (VO 2 ), ΔVO 2 (10th-3rd min), minute ventilation (VE), heart rate (HR), and blood lactate concentration ([La-]) during the three running sessions performed after cycling. Values are expressed as mean (SD). *Significantly different between the 3rd and 10th minute of exercise, p<0.05.
ACKNOWLEDGEMENTS We gratefully acknowledge all the triathletes who took part in the experiment for their great cooperation and motivation.
Authors' affiliations: F Vercruyssen, J Brisswalter, Department of Sport Ergonomics and Performance, University of Toulon-Var, BP 132, 83957 La Garde cedex, France; R Suriano, D Bishop, School of Human Movement and Exercise Science, University of Western Australia, Crawley, WA 6009, Australia; C Hausswirth, Laboratory of Physiology and Biomechanics, National Institute of Sport and Physical Education, 11, avenue du Tremblay, 75012 Paris, France.
Competing interests: none declared
33,782
[ "752657", "1012603", "1029443" ]
[ "303091", "4177", "4177", "441096", "303091" ]
00176025
en
[ "shs", "scco", "sde" ]
2024/03/05 22:32:13
2008
https://shs.hal.science/halshs-00176025/file/Flachaire_Hollard_06c.pdf
Emmanuel Flachaire Guillaume Hollard email: [email protected] Individual sensitivity to framing effects Keywords: starting-point bias, wta-wtp divergence, social representation JEL Classification: C81, C90, H43, Q51 Introduction It has long been recognized that the design of a survey may influence respondents' answers. In the particular case in which respondents have to estimate numerical values, this implies that two different surveys may lead to two different valuations of the same object. Such a variation of answers, induced by non-significant change in the survey design, is called a framing effect. Consequently, surveys are sometimes viewed with suspicion when used to provide economic values, since framing effects may alter the quality of survey-based valuation. The existence of these effects is well documented [START_REF] Levin | All frames are created equal : a typology and critical analysis of framing effects[END_REF]. However, the extent to which they may vary between individuals has received little attention. Are some individuals less sensitive to framing effects than others? How can they be detected? These are the questions addressed in this paper. Our basic idea is to use the theory of social representation to assign to each individual a new variable. This variable represents a proxy for the individual's sensitivity to framing effects. According to this representation variable, we isolate two types of individuals. The first type is proved to be less sensitive to framing effects than the other. We examine two framing effects which are known to have a dramatic effect on valuation, namely, starting-point bias and willingness to pay (WTP) and willingness to accept (WTA) divergence. The results suggest that taking into account heterogeneous sensitivity to framing effects is successful in limiting the impact of biases. Furthermore, they show that the constructed representation variable is not correlated to any of the usual variables. Thus, using the representation variable allows researchers to gather relevant new information. The paper is organized as follows. Section 2 details how social representation can be used to design a new individual variable. Section 3 presents a study of the problem of starting-point bias in contingent valuation surveys. Section 4 deals with WTA and WTP divergence. Section 5 provides a discussion, and Section 6 concludes. Representation as a source of heterogeneity Representations are defined in a broad sense by social psychologists as a form of knowledge that serves as a basis for perceiving and interpreting reality, as well as for guiding one's behavior. Representation could concern a specific object, or a more general notion of social interest1 . The founding work [START_REF] Moscovici | La psychanalyse, son image et son public[END_REF] explores the representation of psychoanalysis. In the following decades, various topics have been investigated: representation of different cities, madness, remarkable places, hunting, AIDS, among others (see the different articles presented in Farr and[START_REF] Farr | Social representations[END_REF][START_REF] Moscovici | Psychologie Sociale[END_REF]).
The theory of representation has proved useful in the study of economic subjects such as saving and debt [START_REF] Viaud | A positional and representational analysis of consumption: Households when facing debt and credit[END_REF], or the electronic purse [START_REF] Penz | It's practical but no more controllable": Social representation of the electronic purse in Austria[END_REF]. The basic structure of a social representation is composed of a central core and of peripheral elements [START_REF] Abric | Central system, peripheral system : their fonctions and roles in the dynamics of social representation[END_REF]. The central core contains the most obvious elements commonly associated with the object. They can be viewed as stereotypes or common sense. Those elements are not subject to any dispute, as everyone agrees that they are related to the object described. The core in itself does not contain much information and usually is not a surprise to an observer. The peripheral elements, however, contain fewer consensual elements and are less obvious. They represent potential changes in the social representation and indicate new elements that may in the near future become part of the core. They are, somehow, rivals of the core elements. There are several ways to explore the composition of social representations of particular subjects (namely, ethnography, interviews, focus-groups, the content analysis of the media, questionnaires and experiments). In what follows, we will focus on a particular technique, which is the statistical analysis of word associations. These word associations are gathered through answers to an open-ended question such as "What are the words that come to mind when thinking of [the object]?" or "What does [the object] evoke to you?". Thus, the purpose of such questions is to investigate the words being spontaneously associated with a given object. The next step is thus to determine the core of the social representation, on the basis of those individual answers. Once the core has been found, we sort individuals according to those who refer to the core of the social representation and those who don't. This "aller-retour" between social and individual representations can be compared to an election system where individual opinions are aggregated, using majority voting. Once individuals have voted, it is possible to recognize who belongs to the majority and who doesn't. All in all, the task is to transform representations (i.e. lists of words) into a quantitative and individual variable. The method consists of four steps, each of which is illustrated with an example, namely the Camargue representation 2 . The Camargue is a major wetland in the delta of the Rhône (south of France) covering 75.000 hectares. Of exceptional biological diversity, it hosts many fragile ecosystems and is inhabited by numerous species. The survey was administered to 218 visitors to the Camargue at the end of their visit 3 . Note that the respondents had therefore spent some time in the Camargue. Step 1: The data: collecting lists of words The usual way to collect information on representation is by open-ended questions. More precisely, we use a question such as: "What does [the object] evoke to you?" or "What are the words that come to mind when thinking of [the object]?". Individuals are expected to provide a list of words or expressions. Thus, the data take the form of ordered lists of words. 
The set of answers typically displays a large number of different words, as each individual provides different answers. Indeed, a great variety of words can be used to describe a given object [START_REF] Vergès | Approche du noyau central: propriétés quantitatives et structurales[END_REF][START_REF] Wagner | Theory and method of social representations[END_REF]. Application: In our questionnaire, respondents were asked: "What words come to your mind when you think about the Camargue?" More than 300 different words or expressions have been obtained. Step 2: Classification: choosing a typology for words An individual representation is captured through an ordered list of words. The high number of different words (say 100 to 500) imposes a categorization, i.e. putting together words that are "close" enough. Choosing a particular categorization thus consists in defining a particular typology for the set of words. Empirical applications typically use six to ten categories, which are chosen so as to form homogeneous categories. This step is the only one which leaves the researcher with some degree of freedom, since the notion of proximity is not straightforward. After categorization, each individual's answer is transformed into an ordered list of categories (rather than a list of words). At the end of this categorization, we are left with individual representations containing doubles, that is, with several attributes belonging to the same category. To obtain transitive individual representations, we suppress the lower-ranking citations belonging to the same category. Such treatment eliminates some information. In our case, the length of individual answers decreased by 20%. After treatment (i.e. categorization + suppression of doubles), individual representations boil down to an individual ranking of the set of categories. Application: A basic categorization by frame of reference leads to eight different categories. For instance, the first category is called Fauna and Flora. It contains all the attributes which refer to the animals and local vegetation of the Camargue (fauna, 62 citations, birds, 44, flora, 44, bulls, 37, horses, 53, flamingos, 36, . . . ). The other categories are Landscape, Disorientation, Isolation, Preservation, Human presence and Coast. A particular case is the category Nature which only contains the word nature which can hardly fall into any of the previous categories. There is a ninth category which clusters all attributes which do not refer to any of the categories mentioned above4 . Step 3: Finding the core The simplest way of determining the core element is to classify the different categories according to their citation rate. The core is thus composed of the category that is most widely used by individuals. This is in accordance with the definition of the core as the most consensual and widely accepted elements associated with a given object. Application: After consolidating the data in step 2, we were left with 218 ordered lists of categories. We computed the number of appearances for each category. The results are presented in Table 1. Step 4: Sorting individuals We choose to isolate individuals who do not mention the top element of the social representation (i.e. the core of the social representation). This leads to a breakdown of individuals into two sub-samples: one which contains the individuals who used the core element in their representation, and one which contains the individuals who did not.
The main reason for this is that it is remarkable not to mention any of the core elements. It is thus assumed that not mentioning the core is indeed significant. Since it does not conform to most common practice, this group is often referred to as "minority". The other group, which mentions the core element, is referred to as "mainstream". Application: In the case of the Camargue, the subjects were interviewed at the end of their visit and had seen a lot of animals and plants (they could even see some of them while being interviewed). A small minority of individuals did not refer to Fauna and Flora (18% of the total population, see Table 1). Given these four steps, we are left with two categories of individuals. This leads to a breakdown of individuals into two sub-samples: those who refer to the core of the social representation (mainstream) and the others (minority). We can define a mainstream dummy variable, which can be used to control for the sensitivity to framing effects. To do so, existing models have to be adapted. In the following, we use this new variable with empirical data, considering two standard framing effects, starting point bias and WTA-WTP divergence. Starting-point bias In contingent valuation, respondents are asked if they are willing to pay a fixed sum of money for a given policy to be implemented. This discrete choice format is recommended by the NOAA panel over other methods (a panel of experts that sets guidelines to run evaluation surveys, see [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]). This "take it or leave it" format mimics a market situation which individuals face in everyday market transactions, and it is incentive-compatible. However, a major drawback is that it leads to a qualitative dependent variable (the respondent answers yes or no), which reveals little about the individual's willingness-to-pay (WTP). To gather more information on respondents' WTP, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF] proposed adding a follow-up discrete choice question to improve the efficiency of discrete choice questionnaires. This mechanism is known as the double bounded model. It basically consists of proposing a second bid to the respondent, greater than the first bid, if the respondent's first answer is yes, and lower otherwise. Several studies have found that estimates of the mean of willingness-to-pay are substantially different from estimates based on the first question alone. This is the so-called starting-point bias [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], which can be seen as a particular case of the anchoring bias put forward by [START_REF] Tversky | Judgment under uncertainty: Heuristics and biases[END_REF] 6 . Different models have been proposed in the literature to control for such undesirable effects. However, empirical results suggest that efficiency gains obtained with follow-up questions are lost relative to models using first questions only. All these models assume that all individuals are equally sensitive to starting-point bias. In this section, we consider that some individuals may be more sensitive than others to starting-point bias. We develop a model to handle starting-point bias with heterogeneity in two groups.
An application shows that, with individual sensitivity to starting-point bias, we can control for starting-point bias with efficiency gains. Model Different models are proposed in the literature to control for starting-point bias in double-bounded models. All these models assume that the second answer is sensitive to the first bid offer, in the sense that a prior willingness-to-pay W_i is used by the respondent i to respond to the first bid offer, b_1i, and an updated willingness-to-pay W'_i is used to respond to the second bid, b_2i. Each model leads to a specific definition of W'_i. Whitehead (2002) proposes a general model, combining several effects, as follows:
W'_i = W_i + γ (b_1i - W_i) + δ,    (1)
where γ and δ are two parameters. If δ = 0 this model corresponds to the Anchoring model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], where the respondents combine their prior WTP with the value provided by the first bid, such that the first bid offer plays the role of an anchor. The parameter γ measures the strength of the anchoring effect (0 ≤ γ ≤ 1). If γ = 0 this model corresponds to the Shift model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], where the WTP systematically shifts between the two answers. The first bid offer is thus interpreted as providing information about the cost or the quality of the object: a respondent can interpret a higher bid offer as paying more for the same object and a lower bid offer as paying less for a lower quality object. The model (1) combines both Anchoring and Shift effects. In model (1), all individuals are supposed to be sensitive to the first bid offer in the same manner: the two parameters γ and δ are constant across individuals. If only some respondents are influenced by the first bid (i.e. they combine their prior WTP with the first bid), while the others do not, individual heterogeneity is present. It is well known that econometric estimation of regression models can be seriously misleading if such heterogeneity is not taken into account. Let us assume that we can divide respondents into two distinct groups: one group subject to starting-point bias and another insensitive. We define a Heterogeneous model as
W'_i = W_i                            if I_i = 0
W'_i = W_i + γ (b_1i - W_i) + δ       if I_i = 1    (2)
where I_i is a dummy variable which is equal to 1 when individual i belongs to one group and 0 if he belongs to the other group. Note that, if I_i = 1 for all respondents, this model reduces to the Anchoring & Shift model; if I_i = 0 for all respondents, it reduces to the standard Double-bounded model. These models can be estimated with random effect probit models, taking into account the dynamic aspect of follow-up questions [START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], Whitehead 2004). Estimation requires simulated methods and a formal definition of the probability that the individual i answers yes to the j th question, j = 1, 2. For the heterogeneous model (2), we calculate this probability, which is equal to:
P(W_ji > b_ji) = Φ( X_i α - b_ji/σ + θ (b_1i - b_ji) I_i D_j + λ I_i D_j )    (3)
where D_1 = 0 and D_2 = 1, α = β/σ, θ = γ/(σ - γσ) and λ = δ/(σ - γσ).
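The reparameterisation in (3) follows directly from (1)-(2); the short worked derivation below is a sketch, assuming W_i = X_i β + ε_i with error standard deviation σ, as in the random effect probit setup above.

% Sketch: where theta and lambda in eq. (3) come from (second question, I_i = 1)
\begin{align*}
W'_i &= (1-\gamma)\,W_i + \gamma\, b_{1i} + \delta, \qquad W_i = X_i\beta + \varepsilon_i,\\
P(W'_i > b_{2i})
  &= P\!\left(X_i\beta + \varepsilon_i > \frac{b_{2i} - \gamma b_{1i} - \delta}{1-\gamma}\right)\\
  &= \Phi\!\left(X_i\alpha - \frac{b_{2i}}{\sigma}
      + \frac{\gamma\,(b_{1i}-b_{2i})}{\sigma(1-\gamma)}
      + \frac{\delta}{\sigma(1-\gamma)}\right),
\end{align*}
% which is eq. (3) with theta = gamma/(sigma - gamma*sigma) and
% lambda = delta/(sigma - gamma*sigma).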
Based on this equation, the parameters are interrelated according to: β = ασ, γ = θσ/(1 + θσ) and δ = λσ(1 - γ). (4) Implementation of the Double-bounded model is obtained with δ = γ = 0, which corresponds to θ = λ = 0 in (3). Implementation of the Anchoring & Shift model is obtained with I_i = 1 for i = 1, . . . , n. For a more detailed discussion on the estimation of a random effect probit model, see [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]Whitehead (2004). Results We use the dummy variable mainstream, defined in the previous section, and the Camargue survey to conduct an application. In practice, a value of particular interest is the estimate of the WTP mean. Once a model has been estimated, we can obtain fitted values Ŵ_i, for i = 1, . . . , n, from which we can calculate the estimate of the mean of WTP: μ̂ = n⁻¹ Σ_{i=1..n} Ŵ_i. We estimate the mean values of WTP from a linear model (McFadden and Leonard 1993) and compute the confidence intervals by simulation with the Krinsky and Robb procedure (see Haab and McConnell 2003, ch.4). The Single-bounded and Double-bounded models give very different WTP means: 113.5 and 89.8. Their confidence intervals do not overlap. Such inconsistent results suggest that follow-up questions generate starting-point bias in the Double-bounded model. To control for starting-point bias, we estimate an Anchoring & Shift model. The WTP mean is equal to 158.5. It is still very different from the WTP mean obtained from the Single-bounded model (113.5); however, the two confidence intervals overlap slightly. Note that the confidence interval is very wide and the gain in precision obtained by using follow-up questions is lost, a result suggested by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]. The Heterogeneous model gives a WTP mean of 110.1. It is very close to the 113.5 obtained from the Single-bounded model. Moreover, the confidence interval obtained from the Heterogeneous model ([99.0; 125.2]) is entirely contained in the confidence interval obtained from the Single-bounded model ([98.1; 138.2]), and thus is narrower. In other words, the Heterogeneous model provides consistent results with the Single-bounded model, and is more precise. Table 3 presents the full econometric results, that is, all the parameter estimates, with the standard errors given in italics. It is clear from this table that using follow-up questions (Double-bounded, Anchoring & Shift and Heterogeneous models) provides significantly reduced standard errors, compared to using first answers only (Single-bounded model). Moreover, the precision of the parameter estimates of the regressors is quite similar for the different models using follow-up questions. The anchoring parameter γ is statistically significant when we perform a likelihood-ratio (LR) test of the null hypothesis γ = 0, not the shift parameter δ. It suggests that, when the minority group is not sensitive to starting-point bias (γ = 0, see equation 2), the mainstream group is significantly subject to such an effect (γ = 0.26, with 0 ≤ γ ≤ 1). Finally, the Heterogeneous model performs better than the others: it provides consistent results with the Single-bounded model and greatly improves the precision of the estimation. This suggests that taking into account an individual sensitivity to starting-point bias does indeed matter.
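For readers unfamiliar with the Krinsky and Robb procedure used for these confidence intervals, the outline below is the standard textbook recipe, not necessarily the authors' exact settings (the number of draws R is an assumption left open):

% Generic Krinsky-Robb simulation of a confidence interval for the WTP mean
\begin{enumerate}
  \item Estimate the model; keep the parameter estimates $\hat{\theta}$ and their covariance matrix $\hat{V}$.
  \item Draw $R$ vectors $\theta^{(r)} \sim N(\hat{\theta}, \hat{V})$, $r = 1, \dots, R$.
  \item For each draw, recompute the fitted WTPs and the mean
        $\hat{\mu}^{(r)} = n^{-1} \sum_{i=1}^{n} \hat{W}_i\!\left(\theta^{(r)}\right)$.
  \item Report the 2.5th and 97.5th percentiles of $\{\hat{\mu}^{(r)}\}$ as the 95\% interval.
\end{enumerate}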
WTA and WTP divergence Over the past twenty years, a large pattern of empirical evidence has accumulated suggesting a significant divergence between willingness-to-pay (WTP) measures of value, where individuals have to pay for a given object or policy, and willingness-to-accept (WTA) measures of value, where individuals sell the same object or receive money to compensate for the suppression of the same policy [START_REF] Brookshire | Measuring the value of a public good: an empirical comparaison of elicitation procedures[END_REF]. Economic theory suggests that, with small income effects, WTP and WTA should be equivalent. Results from a meta-analysis however prove that the divergence, measured by the ratio WTA/WTP, is often high (i.e. the ratio largely exceeds one) [START_REF] Sayman | Effects of study design characteristics on the wtawtp disparity: a meta analytical framework[END_REF]. Since valuation measures are used for the study of many public-policy questions, these results raise questions about which procedure to use in practice. The divergence is frequent but can be controlled for. The existence of substitutes has been proved to play an important role [START_REF] Shogren | Resolving differences in willingness to pay and willingness to accept[END_REF]). In the case of private goods the divergence disappears if subjects are recruited among experienced dealers [START_REF] List | Does market experience eliminate market anomalies?[END_REF][START_REF] List | Neoclassical theory versus prospect theory: evidence from the marketplace[END_REF]. This suggests that individuals may learn to avoid the divergence. This intuition is confirmed by the design of an experimental protocol that eliminates the divergence [START_REF] Plott | The willingness to pay/willingness to accept gap, the endowment effect, subject misconceptions and experimental procedures for eliciting valuations[END_REF]. The basic ingredients of this protocol are the existence of training rounds and the use of incentive-compatible mechanisms. Taken together, the previous results suggest that subjects may learn to overcome the divergence within a short period of time. These results, however, apply to private goods. If we consider the valuation of some public policy, the time between the survey and the implementation of the policy is too long to implement training rounds. This is the reason why being able to detect subjects who are prone to framing effects is of particular interest for contingent valuations. Survey To measure the discrepancy between WTA and WTP for public goods, we needed to find a public good that can be sold or withdrawn, or bought or provided. Such public goods are not the most common ones. However, we were lucky enough to be offered a golden opportunity. The University of Marne la Vallée (France) was considering changing its Saturday morning classes policy. The growing number of students led to an increasing number of classes on Saturday morning due to the lack of available classrooms. Students started to protest and asked for a clarification of the policy regarding classes on Saturday morning. Two options were considered. Some students were told they would pay lower fees if they accepted classes scheduled on Saturday. The reason for this was that the university could then rent the extra classroom during the week to movie companies to use for filming on location. Other students were offered the option of avoiding Saturday classes by paying higher fees, as the university would have to rent an extra building. 
So, the trade-off was between paying more to avoid Saturday classes and being paid to attend them. Note that, even though the survey concerned students, it was used to take a real decision. Thus, answers to this particular survey had an impact on the respondents' welfare. We conducted a contingent valuation survey to evaluate both the willingness to pay to avoid classes on Saturday and the willingness to accept classes on Saturday morning. The survey was given to 359 students at the University of Marne La Vallée: 184 individuals were given the WTP version, 175 the WTA one (subjects were randomly assigned to one version). Heterogeneity Gathering information on social representations using an additional open-ended question leads to our four-step methodology. We propose here to simplify this treatment by running this methodology on a sample of subjects, at a pre-test stage, to identify the items that capture most of the opposition between mainstream and minority. This allows us to detect mainstream and minority using a simple discrete choice question. This greatly simplified the exploitation of the data. While the use of an open-ended question implies a specific treatment (categorization and so on), the use of a simple question does away with the need for any treatment. Prior to the survey, we then elicited the representations "of classes on Saturday morning". Quite surprisingly, the survey revealed two groups that differ more broadly on their vision of university rather than on their vision of Saturday morning (we were expecting more precise reasons, such as the opportunity to have a job on Saturday morning, or religious reasons). For the mainstream, the main goal of their studies is to get diplomas, while the minority consider that the most important thing to get from university is skills. Following our method, we then decided to include in the contingent valuation survey an additional question labeled as follows: In your opinion, the main purpose of your studies is to get: 1. diplomas 2. skills The two items were presented in a random order. As expected, a large majority (71% of the 359 respondents) chose the first option (diplomas). And only a minority chose the second option (skills). We now propose to explore the impact of this distinction on the WTA-WTP divergence. If we neglect the distinction among respondents, the WTA and WTP means are very different, respectively equal to 68.7 and 15.3. The WTA/WTP ratio largely exceeds one and is equal to 4.5. Then, we calculate the WTA and WTP means separately for individuals who answered the question with diplomas and with skills. When we consider the mainstream group (Diplomas), the discrepancy between WTA and WTP is wide and the ratio higher (5.8). However, when we consider the minority group (Skills), the discrepancy and the ratio (2.7) are significantly reduced. Students from the minority group are less sensitive to the WTA and WTP divergence. Even if the discrepancy is not completely removed, the mainstream variable allows us to separate the whole population into two groups that highly differ in their sensitivity to framing effects, since the ratio falls from 5.8 for the mainstream group to 2.7 for the minority. Results Further results and discussion The previous results show that it is possible to extract information on individual representation for a given object, which can be successfully used as a good proxy for the individual sensitivity to framing effects.
Evidence was presented for two distinct sets of data and two different well-known framing effects. So far, we have basically found a statistically significant relationship between the mainstream variable and the sensitivity to framing effects. The remaining question is thus why does this occur? The first section shows that the representation variable conveys new information, which is not related to other individual characteristics. The second section proposes an interpretation of the link between social representation and framing effects. General considerations on social representation are given in a third section. The last section deals with possible improvements to the proposed approach. Does representation provide new information? Here, we check if the dummy variable, based on social representation, is correlated with some other individual characteristics. First consider the Camargue survey. Table 5 shows the Pearson correlation coefficient ρ between the mainstream dummy variable and the regressors included in the regression model. A P-value is given in parentheses for the null hypothesis ρ = 0. We can see that in all cases, the null is not rejected (all the P-values are greater than 0.05). It suggests that the dummy variable is not correlated to the regressors. Secondly, consider the Saturday classes survey. Table 6 shows the Pearson correlation coefficient ρ between the Diplomas/Skills dummy variable and other questions from the questionnaire. A P-value is given in parentheses for the null hypothesis ρ = 0. We can see that in all cases, the null is not rejected (all the P-values are greater than 0.05). Again, it suggests that the dummy variable is not correlated to the regressors. These results suggest that the information obtained from individual representation cannot be captured by the use of standard individual characteristics. In this sense, it is new information, not related to standard questions in surveys. From representation to framing effects So far, we have concentrated on the most technical aspects, based on statistical evidence. Here, we propose an interpretation about why representations can be linked to framing effects. This interpretation relies on three distinct arguments. The first two are nothing more than an application of general ideas developed in psychology and sociology. The key argument is thus the third one. 1. Our use of social representation is very classical on some points and more original on others. Identifying the core and peripheral elements of a social representation on the basis of a statistical analysis of word associations is a classic in social psychology. It is also admitted that peripheral elements are identified by a minority. Our approach thus consists in pooling all minorities in one group. 2. The next step of our reasoning is to assume that these individuals are conscious of not being members of the mainstream, while others may just follow the crowd with no clear consciousness of doing so. The idea that members of the minority have a more accurate perception of their identity is generally accepted in sociology. Thus, we associate a classical sociological argument with a more psychological one. 3. The core idea of our work is that the minority group on a particular subject has a stronger opinion, i.e. a more personal or elaborate point of view 7 . Thus, the minority is more likely to resist outside influences and is therefore less sensitive to framing effects.
Representation as a marker of past experience If you have never coped with an object or a situation in the past, you are very likely to handle it at first glance in a very predictable way, using common sense or stereotypes. This is what the core represents. But, if for any reason, you have been confronted with this problem in the past, it is very likely that you start recomposing your representation of this object or situation (you don't have the same representation of Paris once you've been there). According to that view, non-mainstream representations are then a consequence of past experiences. Representations can thus be thought of as a fast and frugal way to capture information about past experiences. If we now concentrate on the problem of eliciting preferences (say for public decision making), representations allow us to isolate individuals that have somehow "invested" in their own preference. We expect them to hold a stronger opinion and have more stable preferences, thus be less sensitive to framing effects. Such a distinction is similar to the debate on the origin of individual preferences [START_REF] Slovic | The construction of preference[END_REF]. Members of the minority are assumed to be individuals that have set their preferences, while some members of the mainstream population are assumed to construct their preferences through the elicitation process. Our results suggest how to identify individuals that have set their preferences before the elicitation process begins. The existence of such a population is not a surprise since in any experiments that intend to detect biases, a small, but significant, part of the subjects do not exhibit pathological preferences (among many references, see the experiments in [START_REF] Kahneman | Choices, Values and Frames[END_REF]. This paper is a first step towards detecting such individuals. Criticism, improvements and further research The proposed method has reached the goal of proving that a substantial heterogeneity relative to the sensitivity to framing effects exists, even in socially very homogeneous populations such as students. The agenda for further research includes the design of more subtle tools to classify individuals. Here, we are able to isolate a population that is proved to be much less sensitive to framing effects than the residual population. One can think of a more continuous variable that measures the sensitivity to framing effects. The proposed methodology is open to criticism at two distinct levels. As we are exploiting an open-ended question, a choice has to be made on how to categorize the answers. A good classification requires the creation of homogeneous categories. Even though our classification8 tends to demonstrate the presence of individual sensitivity to framing effects, another choice could be considered. A second criticism may concern the way we construct the two subpopulations on the basis of the social representation. Our choice is to put respondents who cite the most cited category in a mainstream group, and the others in the minority group. Other choices and alternative splits (with more than two groups) could be used. Finally, our dichotomous split has done a good job as a first step, but further research may help us to better understand the determinants of individual sensitivity to framing effects. Conclusion This paper is a first step towards approaching heterogeneity relative to the sensitivity to framing effects.
A simple tool is designed to detect a group of individuals that is proved to be far less sensitive to framing effects than the reference population. This approach is effective on two distinct sets of data concerning different framing effects. This raises important questions at the normative level. How should values be set within heterogeneous groups? Should the values be computed using only the values of those detected as not sensitive to framing effects?
Table 1: Citation rate of each category.
Category          Citation rate   Rank
Fauna-Flora       82 %            1
Landscape         74 %            2
Isolation         58 %            3
Preservation      51 %            4
Human presence    34 %            5
Nature            33 %            6
Disorientation    32 %            7
Coast             26 %            8
The top element, Fauna-Flora, is used by a large number of respondents, 82%. Only a minority do not use any element of this category. This is not a big surprise since the main interest of the Camargue (as presented in all related commercial publications, or represented on postcards) is the Fauna and Flora category 5 .
Table 2: Estimation of the mean of willingness-to-pay. Table 2 presents estimates of the WTP mean, obtained from the Double-bounded, Anchoring & Shift and Heterogeneous models. We include estimates obtained from the Single-bounded, taking into account the first answers only. The analysis is based on two criteria: whether the mean of WTP is consistent (consistency) and whether the standard errors are more precise (efficiency) compared with those obtained from the Single-bounded model.
Model               WTP mean   Conf. interval     Consistency   Efficiency
Single-bounded      113.5      [98.1; 138.2]
Double-bounded      89.8       [84.4; 96.5]       no            yes
Anchoring & Shift   158.5      [122.6; 210.7]     yes           no
Heterogeneous       110.1      [99.0; 125.2]      yes           yes
Table 3: Random effects probit models (standard errors in italics).
Table 4: WTA/WTP divergence. Table 4 shows the WTP and WTA means for all the students (359: 175 for the WTA version and 184 for the WTP version) and for the two sub-groups (those who answer Diplomas and those who answer Skills). The last line presents the WTA/WTP ratio.
         All    Diplomas   Skills
WTA      68.7   71.9       62.5
WTP      15.3   12.5       23.3
Ratio    4.5    5.8        2.7
Table 5: The Camargue survey: correlation coefficient.
Table 6: Saturday classes survey: correlation coefficient.
Footnotes: A survey of the theory and methods used to study social representations can be found in [START_REF] Wagner | Theory and method of social representations[END_REF] and [START_REF] Canter | Empirical approaches to social representation[END_REF]. This method was originally developed in [START_REF] Flachaire | A new approach to anchoring: theory and empirical evidence from a contingent valuation survey[END_REF]. See Claeys-Mekdade et al. (1999) for a complete description of the survey. After categorization and deletion of doubles, the average number of attributes evoked by the respondents falls from 5.5 to 4.0. 5 A quick look at any website about the Camargue is also a way of confirming Fauna-Flora as the obvious aspect of the Camargue. Among many others see: www.parc-camargue.fr or www.camargue.com. The anchoring bias appears in experimental settings in which subjects are asked to provide numerical estimations (e.g. the height of Mount Everest). Prior to the estimation stage, they are asked to compare their value to an externally provided value (e.g. 20 000 feet). This last value received the name of anchor as it was proved to have a great influence on subjects' valuations (i.e.
a different anchor, or starting point, leads to a different valuation). Note that we do not exclude that some individuals may have a strong point of view which is in accordance with that of the mainstream. We only suggest that we can isolate some individuals holding a strong point of view. A full description of the one we used is available in [START_REF] Hollard | Théorie du choix social et représentations : analyse d'une enquête sur le tourisme vert en camargue[END_REF]. Whitehead, J. C. (2004). Incentive incompatibility and starting-point bias in iterative valuation questions: reply. Land Economics 80 (2), 316-319. Acknowledgements The authors thank Jason Shogren for useful comments and two anonymous referees.
38,783
[ "843051", "1331865" ]
[ "15080", "45168" ]
01760260
en
[ "info" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01760260/file/final-draft.pdf
Florian Lemaitre email: [email protected]@[email protected] Benjamin Couturier Lionel Lacassagne Small SIMD Matrices for CERN High Throughput Computing System tracking is an old problem and has been heavily optimized throughout the past. However, in High Energy Physics, many small systems are tracked in real-time using Kalman filtering and no implementation satisfying those constraints currently exists. In this paper, we present a code generator used to speed up Cholesky Factorization and Kalman Filter for small matrices. The generator is easy to use and produces portable and heavily optimized code. We focus on current SIMD architectures (SSE, AVX, AVX512, Neon, SVE, Altivec and VSX). Our Cholesky factorization outperforms any existing libraries: from ×3 to ×10 faster than MKL. The Kalman Filter is also faster than existing implementations, and achieves 4 • 10 9 iter/s on a 2×24C Intel Xeon. I. INTRODUCTION The goal of the paper is to present a code generator and optimizations to get a fast reconstruction of a system trajectory (tracking) using Kalman filtering for SIMD multi-core architectures, for which no efficient implementation exists. The constraints are strong: a few milliseconds to track thousands of particles. Right now, the choice was to focus on general-purpose processors (GPP) as SIMD extensions are present in every system (so all CERN researchers could benefit from it). GPUs were not selected when the work started in 2015 as the transfer time (through PCI) between the host and the GPU was longer than the amount of time allocated to the computation. With the rise of the last generation of GPUs connected to a CPU with a high bandwidth bus, it becomes worth evaluating them. Even though optimizing Kalman filter tracking is an old problem [START_REF] Palis | Parallel Kalman filtering on the connection machine[END_REF], existing implementations are not efficient for many small systems. The code generator uses the template engine Jinja2 [START_REF]Python template engine[END_REF] and implements high level and low level optimizations. It is used to produce a fast Cholesky factorization routine and a fast Kalman filter in C. The generated code is completely generic and can be used with any system. It also supports many SIMD architectures: SSE, AVX, AVX512, Neon, SVE, Altivec and VSX. In order to have a representative Kalman filter and validate its implementation, a basic 4×4 system was selected. Depending on the experiment the matrix size can change. Some specific variants can also exist: 5×5 systems for High Energy Physics [START_REF] Fr Ühwirth | Application of Kalman filtering to track and vertex fitting[END_REF], using three steps: forward, backward, smoother. This work will be used in the next upgrade of the LHCb experiment to achieve real-time event reconstruction. In this paper, we will first present Cholesky Factorization, the optimizations we applied to it, and their performance impact. Then, we will present the Kalman Filter, its optimizations and their performance impact. II. CHOLESKY FACTORIZATION A. Algorithm Cholesky Factorization (also known as Cholesky Decomposition) is a linear algebra algorithm used to express a symmetric positive-definite matrix as the product of a triangular matrix with its transposed matrix: A = L • L T . It can be combined with forward and backward substitutions to solve a linear system (algorithm 1).
Algorithm 1 (excerpt): backward substitution
for i = n - 1 : 0 do
  s ← Y(i)
  for j = i + 1 : n - 1 do
    s ← s - L(j, i) • X(j)
  X(i) ← s / L(i, i)
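For reference, the factorization step itself can be written compactly. The sketch below is a plain scalar, in-place version for a single matrix; it is illustrative only (the paper's generated code is unrolled, batched and vectorized), and the function name is an assumption.

#include <math.h>
#include <stddef.h>

/* Sketch: scalar, in-place Cholesky factorization A = L*L^T (lower triangle).
 * A is one row-major n x n symmetric positive-definite matrix; on return its
 * lower triangle holds L. Not the paper's generated code. */
static void cholesky_factor_scalar(float *A, size_t n) {
    for (size_t j = 0; j < n; j++) {
        float d = A[j * n + j];
        for (size_t k = 0; k < j; k++)            /* diagonal term */
            d -= A[j * n + k] * A[j * n + k];
        A[j * n + j] = sqrtf(d);
        for (size_t i = j + 1; i < n; i++) {      /* terms below the diagonal */
            float s = A[i * n + j];
            for (size_t k = 0; k < j; k++)
                s -= A[i * n + k] * A[j * n + k];
            A[i * n + j] = s / A[j * n + j];
        }
    }
}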
Cholesky Factorization of a n×n matrix has a complexity in terms of floating-point operations of n 3 /3, that is half of the LU one (2n 3 /3), and is numerically more stable [START_REF] Higham | Accuracy and stability of numerical algorithms[END_REF], [START_REF] Higham | Cholesky factorization[END_REF]. This algorithm is naturally in-place as every input element is accessed only once and before writing the associated element of the output: L and A can use the same storage. It requires n square roots and (n 2 + 3n)/2 divisions for n×n matrices, which are slow operations, especially in double precision. With small matrices, parallelization is not efficient as there is no long dimension. Therefore, matrices are grouped by batches in order to efficiently parallelize along this new dimension. The principle is to have a for-loop iterating over the matrices, and within this loop, compute the factorization of the matrix. This is also the approach used in [START_REF] Dong | LU factorization of small matrices: accelerating batched DGETRF on the GPU[END_REF]. B. Transformations Improving the performance of software requires transformations of the code, especially High Level Transforms (HLT). For Cholesky, we made the following transforms: • High Level Transforms: memory layout [START_REF] Allen | Optimizing compilers for modern architectures: a dependence-based approach[END_REF] and fast square root (the latter is detailed in II-C), • loop transforms (loop unwinding [START_REF] Lacassagne | High level transforms for SIMD and low-level computer vision algorithms[END_REF] and unroll&jam), • Architectural transforms: SIMDization. 1) Memory Layout Transform: The memory layout transform is the first transform to address as the other ones rely on it. The default memory layout in C is Array of Structures (AoS), but it is not suited for SIMD. In order to enable SIMD, the layout should be modified into Structure of Arrays (SoA). A hybrid memory layout (AoSoA) is preferred to avoid systematic cache evictions. The alignment of the data is also crucial. Aligned memory allocations should be enforced by specific functions like posix_memalign, _mm_malloc or aligned_alloc (in C11). One might also want to align data with the cache line size (usually 64 bytes). This may improve cache hits by avoiding data being split into multiple cache lines when they fit within one cache line, and it avoids false sharing between threads. 2) Loop unwinding: Loop unwinding is the special case of loop unrolling where the loop is entirely unrolled. It has several advantages, especially for small matrices: • it avoids branching, • it allows temporaries to be kept in registers (scalarization), • it helps out-of-order processors to efficiently reschedule instructions. This transform is very important as the algorithm is memory bound. One can see that the arithmetic intensity of the scalarized version is higher. This leads to algorithm 2 and reduces the amount of memory accesses (Table I). The register pressure is higher and the compiler may generate spill code to temporarily store variables into memory. 3) Loop Unroll & Jam: Cholesky Factorization of n×n matrices involves n square roots + n divisions for a total of ∼n 3 /3 floating-point operations (see Table I). The time between the issue of two data-independent instructions (also known as the throughput) is smaller than the latency. The latency of pipelined instructions can therefore be hidden by executing, in the pipeline, another instruction that has no data dependence with the previous one, as illustrated by the sketch below.
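Concretely, hiding latency here means jamming iterations that work on different, independent matrices of the batch. A minimal sketch of an unroll&jam by a factor of 2, written as scalar code over an assumed SoA layout (the array names a00, a10, l00, l10 are hypothetical), is:

#include <math.h>
#include <stddef.h>

/* Sketch: unroll&jam by 2 over the batch loop, shown on the first column of L.
 * SoA layout assumed: a00[], a10[], l00[], l10[] hold one coefficient per matrix.
 * Iterations m and m+1 are data-independent, so their long-latency operations
 * (sqrt, division) can overlap in the pipeline. Illustrative only. */
void first_column_unroll2(const float *a00, const float *a10,
                          float *l00, float *l10, size_t count) {
    size_t m = 0;
    for (; m + 1 < count; m += 2) {
        float d0 = sqrtf(a00[m]);        /* iteration m     */
        float d1 = sqrtf(a00[m + 1]);    /* iteration m + 1 */
        l00[m]     = d0;
        l00[m + 1] = d1;
        l10[m]     = a10[m]     / d0;
        l10[m + 1] = a10[m + 1] / d1;
    }
    for (; m < count; m++) {             /* remainder iteration */
        float d = sqrtf(a00[m]);
        l00[m] = d;
        l10[m] = a10[m] / d;
    }
}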
The ipc (instructions per cycle) is then limited by the throughput of the instruction and not by its latency. Current processors are out-of-order, but they are limited by the size of their rescheduling window. In order to help the processor to pipeline instructions, it is possible to unroll loops and to interleave instructions of data-independent loops (Unroll&Jam). Here, Unroll&Jam of factor 2, 4 and 8 is applied to the outer loop over the array of matrices. Its efficiency is limited by the throughput of the unrolled loop instructions and the register pressure.
Algorithm 2: Cholesky system solving A • X = R unwound and scalarized for 4×4 matrices
// Load A into registers
a00 ← A(0,0)
a10 ← A(1,0)  a11 ← A(1,1)
a20 ← A(2,0)  a21 ← A(2,1)  a22 ← A(2,2)
a30 ← A(3,0)  a31 ← A(3,1)  a32 ← A(3,2)  a33 ← A(3,3)
// Load R into registers
r0 ← R(0)  r1 ← R(1)  r2 ← R(2)  r3 ← R(3)
// Factorize A
l00 ← √a00
l10 ← a10/l00
l20 ← a20/l00
l30 ← a30/l00
l11 ← √(a11 - l10²)
l21 ← (a21 - l20 • l10) / l11
l31 ← (a31 - l30 • l10) / l11
l22 ← √(a22 - l20² - l21²)
l32 ← (a32 - l30 • l20 - l31 • l21) / l22
l33 ← √(a33 - l30² - l31² - l32²)
// Forward substitution
y0 ← r0/l00
y1 ← (r1 - l10 • y0) / l11
y2 ← (r2 - l20 • y0 - l21 • y1) / l22
y3 ← (r3 - l30 • y0 - l31 • y1 - l32 • y2) / l33
// Backward substitution
x3 ← y3/l33
x2 ← (y2 - l32 • x3) / l22
x1 ← (y1 - l21 • x2 - l31 • x3) / l11
x0 ← (y0 - l10 • x1 - l20 • x2 - l30 • x3) / l00
// Store X into memory
X(3) ← x3  X(2) ← x2  X(1) ← x1  X(0) ← x0
C. Precision and Accuracy Cholesky Factorization requires n square roots and (n 2 + 3n)/2 divisions for a n×n matrix. But these arithmetic operations are slow, especially for double precision (see [START_REF] Fog | Instruction tables: Lists of instruction latencies, throughputs and micro-operation breakdowns for Intel, AMD and VIA CPUs[END_REF]) and usually not fully pipelined. Thus, square roots and divisions limit the overall Cholesky throughput. It is possible in hardware to compute them faster with less accuracy [START_REF] Soderquist | Area and performance tradeoffs in floating-point divide and square-root implementations[END_REF]. That is why reciprocal functions are available: they are faster but have a lower accuracy, usually 12 bits for a 23-bit mantissa in single precision. The accuracy is measured in ulp (Unit in Last Place). 1) Memorization of the reciprocal value: In the algorithm, a square root is needed to compute L(i, i). But L(i, i) is used in the algorithm only with divisions. The algorithm needs (n 2 + 3n)/2 of these divisions per n×n matrix. Instead of computing x/L(i, i), one can compute x • L(i, i)⁻¹. The algorithm then needs only n divisions. 2) Fast square root reciprocal estimation: The algorithm performs a division by a square root and therefore needs to compute f(x) = 1/√x. There are some ways to compute an estimation of this function depending on the precision. Most current CPUs have a specific instruction to compute an estimation of the square root reciprocal in single precision. In fact, some ISAs (Instruction Set Architectures) like Neon and Altivec VMX do not have any SIMD instruction for the square root and the division, but do have an instruction for a square root reciprocal estimation.
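As an illustration of how such an estimate is used, here is a hedged SSE sketch (x86 intrinsics shown; the same pattern applies to the other ISAs) combining the estimation instruction with one Newton-Raphson refinement step, which is discussed next:

#include <xmmintrin.h>

/* Sketch: fast reciprocal square root with one Newton-Raphson refinement step.
 * _mm_rsqrt_ps gives ~12-bit estimates; one iteration of
 * y' = y * (1.5 - 0.5 * x * y * y) roughly doubles the number of correct bits. */
static inline __m128 rsqrt_nr(__m128 x) {
    const __m128 half  = _mm_set1_ps(0.5f);
    const __m128 three = _mm_set1_ps(3.0f);
    __m128 y = _mm_rsqrt_ps(x);                          /* ~12-bit estimate */
    __m128 xyy = _mm_mul_ps(x, _mm_mul_ps(y, y));        /* x*y*y */
    /* y * 0.5 * (3 - x*y*y)  ==  y * (1.5 - 0.5*x*y*y) */
    y = _mm_mul_ps(_mm_mul_ps(half, y), _mm_sub_ps(three, xyy));
    return y;
}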
3) Accuracy recovering: Depending on the application, the previous techniques might not be accurate enough. The accuracy recovery (if needed) can be done with the Newton-Raphson or Householder methods. All current SIMD architectures have FMA instructions to apply these methods quickly. See [START_REF] Lemaitre | Cholesky factorization on simd multi-core architectures[END_REF] for more details.

D. Code generation

To help write the many different versions of the code, we used Jinja2 [START_REF]Python template engine[END_REF], a template engine in Python. With this tool we can easily implement unrolling (both unwinding and unroll&jam) and intrinsics code. The syntax uses custom tags/tokens that control what is being output, and since it is text substitution, it is possible to manipulate new identifiers. The generated code features all transformations and all sizes from 3×3 up to 12×12 for all the supported architectures and all SIMD wrappers. There is no actual limit on the unrolled size, but the bigger the matrices, the longer the compilation. This could be replaced by a C++ template metaprogram like in [START_REF] Masliah | Metaprogramming dense linear algebra solvers applications to multi and many-core architectures[END_REF]. Using Jinja2 instead of more common metaprogramming methods gives us full access to, and control over, the generated code: in some application domains, it is crucial to have access to the source code before compilation in order to track bugs quickly.

1) Unrolling: Unwinding is done in Jinja by turning the C for-loop (Listing 1) into a Jinja for-loop (Listing 2). The output of the template is the C code the compiler will see (Listing 3).

Listing 3: Simple Jinja loop output

    s0 = B[0] + C[0];
    A[0] = s0 / 2;
    s1 = B[1] + C[1];
    A[1] = s1 / 2;
    s2 = B[2] + C[2];
    A[2] = s2 / 2;
    s3 = B[3] + C[3];
    A[3] = s3 / 2;

Unroll&jam uses a Jinja filter: a filter is a section that is interpreted by Jinja as usual, but whose output is then passed to a Python function that transforms it directly. The unrollNjam filter duplicates the lines containing the symbol @ and replaces each @ by 0, 1, 2, ... The template code in Listing 4 generates the code in Listing 5.

2) SIMD: The SIMD generation is handled via a custom C-like preprocessor written in Python. The interface consists of custom Python objects accessible from Jinja. When a Python macro is used within Jinja (Listing 6), it is replaced by a unique name that is detected by our preprocessor, which then acts like a regular C preprocessor and replaces the macro call by its definition from the Python class (Listing 7). Such a preprocessor is important because the intrinsics of the different architectures differ not only by name but also by signature. The Altivec code (Listing 8) looks completely different from the SSE one despite being generated from the same template (with VSX, the output would involve vec_mul instead of vec_madd). This tool can also be used to generate code for C++ SIMD wrappers.
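Listings 6-8 are not reproduced in this extraction. Purely as an illustration of the kind of per-architecture expansion involved (the template syntax and macro name below are our own guesses, not the actual interface), one fused multiply-add in the template could expand to real intrinsics as follows:

    /* hypothetical template line:  r = {{ SIMD.fma(a, b, c) }};            */
    #if defined(__SSE__)
    #include <xmmintrin.h>
    static inline __m128 fma_ps(__m128 a, __m128 b, __m128 c)
    {
        return _mm_add_ps(_mm_mul_ps(a, b), c);   /* plain SSE has no FMA  */
    }
    #elif defined(__ALTIVEC__)
    #include <altivec.h>
    static inline vector float fma_ps(vector float a, vector float b,
                                      vector float c)
    {
        return vec_madd(a, b, c);                 /* fused multiply-add    */
    }
    #endif

The point is that the two expansions differ in header, type and intrinsic name, which is exactly what the Python preprocessor hides from the template.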
3) SIMD wrappers: In order to see whether it is worth writing intrinsics, SIMD wrappers have been integrated into the code and compared to the intrinsics and scalar code. The following libraries have been tested: Boost.SIMD [START_REF] Est Érie | SIMD: Generic programming for portable SIMDization[END_REF], libsimdpp [START_REF] Libsimdpp | Header-only zero-overhead c++ wrapper for simd intrinsics of multiple instruction sets[END_REF], MIPP [START_REF] Cassagne | An efficient, portable and generic library for successive cancellation decoding of polar codes[END_REF], UME::SIMD [START_REF] Karpi Ński | A high-performance portable abstract interface for explicit SIMD vectorization[END_REF] and vcl [START_REF] Fog | C++ vector class library[END_REF]. Eigen has also been tested but is unable to compile the Cholesky factorization when the element type is an array. It would have been possible to write the array-element-type factorization manually, but this would defeat the whole point of Eigen. More libraries and tools exist, such as CilkPlus, Cyme, ispc [START_REF] Pharr | A SPMD compiler for highperformance CPU programming[END_REF], Sierra or VC [START_REF] Kretz | Vc: A C++ library for explicit vectorization[END_REF]. CilkPlus, Cyme and Sierra appear not to be maintained anymore; VC and ispc did not fit into our test code base without a lot of effort and thus were not tested.

Listing 4: Unroll&Jam in Jinja

    {% filter unrollNjam(range(4)) %}
    s@ = B[@] + C[@];
    A[@] = s@ / 2;
    {% endfilter %}

Listing 5: Unroll&Jam output

    s0 = B[0] + C[0];
    s1 = B[1] + C[1];
    s2 = B[2] + C[2];
    s3 = B[3] + C[3];
    A[0] = s0 / 2;
    A[1] = s1 / 2;
    A[2] = s2 / 2;
    A[3] = s3 / 2;

2) Incremental speedup: Figure 1 gives the speedup of each transformation in the following order: unwinding, SoA + SIMD, fast square root, unroll&jam. The speedup of a transformation depends on the transformations already applied: the order is significant. Looking at the speedups on HSW (Figure 1a), unwinding the inner loops improves performance substantially, from ×2 to ×3. The impact of unwinding decreases as the matrix size increases because the register pressure gets higher. SIMD gives a sub-linear speedup, from ×3.2 to ×6; in fact, SIMD instructions cannot be fully efficient on this function without the fast square root (see subsubsection II-C2). Further analysis shows that the speedup of SIMD + fast square root is almost constant, around ×6. The impact of the fast square root decreases as the number of square roots becomes negligible compared to the other floating-point operations. For small matrices, unroll&jam recovers the last part of the expected SIMD speedup: SIMD + fast square root + unroll&jam goes from ×6.5 to ×9. Unroll&jam loses its efficiency for larger matrices, where the register pressure is higher. Speedups on Power8 are similar (Figure 1b).

3) Impact of unrolling: Figure 2 shows the performance of different AVX versions. Without any unrolling, all versions except "legacy" have similar performance: it seems to be limited by the latency between data-dependent instructions. Unwinding can help the out-of-order engine and thus reduces data dependencies. The performance of the "non-fast" and "legacy" versions is limited by the square root and division instruction throughput; it has reached a limit and cannot be improved beyond this limitation, even with unrolling: both unwinding and unroll&jam are inefficient in this case. The "legacy" version is more limited as it requires more divisions. For the "fast" versions, both forms of unrolling are efficient: unroll&jam achieves a ×3 speedup on regular code and a ×1.5 speedup with unwinding. This transformation reduces pipeline stalls between data-dependent instructions (subsubsection II-B3). Unroll&jam is less efficient when the code is already unwound, but it keeps improving the performance; register pressure is higher when unrolling (unwinding or unroll&jam). The "unwind+fastest" versions give an important benefit: by removing the accuracy-recovery instructions, we save many instructions (II-C3, Accuracy recovering). For such large matrices, unroll&jam slows down the code when it is already unwound because of the register pressure.
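For reference, this is the shape of the per-iteration work in the SoA + SIMD versions being compared. It is a minimal sketch of our own showing only the first two factorization steps over eight matrices at once, not the generated code (which is unwound and, in the "fast" variants, uses the reciprocal square root instead of sqrt/div).

    #include <immintrin.h>

    /* Batched Cholesky over SoA storage: each array holds one matrix
     * element for the whole batch; 8 matrices are processed per iteration.
     * Assumes batch is a multiple of 8 for brevity.                       */
    void cholesky_batch_head(const float *a00, const float *a10,
                             float *l00, float *l10, int batch)
    {
        for (int i = 0; i < batch; i += 8) {
            __m256 va00 = _mm256_loadu_ps(a00 + i);
            __m256 va10 = _mm256_loadu_ps(a10 + i);
            __m256 vl00 = _mm256_sqrt_ps(va00);        /* l00 = sqrt(a00) */
            __m256 vl10 = _mm256_div_ps(va10, vl00);   /* l10 = a10/l00   */
            _mm256_storeu_ps(l00 + i, vl00);
            _mm256_storeu_ps(l10 + i, vl10);
        }
    }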
4) SIMD wrappers: Figure 3 shows the performance of the SIMD wrappers compared to the intrinsics version. Optimizations not related to SIMD are applied in the same way on all versions. With the default version, all the wrappers have similar performance up to a point that depends on the wrapper. The drop in performance is caused by a compiler bug that stops inlining the wrapper functions when the outer function is too big (unwinding+unroll&jam). With the "fast" version, most wrappers have similar performance in single precision. However, UME::SIMD does not implement the square root reciprocal approximation (despite it being part of the interface). Moreover, only Boost.SIMD supports the fast square root in double precision; in that case, Boost.SIMD is a bit slower than the intrinsics code.

5) Comparison with MKL: Version 2018 now supports the SoA memory layout, designated as "compact" in the documentation. Figure 4 shows the performance comparison between our implementation and MKL. The compact layout improved the performance for small matrices compared to the old functions. However, it is still slower than our version for matrices smaller than 90×90. First, MKL does not store the reciprocal and has to compute actual divisions during both factorization and substitution; this can be compared to our "legacy" version. Then, it uses a recursive algorithm for the substitution that has some overhead.

6) Summary: Figure 5 shows the performance of our best SIMD version against the scalar versions and libraries (Eigen and MKL) on HSW, SKX, EPYC and Power8. Due to licensing limitations, MKL has only been tested on HSW. On aarch64, gcc has a performance bug (footnote 1) where the intrinsic vmlsq_f32(a,b,c) = a − b·c is compiled into two instructions instead of one. This bug also affects the intrinsic vfmsq_f32. As the Cholesky factorization mainly uses the latter intrinsic, the performance obtained on this machine is not meaningful and was not considered here. On SKX, the scalar SoA performance drops from 9×9 matrices onwards. This is due to the compiler icc, which stops vectorizing the unwound scalar code from this point. On all tested machines, the scaling is strong, with a parallel efficiency (footnote 2) above 80%.

III. KALMAN FILTER

A. Kalman Filter algorithm

The Kalman filter is a well-known algorithm to estimate the state of a system from noisy and/or incomplete measurements. It is commonly used in High Energy Physics and Computer Vision as a tracking algorithm (to reconstruct trajectories), and also in positioning systems such as GPS. Kalman filtering involves a few matrix multiplications and a matrix inversion that can be done with a Cholesky factorization (see Algorithm 3). The versions compared in the plots are:
• v1: classic version of the algorithm (Algorithm 3),
• v2: optimized version of the algorithm (Algorithm 4),
• triangle: only half of each symmetric matrix is accessed.

2) Incremental speedup: Incremental speedups are reported in Figure 6. As with Cholesky, the speedup comes mainly from unwinding and from the SoA memory layout that enables vectorization.
The mathematical optimizations (v2+triangle) give a total extra speedup of about +40%. Unlike with Cholesky, the fast square root and unroll&jam give no benefit except on Power8. Indeed, the proportion of square roots and divisions is much lower in the Kalman filter, and the operations are more independent from each other (more intrinsic parallelism), so unroll&jam is not efficient here; it remains interesting without unwinding, however. The last thing to notice is that writing SIMD intrinsics does not improve the performance, except on Power8 where gcc struggles to optimize for the Power architecture.

3) Overall performance: The machines available for testing at CERN are very different: two high-end bi-socket machines (Intel, ARM) and two mono-socket machines (AMD, Power). In order to provide fair comparisons, we have normalized the results to focus on the transform speedups rather than the raw performance. Looking at Figure 7, it clearly appears that it is not worth writing SIMD intrinsics, as the compilers are able to vectorize the code. We still have to supply #pragma omp simd to ensure vectorization; otherwise, compiler heuristics would have stopped vectorizing. With that, the compiler even produces slightly better code than the SIMD code; instruction scheduling and register allocation might be involved. On A72, the SIMD code is even slower than the compiler-vectorized code because of the gcc bug. As with the Cholesky factorization, the scaling is strong, with a parallel efficiency above 80%.

4) State-of-the-art: As previously said, each experiment implements its own specific version of Kalman filtering, so a direct comparison cannot be made: the problem dimensionality and the steps are different. Moreover, each step of the filter for HEP is lighter than the full Kalman filtering: no control vector, and a one-dimensional measurement space. Nevertheless, the performance of the SIMD implementations for CMS [START_REF] Cerati | Kalman filter tracking on parallel architectures[END_REF], CBM [START_REF] Gorbunov | Fast SIMDized Kalman filter based track fit[END_REF] and LHCb [START_REF] Érez | LHCb Kalman Filter cross architecture studies[END_REF] is between 500 and 1500 cycles/iter (all steps). Our 4×4 implementation achieves 44 cycles/iter (Table III). This is an order of magnitude faster than existing implementations. As a matter of fact, the SKX machine reaches an overall performance of 4×10⁹ iter/s.

CONCLUSION

In this paper, we have presented a code generator used to create an efficient and portable SIMD implementation of the Cholesky factorization for small matrices (up to 12×12) and of the Kalman filter for 4×4 systems. The generated code supports many SIMD architectures, and is AVX512/SVE ready. Being completely general, it can be used with any system and is not limited to 4×4 systems. Our Cholesky factorization outperforms existing libraries: even with the recent improvements in MKL, we are still ×3 up to ×10 faster on small matrices. Our Kalman filter implementation is not directly comparable to the state of the art because of its general form, but appears to be one order of magnitude faster. With this, we are able to reach 4×10⁹ iter/s on a high-end Intel Xeon 2×24C. To reach such a level of performance, the proposed implementation combines high level transforms (fast square root and memory layout), low level transforms (loop unrolling and loop unwinding), hardware optimizations (SIMD and OpenMP multithreading) and linear algebra optimizations.
The code was automatically generated using Jinja2 to provide strong optimizations with simple source code. SIMD wrappers allow to write portable SIMD code, but require extra optimizations handled by our code generator. With GPUs directly connected to the main memory, the transfer bandwidth is much higher; thus, it would be worth considering GPUs for future work.

Algorithm 1: Cholesky system solving A·X = R

    // Factorization
    for j = 0 : n−1 do
        s ← A(j,j)
        for k = 0 : j−1 do
            s ← s − L(j,k)²
        L(j,j) ← √s
        for i = j+1 : n−1 do
            s ← A(i,j)
            for k = 0 : j−1 do
                s ← s − L(i,k)·L(j,k)
            L(i,j) ← s / L(j,j)
    // Forward substitution
    for i = 0 : n−1 do
        s ← R(i)
        for j = 0 : i−1 do
            s ← s − L(i,j)·Y(j)
        Y(i) ← s / L(i,i)

Figure captions (the figures themselves are not reproduced in this extraction):
Fig. 1: Speedups of the transformations for Cholesky.
Fig. 2: Performance of loop and square root transforms for the AVX 3×3 version of Cholesky on HSW.
Fig. 4: Performance comparison between intrinsics code and MKL for Cholesky on HSW.
Fig. 5: Performance of Cholesky on SKX, EPYC and Power8 machines, mono-core. Both Eigen and the classic routines of MKL are slower than our scalar AoS code and are barely visible on the plots; the "compact" routines of MKL are faster, but still much slower than the SIMD version.
Fig. 6: Incremental speedup of the Kalman filter.
(The captions of Fig. 3 and Fig. 7 were not recovered.)

TABLE I: Arithmetic Intensity (AI); columns: version, flop, load + store, AI (the rows were not fully recovered; the "classic" row involves 2n³ + 15n² + 7n).

TABLE II: Benchmarked machines

    name    CPU full name          ISA     freq (GHz)  cores/threads  SIMD width  #FMA  SP FLOP/cycle  L1/L2/L3 cache (KB)
    HSW     E5-2683 v3 (Intel)     AVX2    2.0         2× 14/28       256         2     32             32 / 256 / 35840
    i9      i9-7900X (Intel)       AVX512  3.3         10/20          512         2     64             32 / 1024 / 14080
    SKX     Platinum 8168 (Intel)  AVX512  2.7         2× 24/48       512         2     64             32 / 1024 / 33792
    EPYC    EPYC 7351P (AMD)       AVX2    2.4         16/32          256         1     16             32 / 512 / 65536
    A72     Cortex A72 (ARM)       Neon    2.4         2× 32/32       128         1     8              32 / 256 / 32768
    Power8  Power 8 Turismo (IBM)  VSX     3.0         4/32           128         2     16             64 / 512 / 8192

TABLE III: Rough comparison with state-of-the-art Kalman filters for HEP (timings of the other implementations are estimated from their articles)

    Implementation   steps             cycles/iter
    our code (4×4)   FWD               44
    our code (5×5)   FWD               74
    CMS (5×5)        FWD+BWD+smooth    520
    CBM (5×5)        FWD+BWD+smooth    550
    LHCb (5×5)       FWD+BWD+smooth    1440

Footnote 1: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82074
Footnote 2: The parallel efficiency is defined as the speedup of the multi-core code over the single-core code divided by the number of cores.

SIMD wrappers in C++ are much longer to compile than plain C with intrinsics.
The biggest file took more than 30 hours and required more than 10 GB of memory to compile. It was therefore decided to stop generating unrolled code for matrices bigger than 12×12.

E. Benchmarks

1) Benchmark protocol: In order to evaluate the impact of the transforms, we used exhaustive benchmarks. The algorithms were benchmarked on six machines whose specifications are given in Table II. On x86, the code was compiled with Intel icc v18.0 with the options -std=c99 -O3 -vec -ansi-alias, and the time is measured in cycles with _rdtsc(). On the other architectures, gcc 7.2 was used with the options -std=c99 -O3 -ffast-math -fstrict-aliasing, and time is measured with clock_gettime(CLOCK_MONOTONIC, ...). In all cases, the code is run multiple times with multiple batch sizes, and the best time is kept. The plots use the following conventions:
• scalar: scalar code (the SoA versions are nevertheless vectorized by the compiler),
• SIMD: SIMD intrinsics code executed on the machine,
• unwind: inner loops unwound + scalarized (i.e. fully unrolled),
• legacy: no reciprocal storing (base version),
• fast: use of the fast square root reciprocal estimation,
• fastest: "fast" without any accuracy recovering,
• ×k: order of the outer loop unrolling (unroll&jam).
We focus our explanations on the HSW machine and on single precision, as its accuracy is sufficient; see [START_REF] Lemaitre | Cholesky factorization on simd multi-core architectures[END_REF] for the analysis of the double precision computation. All the machines behave similarly unless explicitly specified otherwise. We first present the impact of the transforms on performance, then compare our best version written in intrinsics with the SIMD wrappers and MKL [START_REF] Mkl | Intel(R) math kernel library[END_REF], and finally show the performance on multiple machines.

We focus on 4×4 Kalman filtering in order to validate the implementation while keeping a representative filter. However, the code is not limited to 4×4 systems and actually supports all sizes. The filtered system is an inertial point in 2D with the following state: (x, y, ẋ, ẏ).

B. Transformations

All the transformations applied to the Cholesky factorization have been tested on the Kalman filter. A few other optimizations have also been implemented and tested: algebraic optimizations and memory access optimizations.

1) Algebraic optimizations: When optimizing an algorithm like Kalman filtering, one can try to optimize the mathematical operations. The first thing to consider is avoiding the recomputation of temporaries that are used several times. For the Kalman filter of Algorithm 3, it is possible to keep the temporary product P H (line 4) to compute K (line 5). It is also possible to keep S in its factorized form and to expand the expression of K in the expressions of x and P: Algorithm 4. This ends up requiring fewer arithmetic operations, as long as matrix-vector products are preferred over matrix-matrix products.

Algorithm 4: Kalman filter, optimized (only the header was recovered in this extraction)

    in/out : x, P            // state, covariance
    input  : u, z            // control, measure
    input  : A, B, Q, H, R   // parameters of the Kalman filter
    // Predict
    ...

2) Memory access of symmetric matrices: One can save many memory loads and stores by accessing only half of each symmetric matrix. Such matrices are used a lot within Kalman filtering for the covariance matrices. When the matrices are in AoS, accessing only half of a symmetric matrix greatly decreases the vectorization efficiency, especially with small matrices, because the access pattern for the near-diagonal elements is not regular. However, when the matrices are in SoA, there is no such penalty, as entire registers are always loaded; the vectorization efficiency is therefore the same as for square matrices, only with fewer operations and memory accesses.
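Algorithm 3 itself is not reproduced in this extraction. For reference, the classic filter step it implements is the standard textbook predict/update pair, written here in our own notation with the same symbol names as Algorithm 4's header:

    // Predict
    x ← A·x + B·u
    P ← A·P·Aᵀ + Q
    // Update
    S ← H·P·Hᵀ + R
    K ← P·Hᵀ·S⁻¹
    x ← x + K·(z − H·x)
    P ← (I − K·H)·P

The v2 optimization of Algorithm 4 presumably exploits this structure by reusing the temporary P·Hᵀ and by keeping S in Cholesky-factorized form, so that the terms involving S⁻¹ can be evaluated through triangular solves rather than an explicit inverse.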
C. Benchmarks

1) Benchmark protocol: We use essentially the same protocol to test our Kalman filter as for the Cholesky factorization. The Kalman filter considered has a 4-dimensional state space (x, y, ẋ, ẏ), and many of these systems are tracked together. The time is measured per iteration. The plots use the same conventions as for Cholesky, plus the extra v1 / v2 / triangle labels for the algorithm variants listed with the results above.
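As a closing illustration of the symmetric-matrix access described in III-B2, here is a minimal C sketch of packed lower-triangular SoA storage for a batch of 4×4 covariance matrices; the names and layout are our own assumptions, not the paper's actual data structures.

    #define BATCH 1024   /* illustrative batch size */

    /* Only the 10 lower-triangle elements of each 4x4 symmetric matrix are
     * stored (instead of 16); with SoA, every load still fills a whole SIMD
     * register with useful data, so there is no vectorization penalty.     */
    typedef struct {
        float p[10][BATCH];   /* p00,p10,p11,p20,p21,p22,p30,p31,p32,p33 */
    } Cov4SoA;

    /* index of element (i,j), i >= j, in the packed lower triangle */
    static inline int tri_idx(int i, int j) { return i * (i + 1) / 2 + j; }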
32,511
[ "14384", "1009" ]
[ "541712", "495918", "495918", "541712" ]
00176033
en
[ "shs" ]
2024/03/05 22:32:13
2007
https://shs.hal.science/halshs-00176033/file/Flachaire_Hollard_07c.pdf
Emmanuel Flachaire Guillaume Hollard Model Selection in Iterative Valuation Questions by Keywords: starting point bias, preference uncertainty, contingent valuation JEL Classification: Q26, C81 , outperforms other standard models and confirms that, when uncertain, respondents tend to accept proposed bids. Introduction The NOAA panel recommends the use of a dichotomous choice format in contingent valuation (CV) surveys [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]. To improve the efficiency of dichotomous choice contingent valuation surveys, follow-up questions are frequently used. While these enhance the efficiency of dichotomous choice surveys, several studies have found that they yield willingness-to-pay estimates that are substantially different from estimates implied by the first question alone. This is the so-called starting point bias. 1 Many authors have proposed some specific models to handle this problem [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF][START_REF] Cooper | One and one-half bids for contingent valuation[END_REF][START_REF] Lechner | A modelisation of the anchoring effect in closed-ended question with follow-up[END_REF][START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF]. In [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], we proposed a model, called the Range model, in which individuals hold a range of acceptable values, rather than a precisely defined value of their willingness-to-pay. 2 In the Range model, starting point bias occurs as a result of respondent uncertainty when answering the first question, while existing models assume that starting point bias occurs while answering the second question. 3This paper proposes further tests of the Range model: (1) we test the Range model on another dataset and (2) we test the Range model against most existing models. An additional result of this paper is a clarification of the relation among existing models. It is shown that existing models can be derived from three general ones. In some favorable cases, this allows us to compare the performance of existing models. The article is organized as follows. The following section presents the Range model. The subsequent sections present other standard models, the interrelation between all the models and an application. The final section concludes. Range model The Range model, developed in [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], is a dichotomous choice model which explains starting point bias by respondent's uncertainty. It models the individual decision process, using the principle of "coherent arbitrariness" [START_REF] Ariely | Coherent arbitrariness: Stable demand curves without stable preferences[END_REF],4 and can be estimated from a bivariate probit model. Decision process In dichotomous choice contingent valuation with follow-up questions, two questions are presented to respondents. The first question is "Would you agree to pay x$?". 
The second, or follow-up, question is similar but asks for a higher bid if the initial answer is yes and a lower bid otherwise. The Range model is based on the following decision process:

1. Prior to a valuation question, the respondent holds a range of acceptable values:

    wtp_i ∈ [W̲_i, W̄_i]   with   W̄_i − W̲_i = δ,                (1)

where W̄_i is the upper bound of the range.

2. Confronted with a first valuation question, the respondent selects a value inside that range according to the following rule:

    W_i = Min_{wtp_i} |wtp_i − b_{1i}|   with   wtp_i ∈ [W̲_i, W̄_i].      (2)

A respondent selects a value so as to minimize the distance between his range of willingness-to-pay and the proposed bid b_{1i}. In other words, W_i = b_{1i} if the bid falls within the WTP range, W_i is equal to the upper bound of the range if b_{1i} is greater than the upper bound of the WTP range, and W_i is equal to the lower bound of the range if b_{1i} is less than the lower bound of the WTP range.

3. The respondent answers the questions according to the selected value. (Figure: the WTP range [W̲_i, W̄_i] on the bid axis; a bid below the range is accepted (yes), a bid above the range is refused (no), and a bid inside the range may receive either answer.) He will agree to pay any amount below W_i and refuse to pay any amount that exceeds W_i. When the first bid falls within the WTP range, he can answer yes or no: we assume in that case that a respondent answers yes to the first question with probability ξ and no with probability 1 − ξ. If respondents always answer yes when the first bid belongs to the interval of acceptable values (ξ = 1), the model is called the Range yes model. If respondents always answer no when the first bid belongs to the interval of acceptable values (ξ = 0), the model is called the Range no model.

Estimation

In [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], we show that the Range model can be estimated from a more general random effect probit model, which also encompasses the Shift model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]. If we use a linear model and assume that the distribution of WTP is Normal, the probability that individual i answers yes to the j-th question, j = 1, 2, is equal to:

    M1 :   P(W_{ji} > b_{ji}) = Φ( X_i α − b_{ji}/σ + λ₁ D_j r_{1i} + λ₂ D_j (1 − r_{1i}) )      (3)

where r_{1i} is the response to the first payment question, D₁ = 0 and D₂ = 1, α = β/σ, λ₁ = δ₁/σ and λ₂ = δ₂/σ. Based on this equation, the parameters are interrelated according to:

    β = α σ,   δ₁ = λ₁ σ   and   δ₂ = λ₂ σ.                                (4)

When we use just the responses to the initial payment question (j = 1), this equation simplifies to:

    P(yes) = P(W_{1i} > b_{1i}) = Φ( X_i α − b_{1i}/σ ).                   (5)

Moreover, the probability that individual i answers yes to both the initial and the follow-up questions (r_{1i} = 1, j = 2) is equal to:

    P(yes, yes) = Φ( X_i α − b_{2i}/σ + δ₁/σ ).                            (6)

From the estimation based on M1, different models can be considered:
• δ₁ < 0 and δ₂ > 0 corresponds to the Range model (with δ₂ − δ₁ = δ),
• δ₁ < 0 and δ₂ = 0 corresponds to the Range yes model,
• δ₁ = 0 and δ₂ > 0 corresponds to the Range no model,
• δ₁ = δ₂ corresponds to the Shift model,
• δ₁ = δ₂ = 0 corresponds to the Double-bounded model.
It is clear that the Range model and the Shift model are non-nested (one model is not a special case of the other); they can be tested through M1.
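For reference (this is our own restatement of the textbook case, not an additional result of the paper), in the special case δ₁ = δ₂ = 0 with a single underlying WTP, i.e. the standard double-bounded model, the four possible response patterns contribute to the likelihood as:

    P(yes, yes) = Φ( X_i α − b_{2i}/σ )                          (ascending bid, b_{2i} > b_{1i})
    P(yes, no)  = Φ( X_i α − b_{1i}/σ ) − Φ( X_i α − b_{2i}/σ )
    P(no, yes)  = Φ( X_i α − b_{2i}/σ ) − Φ( X_i α − b_{1i}/σ )  (descending bid, b_{2i} < b_{1i})
    P(no, no)   = 1 − Φ( X_i α − b_{2i}/σ )

The parameters δ₁ and δ₂ of M1 enter these expressions exactly as in equation (6), displacing the threshold of the second question.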
Interpretation

Estimation of the Range model provides estimates of β, σ, δ₁ and δ₂, from which we can estimate a mean of WTP µ_ξ and a dispersion of WTP σ. This last mean of WTP should be similar to the mean of WTP estimated using the first questions only, that is, based on the single-bounded model. Additional information is obtained from the use of follow-up questions: the estimates of δ₁ and δ₂ allow us to estimate a range of means of WTP:

    [µ₀ ; µ₁] = [µ_ξ + δ₁ ; µ_ξ + δ₂]   with   δ₁ ≤ 0 and δ₂ ≥ 0.          (7)

The lower bound µ₀ corresponds to the case where respondents always answer no if the bid belongs to the range of acceptable values (ξ = 0). Conversely, the upper bound µ₁ corresponds to the case where respondents always answer yes if the bid belongs to the range of acceptable values (ξ = 1). How respondents answer the question when the bid belongs to the range of acceptable values can be tested as follows:
• respondents always answering no corresponds to the null hypothesis H₀: δ₁ = 0,
• respondents always answering yes corresponds to the null hypothesis H₀: δ₂ = 0.

Interrelation with standard models

Different models have been proposed in the literature to control for starting point bias: anchoring bias, structural shift effects and ascending/descending sequences. All these models assume that the second answer is sensitive to the first bid offer: a prior willingness-to-pay W_i is used to answer the first bid offer, and an updated willingness-to-pay W′_i is used by the respondent to answer the second bid. It follows that an individual answers yes to the first and to the second bids if:

    r_{1i} = 1 ⇔ W_i > b_{1i}   and   r_{2i} = 1 ⇔ W′_i > b_{2i}.          (8)

Each model leads to a specific definition of W′_i. In the following subsections, we briefly review some standard models, their estimation and their possible interrelations with the Range model defined above.

Models

Anchoring model: [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] propose a model where the respondents combine their prior WTP with the value provided by the first bid:

    W′_i = (1 − γ) W_i + γ b_{1i}.                                         (9)

The first bid offer plays the role of an anchor: it pulls the WTP towards it.

Shift model: [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] propose a model where the WTP systematically shifts between the two answers:

    W′_i = W_i + δ.                                                        (10)

The first bid offer is interpreted as providing information about the cost or the quality of the object: a respondent can interpret a higher bid offer as paying more for the same object and a lower bid offer as paying less for a lower-quality object.

Anchoring & Shift model: [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF] proposes a model that combines anchoring and shift effects:

    W′_i = (1 − γ) W_i + γ b_{1i} + δ.                                     (11)

See [START_REF] Aadland | Incentive incompatibility and starting-point bias in iterative valuation questions: comment[END_REF] and [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions: reply[END_REF] for estimation details.

Framing model: DeShazo (2002) proposes de-constructing iterative questions into their ascending and descending sequences. His results show that the answers that follow an initial yes cause most of the problems.
He recommends using the descending follow-up questions only:

    W′_i = W_i   if r_{1i} = 0.                                            (12)

Using prospect theory [START_REF] Kahneman | Prospect theory: an analysis of decisions under risk[END_REF], DeShazo argues that the first bid offer is interpreted as a reference point if the answer to the first question is yes: the follow-up question is then framed as a loss and the respondents are more likely to answer no to the second question.

Framing & Anchoring & Shift model: [START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF] propose applying anchoring and shift effects in ascending sequences only:

    W′_i = W_i + γ (b_{1i} − W_i) r_{1i} + δ r_{1i}.                       (13)

It takes into account the questions that follow an initial yes. Empirical results suggest that efficiency gains can be obtained compared to the Framing model. Note that this model is not based on the underlying decision process defined in section 2.

Estimation

Implementation of the Anchoring & Shift model can be based on a random effect probit model, with the probability that individual i answers yes to the j-th question, j = 1, 2, equal to:

    M2 :   P(W_{ji} > b_{ji}) = Φ( X_i α − b_{ji}/σ + θ (b_{1i} − b_{ji}) D_j + λ D_j )          (14)

where D₁ = 0 and D₂ = 1, α = β/σ, θ = γ/(σ − γσ) and λ = δ/(σ − γσ). Based on this equation, the parameters are interrelated according to:

    β = ασ,   γ = θσ/(1 + θσ)   and   δ = λσ(1 − γ).                       (15)

Implementations of the Anchoring model and of the Shift model can be derived from this probability with, respectively, δ = 0 and γ = 0. The Double-bounded model corresponds to the case δ = γ = 0. The Framing & Anchoring & Shift model differs from the previous model in that anchoring and shift effects occur in ascending follow-up questions only. Its implementation can be based on a random effect probit model, with the probability that individual i answers yes to the j-th question, j = 1, 2, equal to:

    M3 :   P(W_{ji} > b_{ji}) = Φ( X_i α − b_{ji}/σ + θ (b_{1i} − b_{ji}) D_j r_{1i} + λ D_j r_{1i} )   (16)

where D₁ = 0 and D₂ = 1, α = β/σ, θ = γ/(σ − γσ) and λ = δ/(σ − γσ). Based on this equation, the parameters are interrelated according to (15).

Interrelation between all the models

It is helpful to see the interrelations between all the models: some models are nested, so a restricted model can be tested against an unrestricted one with standard inference based on a null hypothesis. Table 1 shows the restrictions to apply to the probabilities M1, M2 and M3, defined in equations (3), (14) and (16), in order to estimate the different models. For instance, the Shift and the Range models are non-nested, but both are special cases of M1; a Shift model can thus be tested against a Range model through the general form M1.

Table 1: Nested models (n.c.: no constraints)

    Model             M1                 M2           M3
    Double            δ₁ = δ₂ = 0        γ = δ = 0    γ = δ = 0
    Anchoring         -                  δ = 0        -
    Shift             δ₁ = δ₂            γ = 0        -
    Anch-Shift        -                  n.c.         -
    Fram-Anch-Shift   -                  -            n.c.
    Range             δ₁ ≤ 0 ≤ δ₂        -            -
    Range yes         δ₁ ≤ 0, δ₂ = 0     -            γ = 0

Application

In this application, we use a survey of a sample of users of the natural reserve of the Camargue, a major wetland in the south of France. The purpose of the contingent valuation survey was to evaluate how much individuals were willing to pay as an entrance fee to contribute to the preservation of the natural reserve. The survey was administered to 218 recreational visitors during spring 1997, using face-to-face interviews. Recreational visitors were selected randomly at seven sites all around the natural reserve.
The WTP question used in the questionnaire was a dichotomous choice with follow-up.5 For a complete description of the contingent valuation survey, see [START_REF] Claeys-Mekdade | Quelle valeur attribuer à la Camargue? Une perspective interdisciplinaire économie et sociologie[END_REF]. Mean values of the WTP were estimated using a linear model [START_REF] Mcfadden | Issues in the contingent valuation of environmental goods: Methodologies for data collection and analysis[END_REF]. Indeed, [START_REF] Crooker | Parametric and semi-nonparametric estimation of willingness-to-pay in the dichotomous choice contingent valuation framework[END_REF] show that the simple linear probit model is often more robust in estimating the mean WTP than other parametric and semi-parametric models. The mean and the dispersion of WTP estimated from a single bounded model are: The confidence interval of μ is obtained by simulation with the Krinsky and Robb procedure, see Haab and McConnell (2003, ch.4) for more details. μ = 113. Let us consider the following standard models: double-bounded, anchoring, shift, anchoring & shift models. These models can be estimated from M 2 , with or without some specific restrictions, see ( 14). Table 2 As expected, the confidence interval of the mean WTP and the standard error of the dispersion of the WTP decrease significantly when we use the usual double-bounded model (Double) instead of the previous single-bounded model. However, estimates of the mean WTP in both models are very different (89.8 vs. 113.5). Such inconsistent results suggest a problem of starting-point bias. It leads us to consider the Anchoring & Shift model (Anch-Shift) to control for such effects, in which the Double, Anchoring and Shift models are nested. We can compute a likelihood-ratio statistic (LR) to test a restricted model against the Anchoring & Shift model. The LR test is twice the difference between the maximized value of the loglikelihood functions (given in the last column), which is asymptotically distributed as a Chi-squared distribution. We can test the Double model against the Anch-Shift model with the null hypothesis H 0 : γ = δ = 0, for which LR = 10.4. A P -value can be computed and is equal to P = 0.0055: we reject the null hypothesis and thus the Double model. We can test the Anchoring model against the Anch-Shift model (H 0 : δ = 0): we reject the null hypothesis (P = 0.0127). Finally, we can test the Shift model against the Anch-Shift model (H 0 : γ = 0): we do not reject the null (P = 0.1572). From this analysis, the Shift model is selected. It is interesting to observe that, when we compare the results between the Shift and the Single-bounded models, the confidence intervals and standard errors are not significantly different. This supports the conclusion of [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]: they argue that once we have controlled for the starting-point effect, the efficiency gains from the follow-up questioning can be small. To go further, we consider a model where anchoring and shift effects occur in ascending sequences, but not in descending sequences (Fra-Anc-Shi). The case with shift effect in ascending sequences (no anchoring) corresponds to the Range model where respondents always answer yes if the initial bid belongs to their range of acceptable values. Thus, we call this last model Range yes rather than Fra-Shi in the table. 
The models can be estimated from M 3 , with or without some specific restrictions, see ( 16). Estimation results are given in Table 3 If we compute a LR statistic to test the Double model against the Fra-Anc-Shi model (H 0 : γ = δ = 0), we reject the null hypothesis (P = 0.0033). Conversely, if we test the Range yes model against the Fra-Anc-Shi model (H 0 : γ = 0), we do not reject the null hypothesis (P = 0.6547). From this analysis, the Range yes model is selected. It is interesting to observe that the Range yes model provides efficiency gains compared to the single-bounded and Shift models: confidence intervals and standard errors of the mean and of the dispersion of WTP are smaller. However, the Shift model is selected from M 2 and the Range yes is selected from M 3 : these two models are non-nested and no inference is used to select one model. Next, we consider the model developed in this article, that considers starting pointbias with respondent's uncertainty. This model can be estimated from a more general model M 1 and corresponds to the case δ 1 ≤ 0 ≤ δ 2 , see (3). An interesting feature of M 1 is that the Double and the Shift model are special cases, respectively with the restrictions δ 1 = δ 2 = 0 and δ 1 = δ 2 . Thus, even if the Range and the Shift models are non-nested, we can test them through M 1 . Estimation results are given in Table 4. The estimation result, obtained with no restrictions, provides δ1 ≤ 0 ≤ δ2 . It corresponds to the case of the Range model and thus, estimation results with no constraints are presented in the line called Range. This result suggests that the Range model is more : δ 2 = 0). We do not reject the null hypothesis (P = 0.5270). From this analysis, the Range yes model is selected. Finally, inference based on M 1 , M 2 and M 3 leads us to select a Range model, where the respondents answers yes if the initial bid belongs to their range of acceptable values6 . This model gives an estimator of the mean WTP close to the single-bounded model (117.0 vs. 113.5) with a smaller confidence interval ([106.7;129.8] vs. [98.1;138.2]) and smaller standard errors (12.8 vs. 17.9). Table 5 presents full econometric results of this model with the single-bounded model. It is clear from this table that the standard errors in the Range yes are always significantly reduced compared to the standard errors in the single-bounded model. In other words, the selected Range model provides both consistent results with the single-bounded model and efficiency gains. Furthermore, we can draw additional information from the Range model. Indeed, from (7) we have: This model provides a range of values, rather than a unique WTP mean value. From our results, we can make a final observation. Estimation of a random effect probit model with an estimated correlation coefficient ρ less than unity suggests that respondents use two different values of WTP to answer the first and the second questions. This is a common interpretation in empirical studies; see [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]. If we restrict our analysis to the standard models (Double, Anchoring, Shift and Anch-Shift), our results leads us to select the Shift model, for which ρ = 0.63 (significantly less than 1). 
However, if we consider a more general model M 1 that encompasses the Range and the Shift models, estimation results leads us to select the Range yes model for which ρ = 1 (the estimation does not restrict the parameter ρ to be equal to one, this estimated value equals to one is obtained from an unrestricted estimation). It suggests that respondents [ Table 1 : 1 Nested Models (n.c.: no constraints) Table 2 : 2 presents estimated means of WTP μ and the dispersion of WTP distributions σ. Standard errors are given in italics and confidence intervals of μ are presented in brackets; they are obtained by simulation with the Krinsky and Robb procedure. Random effect probit models estimated from M 2 M 2 constraint mean WTP disp WTP anchor shift corr. µ c.i. σ s.e. γ s.e. δ s.e. ρ s.e. ℓ Double γ = δ = 0 89.8 [84.4;96.5] 52.6 10.0 - - 0.71 0.16 -177.3 Anchoring δ = 0 133.8 [108.4;175.2] 92.0 44.5 0.51 0.23 - 0.78 0.14 -175.2 Shift γ = 0 119.4 [105.7;139.7] 69.0 19.9 - -26.7 9.1 0.63 0.17 -173.1 Anch-Shift n. c. 158.5 [122.6;210.7] 100.8 53.5 0.46 0.29 -17.1 13.9 0.73 0.16 -172.1 . M 3 constraint mean WTP disp WTP anchor shift corr. µ c.i. σ s.e. γ s.e. δ s.e. ρ s.e. ℓ Double γ = δ = 0 89.8 [84.4;96.5] 52.6 10.0 - - 0.71 0.16 -177.3 Range yes γ = 0 117.0 [106.7;129.8] 65.0 12.8 - -30.7 13.2 1 -171.7 Fra-Anc-Shi n. c. 116.4 [104.6;132.7] 65.1 12.8 -0.02 0.41 -31.7 21.2 1 -171.6 Table 3 : 3 Random effect probit models estimated from M 3 Table 4 : 4 Random effect probit models estimated from M 1 appropriate than the Shift model, otherwise we would have had δ 1 and δ 2 quite similar and with the same sign. This can be confirmed by testing the Shift model against the Range model (H 0 : δ 1 = δ 2 ); we reject the null hypothesis (P = 0.0736) at a nominal level 0.1. In addition, we test the Range yes model against the Range model (H 0 M 1 constraint mean WTP disp WTP shift 1 shift 2 corr. µ c.i. σ s.e. δ 1 s.e. δ 2 s.e. ρ s.e. ℓ Double δ 1 = δ 2 = 0 89.8 [84.4;96.5] 52.6 10.0 - - 0.71 0.16 -177.3 Shift δ 1 = δ 2 119.4 [105.7;139.7] 69.0 19.9 -26.7 9.1 -26.7 9.1 0.63 0.17 -173.1 Range n. c. 126.0 [110.7;147.3] 73.5 21.6 -43.7 27.6 6.5 8.8 1 -171.5 Range yes δ 1 ≤ 0, δ 2 = 0 117.0 [106.7;129.8] 65.0 12.8 -30.7 13.2 - 1 -171.7 Other response effects could explain the difference between estimates of mean WTP, as framing, respondents assumptions about the scope of the program and wastefulness of the government, see[START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] for a dicussion. This is in line with studies putting forward that individuals are rather unsure of their own willingness-to-pay[START_REF] Li | Discrete choice under preference uncertainty: an improved structural model for contingent valuation[END_REF][START_REF] Ready | Contingent valuation when respondents are ambivalent[END_REF], 2001[START_REF] Welsh | Elicitation effects in contingent valuation: comparisons to a multiple bounded discrete choice approach[END_REF][START_REF] Van Kooten | Preference uncertainty in non-market valuation: a fuzzy approach[END_REF][START_REF] Hanley | What's it worth? Exploring value uncertainty using interval questions in contingent valuation[END_REF][START_REF] Alberini | Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty[END_REF]. 
A notable exception is[START_REF] Lechner | A modelisation of the anchoring effect in closed-ended question with follow-up[END_REF] These authors conducted a series of valuation experiments. They observed that "preferences are initially malleable but become imprinted (i.e. precisely defined and largely invariant) after the individual is called upon to make an initial decision". The first bid b 1i is drawn randomly from{5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100}. If the answer to the first bid is no, a second bid b 2i < b 1i is drawn randomly. If the answer to the first bid is yes, a second bid b 2i > b 1i is drawn randomly. There was a high response rate (92.6 %). The Range yes model is empirically equivalent to a special case developed in[START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF], with an anchoring parameter equal to zero. In this last article, the results suggested a specific behavior (shift effect) in ascending sequences only. This interpretation was based on empirical results only, with an unknown underlying decision process. Here, we obtain similar empirical results, but the interpretation of the response behavior is very different. answer both questions according to the same value, contrary to the results obtained with the standard models. Conclusion In this article, we propose a unified framework that accomodates many of the existing models for dichotomous choice contingent valuation with follow-up and allows to discriminate between them by simple parametric tests of hypothese. We further test the Range model, developped in [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], against several others standard models. Our empirical results show that the Range model outperforms other standard models and that, when uncertain, respondents tend to accept proposed bids. It confirms that respondent uncertainty is a valid explanation of various anomalies arising in contingent valuation surveys.
26,350
[ "843051", "1331865" ]
[ "15080", "45168" ]
01760338
en
[ "shs" ]
2024/03/05 22:32:13
2003
https://insep.hal.science//hal-01760338/file/149-%20Bernard-Hausswirth_CyclingBJSP-2003-37-2-154-9.pdf
Thierry Bernard Fabrice Vercruyssen F Grego Christophe Hausswirth R Lepers Jean-Marc Vallier Jeanick Brisswalter email: [email protected] Effect of cycling cadence on subsequent 3 km running performance in well trained triathletes come Effect of cycling cadence on subsequent 3 km running performance in well trained triathletes uring the last decade, numerous studies have investigated the effects of the cycle-run transition on subsequent running adaptation in triathletes. [START_REF] Millet | Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training[END_REF] Compared with an isolated run, the first few minutes of triathlon running have been reported to induce an increase in oxygen 2-4 fatigue and/or metabolic load induced by a prior cycling event on subsequent running performance. To the best of our knowledge, few studies have examined the effect of cycling task characteristics on subsequent running performance. [START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF][START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] Hausswirth et al 15 16 indicated that riding in a continuous uptake (V ~O2 ) and heart rate (HR), an alteration in drafting position, compared with the no draft modality, ventilatory efficiency (V ~E), [START_REF] Hue | The influence of prior cycling on biomechanical and cardiorespiratory response profiles during running in triathletes[END_REF] and haemodynamic modifications-that is, changes in muscle blood flow. [START_REF] Kreider | Cardiovascular and thermal response of triathlon performance[END_REF] Moreover, changes in running pattern have been observed after cycling, such as an increase in stride rate 3 6 and modifications in trunk gradient, knee angle in the nonsupport phase, and knee extension during the stance phase. [START_REF] Hausswirth | Relationships between mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF] These changes are generally related to the appearance of leg muscle fatigue characterised by perturbation of electromyographic activity of different muscle groups. [START_REF] Witt | Coordination of leg muscles during cycling and running in triathlon[END_REF] Recently, from a laboratory study, Vercruyssen et al [START_REF] Vercruyssen | Influence of cycling cadences on subsequent running performance in triathlon[END_REF] reported that it is possible for triathletes to improve the adaptation from cycling to running at an intensity corresponding to Olympic distance competition pace (80-85% maximal oxygen uptake (<V>O 2 MAX)). They showed a lower metabolic load during a running session after the adoption of the energetically optimal cadence (73 rpm) calculated from the V ~O2cadence relation [START_REF] Brisswalter | Energetically optimal cadence vs. 
freely chosen cadence during cycling: effect of exercise duration[END_REF][START_REF] Coast | Linear increase in optimal pedal rate with increased power output in cycle ergometry[END_REF][START_REF] Marsh | The association between cycling experience and preferred and most economical cadences[END_REF][START_REF] Marsh | Effect of cycling experience, aerobic power and power output on preferred and most economical cycling cadences[END_REF] compared with the freely chosen cadence (81 rpm) or the theoretical mechanical optimal cadence (90 rpm). [START_REF] Neptune | A theorical analysis of preferred pedaling rate selection in endurance cycling[END_REF] Furthermore, Lepers et al [START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] indicated that, after cycling, neuromuscular factors may be affected by exercise duration or choice of pedalling cadence. They observed, on the one hand, the appearance of neuromuscular fatigue after 30 minutes of cycling at 80% of maximal aerobic power, and, on the other hand, that the use of a low (69 rpm) or high (103 rpm) cycling cadence induced a specific neuromuscular adaptation, assessed by the variation in RMS/M wave ratio interpreted as the central neural input change. From a short distance triathlon race perspective characterised by high cycling or running intensities, these observations raise a major question about the effect of neuromuscular significantly reduced oxygen uptake during cycling and improved the performance of a 5000 m run in elite triathletes. In addition, Garside and Doran [START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF] showed in recreational triathletes an effect of cycle frame ergonomics: when the seattube angle was changed from 73° to 81°, the performance of the subsequent 10 000 m run was improved-that is, there was a reduction in race time. Therefore, the aim of this study was to examine in outdoor conditions the effects of different pedalling cadences (within the range 60-100 rpm) on the performance of a subsequent 3000 m track run, the latter depending mainly on both metabolic and neuromuscular factors. 17 18 METHODS Participants Nine well motivated male triathletes currently competing at the national level participated in the study. They had been training regularly and competing in triathlons for at least four years. For all subjects, triathlon was their primary activity; their mean (SD) times for Olympic distance and sprint distance triathlons were 120 minutes 37 seconds (3.2) and 59 minutes 52 seconds (3.4) respectively. Mean (SD) training distances a week were 9.1 (1.9) km for swimming, 220.5 (57.1) km for cycling, and 51.1 (8.9) km for running. The mean (SD) age of the subjects was 24.9 (4.0) years. Their mean (SD) body weight and height were 70.8 (3.8) kg and 179 (3.9) cm respectively. The subjects were asked to abstain from exhaustive training throughout the experiment. Finally, they were fully informed of the content of the experiment, and written consent was obtained before all testing, according to local ethical committee guidelines. Maximal cycling test Subjects first performed a maximal test to determine V ~O2 MAX and ventilatory threshold. This test was carried out on an electromagnetically braked ergocycle (SRM; Jülich, Welldorf, Germany), 19 20 on which the handle bars and racing seat are fully adjustable both vertically and horizontally to reproduce the positions of each subject's bicycle. 
No incremental running test was performed in this study, as previous investigations indicated similar V ~O2 MAX values whatever the locomotion mode in triathletes who began the triathlon as their first sport. 21 [22 ] This incremental session began with a warm up of 100 W for six minutes, after which the power output was increased by 30 W a minute until volitional exhaustion. During this protocol, V ~O2 , V ~E, respiratory exchange ratio, and HR were continuously recorded every 15 seconds using a telemetric system collecting gas exchanges (Cosmed K4 , Rome, Italy) previously validated by Hausswirth et al. [START_REF] Hausswirth | The cosmed K4 telemetry system as an accurate device for oxygen uptake measurements during exercise[END_REF] V ~O MAX was determined according to criteria described by Howley et al [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF] that is, a plateau in V ~O2 despite an increase in power output, a respiratory exchange ratio value of 1.15, or an HR over 90% of the predicted maximal HR (table 1). The maximal power output reached during this test was the mean value of the last minute. Moreover, the ventilatory threshold was calculated during the cycling test using the criterion of an increase in V ~E/V ~O with no concomitant increase in V ~E/V ~CO . [START_REF] Wasserman | Anaerobic threshold and respiratory gas exchange during exercise[END_REF] Cycle-run performance sessions All experiments took place in April on an outdoor track. Outside temperature ranged from 22 to 25°C, and there was no appreciable wind during the experimental period. Each athlete completed in random order three cycle-run sessions (20 minutes of cycling and a 3000 m run) and one isolated run (3000 m). These tests were separated by a 48 hour rest period. Before the cycle-run sessions, subjects performed a 10 minute warm up at 33% of maximal power. [START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] During the cycling bout of the cycle-run sessions, subjects had to maintain one of three pedalling cadences corresponding to 60, 80, or 100 rpm. These cycling cadences were representative of the range of cadences selected by triathletes in competition. 15 26 Indeed, it was recently reported that, on a flat road at 40 km/h, cycling cadences could range from 67 rpm with a 53:11 gear ratio to 103 rpm with a 53:17 gear ratio. [START_REF] Lepers | Effect of pedalling rates on physiological response during an endurance cycling exercise[END_REF] However, 60 rpm is close to the range of energetically optimal cadence values, [START_REF] Marsh | Effect of cycling experience, aerobic power and power output on preferred and most economical cycling cadences[END_REF] 80 rpm is near the freely chosen cadence, 6 8 and 100 rpm is close to the cadence used in a drafting situation. 15 16 According to previous studies of the effect of a cycling event on running adaptation, 2 5 the cycling bouts were performed at an intensity above the ventilatory threshold corresponding to 70% of maximal power output (80% V ~O2 MAX) and were representative of a sprint distance simulation. 15 16 The three cycling bouts of the cycle-run sessions were conducted on the SRM system next to the running track. The SRM system allowed athletes to maintain constant power output independent of cycling cadence. In addition, feedback on selected cadence was available to the subjects via a screen placed directly in front of them. 
After cycling, the subjects immediately performed the 3000 m run on a 400 m track. The mean (SD) transition time between the cycling and running events (40.4 (8.1) seconds) was the same as that within actual competition.[START_REF] Millet | Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training[END_REF] During the running bouts, race strategies were free, the only instruction given to the triathletes being to run as fast as possible over the whole 3000 m. Measurement of physiological variables during the cycle-run sessions VO2, VE, and HR were recorded every 15 seconds with the K4 telemetric system. The physiological data were analysed during the cycling bouts at the following intervals: 5th-7th minute (5-7), 9th-11th minute (9-11), 13th-15th minute (13-15), 17th-19th minute (17-19), and every 500 m during the 3000 m run (fig 1). Measurement of biomechanical variables during the cycle-run sessions Power output and pedalling cadence were continuously recorded during the cycling bouts. During the run, kinematic data were analysed every 500 m using a 10 m Optojump system (MicroGate, Timing and Sport, Bolzano, Italy). From this system, speed, contact time, and fly time were recorded every 500 m over the whole 3000 m. The stride rate-stride length combination was calculated directly from these values. Thus the act of measuring the kinematic variables had no effect on the subjects' running patterns within each of the above 10 m optical bands. Blood sampling Capillary blood samples were collected from ear lobes. Blood lactate was analysed using the Lactate Pro system previously validated by Pyne et al.[START_REF] Pyne | Evaluation of the lactate pro blood lactate analyser[END_REF] Four blood samples were collected: before the cycle-run sessions (at rest), at 10 and 20 minutes during the cycling bouts, and at the end of the 3000 m run. Statistical analysis All data are expressed as mean (SD). The stability of the running pattern was described using the coefficient of variation ((SD/mean) × 100) for each athlete.[START_REF] Maruyama | Temporal variability in the phase durations during treadmill walking[END_REF] A two way analysis of variance (cadence × period of time) for repeated measures was performed to analyse the effects of time and cycling cadence using VO2, VE, HR, running velocity, stride variability, speed variability, stride length, and stride rate as dependent variables. For this analysis, the stride and speed variability (in %) were analysed after an arcsine transformation. A Newman-Keuls post hoc test was used to determine differences among all cycling cadences and periods during exercise. In all statistical tests, the level of significance was set at p<0.05.
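For readers who wish to reproduce the two stability indices used here, the short sketch below (Python; the split times are invented and the function names are ours, not part of the original study) illustrates how the coefficient of variation of running speed and the arcsine transformation applied to variability percentages before the ANOVA can be computed from 500 m split measurements.

```python
import numpy as np

def coefficient_of_variation(values):
    """Stability index used for the running pattern: (SD / mean) * 100."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

def arcsine_transform(percentage):
    """Arcsine-square-root transform applied to variability (%) before the ANOVA."""
    proportion = percentage / 100.0
    return np.degrees(np.arcsin(np.sqrt(proportion)))

# Hypothetical 500 m split speeds (km/h) for one athlete during a 3000 m run
splits_kmh = [17.5, 18.1, 17.9, 17.6, 17.4, 18.0]
speed_cv = coefficient_of_variation(splits_kmh)   # speed variability, %
print(round(speed_cv, 2), round(arcsine_transform(speed_cv), 2))
```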
RESULTS 3000 m performances In this study, the performance of the isolated run was significantly better than the run performed after cycling (583.0 (28.3) and 631.1 (47.6) seconds for the isolated run and mean cycle-run sessions respectively). No significant effect of cycling cadence was observed on subsequent 3000 m running performance. Running times were 625.7 (40.1), 630.0 (44.8), and 637.7 (57.9) seconds for the 60, 80, and 100 rpm run sessions respectively (table 2). The mean running speed during the first 500 m (fig 2) was significantly lower after the 60 rpm ride than after the 80 and 100 rpm cycling bouts (17.5 (1.1), 18.3 (1.1), and 18.3 (1.2) km/h respectively). In addition, the speed variability (from 500 to 2500 m) was significantly lower during the 60 rpm run session than for the other cycle-run conditions (2.18 (1.2)%, 4.12 (2.0)%, and 3.80 (1.8)% for the 60, 80, and 100 rpm run respectively). Cycling bouts of cycle-run sessions During the 20 minutes at 60, 80, and 100 rpm cycling bouts, average cadences were 61.6 (2.6), 82.7 (4.3) and 98.2 (1.7) rpm respectively. Mean HR and VE recorded during the 100 rpm cycling bout were significantly higher than in the other cycling conditions. Furthermore, blood lactate concentrations were significantly higher at the end of the 100 rpm bout than after the 60 and 80 rpm cycling bouts (7.0 (2.0), 4.6 (2.1) and 5.1 (2.1) mmol/l respectively, p<0.05). Conversely, no effect of either pedalling rate or exercise duration was found on VO2 (table 2, p>0.05). Running bouts of cycle-run sessions Table 2 gives mean values for VO2, VE, and HR for the running bouts. The statistical analysis indicated a significant interaction effect (period of time × cycling cadence) on VO2 during subsequent running (p<0.05). VO2 values recorded during the run section of the 60 rpm session were significantly higher than during the 80 rpm or the 100 rpm sessions (p<0.05, table 2). These values represent respectively 92.3 (3.0)% (60 rpm run), 85.1 (0.6)% (80 rpm run), and 87.6 (1.2)% (100 rpm run) of cycle VO2max, indicating a significantly higher fraction of VO2max sustained by subjects during the 60 rpm run session from 1000 to 3000 m than under the other conditions (p<0.05, fig 3). Changes in stride rate within the first 500 m of the 3000 m run were significantly greater during the 80 and 100 rpm run sessions than during the 60 rpm run session (1.52 (0.05), 1.51 (0.05), and 1.48 (0.03) Hz respectively). No significant effect of cycling cadence was found on either stride variability during the run or blood lactate concentration at the end of the cycle-run sessions (table 2). DISCUSSION The main observations of this study confirm the negative effect of a cycling event on running performance when compared with an isolated run. We observed no effect of the particular choice of cycling cadence on the performance of a subsequent 3000 m run. However, our results highlight an effect of the characteristics of the prior cycling event on metabolic responses and running pattern during the subsequent run. Cycle-run sessions v isolated run and running performance To our knowledge only one study has analysed the effect of a cycling event on subsequent running performance when compared with an isolated run.[START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] The study showed, during a sprint distance triathlon (0.75 km swim, 20 km bike ride, 5 km run), a significant difference between a 5 km run after cycling (alone and in a sheltered position) and the run performed without a prior cycling event (isolated run). The cycling event caused an increase in mean 5 km race time (1014 seconds) and a decrease in mean running velocity (17.4 km/h) compared with the isolated run (980 seconds and 18.2 km/h). Our results are in agreement, showing an impairment in running performance after the cycling event whatever the choice of pedalling cadence. There was an increase in mean running time (631 seconds) and a decrease in mean running velocity (17.2 km/h) compared with the performance in the isolated run (583 seconds and 18.5 km/h). Therefore, one finding of our study is that a prior cycling event can affect running performance over the 3 km as well as the 5 km and 10 km distances. 1 29 One hypothesis to explain the alteration in running performance after cycling could be the high metabolic load sustained by subjects at the end of cycling, characterised by an increase in blood lactate concentration (4-6 mmol/l) associated with a high fraction of VO2max (81-83%) and HRmax (88-92%). On the other hand, Lepers et al[START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] have recently shown in well trained triathletes a reduction in muscular force relating to both central and peripheral factors - that is, changes in M wave and EMG RMS - after 30 minutes of cycling performed at different pedalling cadences (69-103 rpm). We hypothesise that these modifications of neuromuscular factors associated with increasing metabolic load during cycling could increase the development of fatigue just before running, whatever the choice of pedalling cadence. Cycling cadences and physiological and biomechanical characteristics of running Our results show no effect of different cycling cadences (60-100 rpm) commonly used by triathletes on subsequent running performance. A classical view is that performance in triathlon running depends on the characteristics of the preceding cycling event, such as power output, pedalling cadence, and metabolic load. 1 29 Previous investigations have shown a systematic improvement in running performance when the metabolic load of the cycling event was reduced either by drafting position[START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] or racing on a bicycle with a steep seat-tube angle (81°).[START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF] Unlike a 3000 m run, which is characterised by neuromuscular and anaerobic factors, 17 18 the improvement in running performance in these previous studies was observed over a variety of long distances (5-10 km) where the performance depends mainly on the capacity of the subject to minimise energy expenditure over the whole race. 1 14 15 29 Therefore one explanation for our results is that minimisation of metabolic load through cadence choice during cycling has a significant effect on the running time mainly during events of long duration. Further research is needed into the effect of cadence choice on total performance for running distances close to those of Olympic and Ironman triathlon events. However, despite the lack of cadence effect on 3000 m race time, our results indicate an effect of cadence choice (60-100 rpm) on the stride pattern or running technique during a 3000 m run. This difference was mainly related to the higher velocity preferred by subjects immediately after cycling at 80 and 100 rpm and to the lower velocity from 1500 to 2500 m after cycling at high cadences. These results may suggest that the use of a low pedalling cadence (close to 60 rpm) reduces variability in running velocity - that is, one of the factors of running technique - during a subsequent run. For running speeds above 5 m/s (>18 km/h) and close to maximum values, the change in stride rate is one of the most important factors in increasing running velocity. In our study, the significant increase in running speed observed during the first 500 m of the 80 and 100 rpm run sessions was associated with a significantly higher stride rate (1.51-1.52 Hz) than in the 60 rpm run session (1.48 Hz). The relation between stride rate and cycling cadence has been reported by Hausswirth et al 16 in elite subjects participating in a sprint distance triathlon, indicating a significantly higher stride rate after cycling at 102 rpm (1.52 Hz) than after cycling at 85 rpm (1.42 Hz) for the first 500 m of the run. These observations suggest that immediately after the cycle stage, triathletes spontaneously choose a race strategy directly related to the pedalling cadence, but this effect seems to be transitory, as no significant differences between conditions were reported after the first 500 m of running. This is in agreement with previous studies in which changes in stride pattern and running velocity were found to occur only during the first few minutes of the subsequent run. 1 3 5 6 Furthermore, the fact that triathletes prefer to run at a high pace after cycling at 80 and 100 rpm seems to confirm different anecdotal reports from triathletes. Most triathletes prefer to adopt a high pedalling cadence during the last few minutes of the cycle section of actual competition.
Three strategies may be evoked to characterise the choice of cycling cadence: speeding up in the last part of the cycle stage in order to get out quickly on the run (when elite triathletes compete in draft legal events)[START_REF] Millet | Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training[END_REF]; reducing power output and spin to minimise the effects of the bike-run transition; maintaining power output while increasing cadence. However, our results show that such a strategy is associated with higher metabolic cost during the cycling stage and greater instability in running pattern, suggesting that it is not physiologically beneficial for the athlete to adopt high pedalling cadences in triathlon competition. During our study, cycling at 100 rpm was associated with an increase in metabolic cost as classically observed in previous studies for a high cadence, such as an increase in VO2, HR, VE,[START_REF] Hagan | Effect of pedal rate on cardiorespiratory responses during continuous exercise[END_REF] and blood lactate concentration.[START_REF] Brisswalter | Energetically optimal cadence vs. freely chosen cadence during cycling: effect of exercise duration[END_REF] At the end of the 100 rpm cycling task, mean blood lactate concentration was 7.0 (2.0) mmol/l, suggesting a high contribution of anaerobic metabolism, 8 whereas it was 4.6 (2.1) mmol/l after cycling at 60 rpm. The effect of pedalling rate on physiological adaptation during prolonged cycling has recently been investigated. 8 13 32 Brisswalter et al[START_REF] Brisswalter | Energetically optimal cadence vs. freely chosen cadence during cycling: effect of exercise duration[END_REF] indicated that cycling at a cadence higher than 95 rpm induces a significant increase in VO2, VE, and lactate concentration after 30 minutes of exercise in triathletes. Moreover, our results show an effect of cycling cadence on aerobic contribution during maximal running performance. The subjects were able to sustain a higher fraction of VO2max during the 60 rpm run session - that is, 92% - than during the 80 and 100 rpm run sessions - 84% and 87% of VO2max respectively (fig 3). These results suggest that the contribution of the anaerobic pathway 17 is more important after the higher cycling rates (80 and 100 rpm) than after the 60 rpm ride and could lead, during a prolonged running exercise, to earlier appearance of fatigue caused by metabolic acidosis. 33 34 In conclusion, our results confirm the alteration in running performance after a cycling event compared with an isolated run. The principal aim of our investigation was to evaluate the impact of different pedalling rates on subsequent running performance. No significant effect of cycling cadence was found on 3000 m running performance, despite some changes in running strategies, stride rate, and metabolic contributions. We chose a running distance of 3000 m to analyse the possible effect of neuromuscular fatigue - previously reported after a 30 minute cycling exercise at the same intensity[START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] - on running performance when neuromuscular and anaerobic factors make important contributions. 17 18 As the effect observed was not significant, the choice of cadence within the usual range does not seem to influence the performance of a middle distance run.
One limiting factor of this study may be the choice of a short exercise duration, because an effect of metabolic load reduction during the cycling stage on running performance was previously observed for runs longer than 5000 m. For multidisciplinary activities such as triathlon and duathlon, further applied research on the relation between cycling cadence and performance of the subsequent run is required to evaluate the influence of the practical conditions and constraints of actual competition. Take home message Compared with an isolated run, completion of a cycling event impairs the performance of a subsequent run independently of the pedalling cadence. However, running strategy, stride rate, and metabolic contribution seem to be improved by the use of a low pedalling cadence (60 rpm). The choice of cycling cadence may have an effect on the running adaptation during a sprint or short distance triathlon. COMMENTARY Much research has been conducted on the effects of cycling on physiological variables measured during subsequent running in triathletes. Few authors, however, have examined the effect of variation in cycling task characteristics on either such variables or overall run performance. This study, examining the effect of different pedalling cadences during a cycle at about 80% VO2max on performance within a succeeding 3 km run by well trained male triathletes, adds to the published work in this area. V Vleck, Chair, Medical and Research Committee of the European Triathlon Union and Senior Lecturer, School of Chemical and Life Sciences, University of Greenwich, London, UK; [email protected] Ergonomie et performance sportive, UFR STAPS, Université de Toulon-Var, France; C Hausswirth, Laboratoire de physiologie et biomécanique, INSEP, Paris, France; R Lepers, Groupe analyse du mouvement, UFR STAPS, Université de Bourgogne, France Figure 1 Representation of the three cycle-run sessions. TR, cycle-run transition; BS, blood samples taken; M1-M4, measurement intervals during cycling at 5-7, 9-11, 13-15, and 17-19 minutes; M5-M10, measurement intervals during running at 500, 1000, 1500, 2000, 2500, and 3000 m; WU, warm up for each condition. Figure 2 Race strategies expressed as the evolution in running velocity during the run bouts (60, 80, 100 rpm). *Significantly different from the running velocity during the 60 rpm run session, p<0.05. Figure 3 Changes in fraction of VO2max (FVO2max) sustained by subjects during the running bouts (60, 80, and 100 rpm). *Significantly different from the initial period, p<0.05; †significantly different from the other conditions, p<0.05. Table 1 Physiological characteristics of the subjects obtained during a maximal cycling test. Values are expressed as mean (SD). VO2max, maximal oxygen uptake (ml/min/kg); VEmax, maximal ventilation (litres/min); HRmax, maximal heart rate (beats/min); VT, ventilatory threshold; MAP, maximal power output (W). Table 2 Mean values for power output and speed, oxygen uptake, expiratory flow, heart rate, blood lactate, and running performance obtained during the cycle-run sessions.
Parameter                          Cycle (60 rpm)  Run           Cycle (80 rpm)  Run           Cycle (100 rpm)  Run
Power output (W)/speed (km/h)      275.4 (19.4)    17.3 (1.1)    277.1 (18.6)    17.2 (1.2)    277.2 (17.2)     17.1 (1.5)
Oxygen uptake (ml/min/kg)          55.6 (4.6)      62.8 (7.3)*   55.3 (4.0)      57.9 (4.1)    56.5 (4.3)       59.7 (5.6)
Expiratory flow (litres/min)       94.8 (12.2)     141.9 (15.9)  98.2 (9.2)      140.5 (14.6)  107.2 (13.0)*    140.5 (21.8)
Heart rate (beats/min)             163.5 (9.5)     184.2 (4.6)   166.1 (10.4)    185.8 (3.1)   170.7 (4.7)*     182.6 (5.0)
Lactataemia (mmol/l)               4.6 (2.1)       9.0 (1.9)     5.1 (2.1)       9.2 (1.2)     7.0 (2.0)*       9.9 (1.8)
Stride rate (Hz)                                   1.48 (0.01)                   1.49 (0.01)                    1.48 (0.02)
Running performance (s)                            625.7 (40.1)                  630.0 (44.8)                   637.6 (57.9)
*Significantly different from the other cycle-run sessions, p<0.05.
29,455
[ "19845", "752657", "1012603", "21253", "1029443" ]
[ "303091", "303091", "303091", "441096", "452825", "303091", "303091" ]
01760353
en
[ "info" ]
2024/03/05 22:32:13
2017
https://hal.science/hal-01760353/file/ifacTechReport.pdf
Oscar Tellez email: [email protected] Samuel Vercraene email: [email protected] Fabien Lehuédé email: [email protected] Olivier Péton email: [email protected] Thibaud Monteiro email: [email protected] Dial-a-ride problem for disabled people using vehicles with reconfigurable capacity Keywords: Transportation logistics, optimization, dial-a-ride problem, large neighborhood search metaheuristic, set covering problem The aim of this paper is to address the dial-a-ride problem with heterogeneous users in which the vehicle capacity can be modified en route by reconfiguring its internal layout. The work is motivated by the daily transport of children with disabilities performed by a private company based in Lyon Métropole, France. Every day, a fleet of configurable vehicles is available to transport children to medico-social establishments. The objective of this work is then to help route planners with fleet dimensioning and to take reconfiguration opportunities into consideration in the design of routes. Due to the number of passengers and vehicles, real-size instances are intractable for mixed-integer programming solvers and exact solution methods. Thus, a large neighborhood search metaheuristic combined with a set covering component is proposed. The resulting framework is evaluated on real-life instances from the transport company. INTRODUCTION The standard Dial-a-Ride Problem (DARP) consists in designing vehicle routes in order to serve transportation demands scattered through a geographic area. The global objective is to minimize the transportation cost while satisfying demands. In contrast to the Pickup and Delivery Problem (PDP), DARP applications concern the transportation of persons. Hence, constraints or objectives related to the quality of service should be taken into consideration. In the context of door-to-door transportation of elderly and disabled people, the number of applications has grown considerably in recent years. The population is aging in developed countries, and many people with disabilities cannot use public transport. As a result, new transport modes, public and private, arise to satisfy their transportation needs. Demands from people with disabilities differ in their need for special equipment such as wheelchair spaces or stretchers, thus requiring the use of adapted vehicles. [START_REF] Parragh | Introducing heterogeneous users and vehicles into models and algorithms for the dial-a-ride problem[END_REF] introduced the DARP with heterogeneous users and vehicles, and solved instances with up to 96 user requests. [START_REF] Qu | The heterogeneous pickup and delivery problem with configurable vehicle capacity[END_REF] extended this problem by considering vehicles with configurable capacity. Different categories of users express special needs such as regular seats or wheelchair spaces. These demands are served by configurable vehicles. The goal is to find the most convenient vehicle configuration for each route. In this paper we present a generalization of the PDP with configurable vehicle capacity. Contrary to the work by [START_REF] Qu | The heterogeneous pickup and delivery problem with configurable vehicle capacity[END_REF], we allow vehicles to be reconfigured en route. The other difference is that we determine the fleet dimension instead of starting with a limited fleet. We call this variant the dial-a-ride problem with reconfigurable vehicle capacity (DARP-RC). Note that it is not a heterogeneous variant because only one vehicle type is considered.
The use of hybrid methods or matheuristics has become quite popular in recent years for routing problems. Our solution method is inspired by the framework of [START_REF] Grangier | A matheuristic based on large neighborhood search for the vehicle routing problem with crossdocking[END_REF] to solve the vehicle routing problem with cross-docking. We combine a Large Neighborhood Search (LNS) metaheuristic with a Set Covering Problem (SCP). The contribution of this paper is therefore to introduce and solve the DARP-RC. Moreover, we compare the results of the combined LNS-SCP approach with pure LNS and adaptive large neighborhood search (ALNS). INDUSTRIAL CONTEXT This work is motivated by the daily transport of people with disabilities at Lyon Métropole. One segment of this service is operated by the GIHP company on a regular basis. Every day, a fleet of configurable vehicles transports around 500 children from and to Medico-Social Establishments (MSE) for rehabilitative treatment. GIHP serves around 60 MSEs with around 180 adapted vehicles. One of the particularities of the vehicle fleet is the capacity to reconfigure its internal layout to trade seats for wheelchair spaces as needed. 1 For MSEs, transportation is often considered the second biggest expense after wages. As a consequence, optimizing the transport becomes a priority. Every year, the company makes strategic choices in the definition and constitution of this fleet. Then, routing decisions are re-evaluated daily by route planners. These decisions are often taken without the help of decision-making tools such as vehicle routing software. This is why route planners resort to suboptimal simplifications, such as designing separate routes for each MSE or ignoring vehicle reconfiguration possibilities. Such measures can reduce pooling gains and increase operating costs. PROBLEM DEFINITION In the classic DARP, a homogeneous vehicle fleet is assumed. All vehicles have the same single capacity type and are located at a single depot [START_REF] Cordeau | A tabu search heuristic for the static multi-vehicle dial-a-ride problem[END_REF]. The proposed DARP-RC constitutes an extension of the DARP considering more realistic assumptions such as heterogeneous users (e.g. seats, wheelchairs, stretchers) and vehicles with reconfigurable capacity. Reconfigurable vehicles Vehicles with configurable capacity were introduced in [START_REF] Qu | A Branch-and-Price-and-Cut Algorithm for Heterogeneous Pickup and Delivery Problems with Configurable Vehicle Capacity[END_REF]. In their problem, vehicles were not allowed to change configuration along the route. This assumption allows configurations to be treated as vehicle types with some extra dimensioning constraints. DARP-RC instead, by allowing reconfigurations, introduces the challenge of tracking the configuration at every visited node. Vehicles can have one or several configurations. Each configuration is characterized by its capacity for each user type. Consider for example the vehicle in Fig. 1. In the first configuration it can handle 7 seated people and 1 wheelchair; in the second one, 6 seated people and 2 wheelchairs; in the third one, 4 seated people and 3 wheelchairs. Note that there is no linear relationship between the capacities for the two types of users (one wheelchair cannot be simply converted into one or two seat spaces). Note also that unused chairs are folded and not removed from the vehicle.
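To make the notion of configuration-dependent capacity concrete, the short sketch below represents each configuration of the vehicle of Fig. 1 as a capacity vector and tests which configurations can carry a given on-board load. It is an illustration only (Python; the class and function names are ours, not the authors' code), but it reflects the feasibility check implied by the description above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Configuration:
    seats: int
    wheelchairs: int

# Capacity vectors of the reconfigurable vehicle of Fig. 1
VEHICLE_CONFIGURATIONS = (
    Configuration(seats=7, wheelchairs=1),
    Configuration(seats=6, wheelchairs=2),
    Configuration(seats=4, wheelchairs=3),
)

def feasible_configurations(seated_load, wheelchair_load):
    """Return the configurations able to carry the current on-board load."""
    return [c for c in VEHICLE_CONFIGURATIONS
            if seated_load <= c.seats and wheelchair_load <= c.wheelchairs]

# A load of 5 seated children and 2 wheelchairs only fits the second configuration
print(feasible_configurations(5, 2))   # [Configuration(seats=6, wheelchairs=2)]
```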
Example. The following example illustrates how vehicles with reconfigurable capacity can reduce operating costs. Consider a vehicle with two configurations {c1, c2} as shown in Fig. 2. The first configuration consists of 2 seats and 1 wheelchair space. The second configuration has 4 seats only. Users a and c go to destination M1 while b, d, e and f go to M2. In order to satisfy all user demands, the reconfigurable vehicle can follow the route D → a → b → c → M1 using configuration c1 and d → e → f → M2 → D with configuration c2. Performing the same route without reconfiguring the capacity would imply making an extra detour d → M2 → d using configuration c1 only (see dotted line), therefore increasing transportation costs. Problem Description The DARP-RC is defined on a network composed of a set V of vertices containing the set O+ of vehicle starting depots, the set O- of vehicle arrival depots, the set P of pickup locations and the set D of delivery locations. Without loss of generality, we address the case of morning routes, where the set P corresponds to people's homes or any nearby address, and the set D corresponds to MSEs. The set of users is partitioned into several categories u ∈ U, i.e. seats and wheelchairs. A user request r ∈ R is composed of a pickup at location pr ∈ P, a delivery at location dr ∈ D and the number qru of persons of each type u ∈ U to be transported. Moreover, each request has a maximum ride time Tr that guarantees a given quality of service. The fleet of vehicles is homogeneous. Each vehicle has a set C of possible configurations that provide different capacity vectors. Hence, the vehicle capacity is expressed by quantities Qcu representing the maximal number of users of type u ∈ U that can be transported at a time by the same vehicle when using configuration c ∈ C. Without loss of generality, we consider that if i ∈ P represents a pickup node, then node i+n is the delivery node associated with the same request. It is assumed that different nodes can share the same geographical location. Consider the directed graph G = (A, V) shown in Fig. 3: V = P ∪ D ∪ O corresponds to the set of nodes, and A is the set of arcs connecting the nodes. Every arc (i, j) in the graph represents the shortest path in time between nodes i and j. Its travel time is denoted as tij while its length is denoted as dij. Note that reconfiguration time is not considered because it is negligible compared with service times and it can be performed at MSEs while massive drop-offs take place. The objective function is to minimize the total transportation cost. Three main costs are considered: a fixed cost associated with each vehicle (amortization cost), a time related cost proportional to the total route duration (driver cost) and a distance related cost (vehicle fuel and maintenance costs). RESOLUTION METHOD An exact method has been proposed in [START_REF] Qu | A Branch-and-Price-and-Cut Algorithm for Heterogeneous Pickup and Delivery Problems with Configurable Vehicle Capacity[END_REF] to solve the heterogeneous PDP with configurable vehicle capacity for up to 50 requests. For the large scale applications usually found in real-life situations, metaheuristics provide a good alternative. We propose a matheuristic that combines the LNS metaheuristic with an SCP. It will be denoted as LNS-SCP. Matheuristic framework (LNS-SCP) The matheuristic framework consists of an LNS with a nested SCP solved periodically.
LNS was first proposed by [START_REF] Shaw | Using constraint programming and local search methods to solve vehicle routing problems[END_REF] and introduced under the name ruin and recreate in [START_REF] Schrimpf | Record breaking optimization results using the ruin and recreate principle[END_REF]. In LNS the current solution is improved following an iterative process of destroying (i.e. removing parts of the solution) and repairing the current solution. This process is repeated until a stopping criterion is reached. In our case the stopping criterion is a maximal number of iterations or a maximum computational time. The potential of the approach was revealed by Ropke and Pisinger [START_REF] Ropke | An Adaptive Large Neighborhood Search Heuristic for the Pickup and Delivery Problem with Time Windows[END_REF] who proposed an ALNS consisting of multiple search operators adaptively selected according to their past performance in the algorithm. Algorithm 1 shows the general structure of the LNS-SCP.
Algorithm 1. (The LNS-SCP framework).
Input: Σ-: set of destroy operators, Σ+: set of repair operators, η: nb. of iterations between two calls to the SCP.
Output: best solution found s*.
-Begin-
Pool of routes: Ω ← ∅
Request bank: B ← ∅
s, s': current and temporary solutions
it ← 0
While (termination criterion not met) {
    s' ← s
    Destroy quantity: randomly select a value Φ
    Operator selection: select σ- ∈ Σ- and σ+ ∈ Σ+
    Destroy: apply σ- to remove Φ requests from s'
    Copy the Φ requests into B
    Repair: apply σ+ to insert requests from B into s'
    Ω ← Ω ∪ routes(s') /* add routes into the pool */
    If (acceptance criterion is met) s ← s'
    If (cost of s' is better than cost of s*) s* ← s'
    If (it modulo η = 0) { /* set covering component */
        s'' ← solve set covering problem(Ω)
        Update s* ← s'' if s'' is cheaper than s*
        Update s ← s'' if s'' is cheaper than s
        Perform pool management
    }
    it ← it + 1
}
Return s*
-End-
LNS manages 3 solutions at a time: the current solution s, the temporary solution s' generated after destroying and repairing a copy of s, and the best solution found so far s*. LNS consists of 3 fundamental steps: (1) Determine the number of requests to remove, Φ. We first randomly select the percentage of requests to be removed in the interval [α, β]. (2) Destroy and repair the current solution with randomly selected operators σ- ∈ Σ- and σ+ ∈ Σ+ respectively. This step results in a new temporary solution. (3) Accept or reject the new temporary solution s', according to the record-to-record criterion of [START_REF] Dueck | New optimization heuristics[END_REF]: if objective(s') ≤ (1+χ) × objective(s*), where χ is a small positive value, s' is accepted as the new current solution. The SCP component is performed every η iterations. The purpose of the SCP component is to correct the LNS bias of discarding good routes that are part of costly solutions. Every new route is a candidate to be stored in a pool Ω of routes. Implementation details are presented in Section 4.3. LNS operators A key aspect of LNS is the set of destroy and repair operators. With the goal of keeping the framework as simple as possible, we tried to reduce the number of operators used without sacrificing solution quality. In the end, only 2 destroy and 2 repair operators were kept in the framework, based on the performance obtained on the benchmark instances of Section 5. Destroy operators determine the set of requests to be removed from the solution according to a certain criterion. In the framework of [START_REF] Pisinger | A general heuristic for vehicle routing problems[END_REF] 7 destroy operators are proposed. After testing our framework on literature instances we found that random removal and historical node-pair removal were sufficient to obtain competitive results. For details about the implementation of these operators, please refer to [START_REF] Pisinger | A general heuristic for vehicle routing problems[END_REF]. Repair operators rebuild a partially destroyed solution in order to restore a complete feasible solution. This operation consists in reinserting nodes one by one from the request bank into the solution according to a specific criterion.
Every insertion must satisfy all problem constraints. In our case, time windows, ride times and capacity constraints have to be respected. If the operator does not fully repair the solution due to feasibility requirements, the objective function is strongly penalized by multiplying it by a big constant value. The two most common repair operators for the DARP are the cheapest insertion and the k-regret heuristics, both employed in the LNS-SCP. We implemented the k-regret heuristics with values of k varying from 2 to 4. Set Covering Problem (SCP) In LNS, a solution is rejected solely based on its cost with respect to the best solution so far. A rejected solution may contain some good routes, which are also removed. This issue is addressed by storing the routes found by LNS and using them in an SCP to find new solutions. In the following lines, we present the mathematical model and the key components of its implementation. Let Ω be the set of routes in the pool collected through the LNS iterations, and Vω ∈ R+ the cost of route ω ∈ Ω. To describe the itinerary followed by each route, we define the values Rrω, which are set to 1 if request r ∈ R is served by route ω ∈ Ω, and 0 otherwise. The set covering problem aims at determining the value of the binary variables yω, where yω = 1 if route ω ∈ Ω is part of the solution and 0 otherwise. The set covering problem is defined by the following model:
min Σω∈Ω Vω yω    (1)
s.t. Σω∈Ω Rrω yω ≥ 1    ∀r ∈ R,    (2)
yω ∈ {0, 1}    ∀ω ∈ Ω.    (3)
In every iteration, after calling the repair operator, the current routes are memorized in the pool Ω. In order to reduce the number of variables of the SCP, only non-dominated routes are saved, according to Proposition 1. Proposition 1. (Route dominance). Route ω1 dominates route ω2 if ω1 visits the same set of nodes as ω2 at a lower cost. The SCP is solved with a MILP solver every η iterations. The solver is initialized with the best known solution and solved given a time limit Tlimit. This implies that the SCP is not always solved to optimality. If the obtained solution is better than the current solution s then the current solution is updated. Otherwise s remains unchanged. Similarly, the best solution s* is updated if a better solution is found. As constraint (2) of the set covering model allows request duplicates in the output solution, all duplicates are removed to obtain a consistent solution; the cheapest one is always conserved. If the solver fails to find an optimal solution within the time limit Tlimit, the pool is cleared and filled again with the routes of the best known solution. This step is referred to as pool management in Alg. 1.
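To make the interplay between the LNS loop and the periodic set-covering step concrete, the sketch below gives a minimal Python rendition of Algorithm 1 and of model (1)-(3). It is an illustration only, not the authors' code: the solution object, operators, route pool and helper functions are placeholders supplied by the caller, and the open-source PuLP/CBC modeller stands in for the CPLEX solver used in the paper.

```python
import random
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, PULP_CBC_CMD

def solve_set_covering(pool, requests, time_limit=3):
    """Model (1)-(3): choose a minimum-cost subset of pooled routes covering every request.
    `pool` is a list of dicts {"cost": float, "requests": set of request ids}."""
    prob = LpProblem("SCP", LpMinimize)
    y = {w: LpVariable(f"y_{w}", cat=LpBinary) for w in range(len(pool))}
    prob += lpSum(pool[w]["cost"] * y[w] for w in y)                    # objective (1)
    for r in requests:                                                  # covering constraints (2)
        prob += lpSum(y[w] for w in y if r in pool[w]["requests"]) >= 1
    prob.solve(PULP_CBC_CMD(msg=False, timeLimit=time_limit))           # y binary by construction (3)
    return [pool[w] for w in y if y[w].value() == 1]

def lns_scp(initial, destroy_ops, repair_ops, cost, collect_routes, rebuild_solution,
            eta=1000, chi=0.05, alpha=0.10, beta=0.40, max_iter=20000):
    """Skeleton of Algorithm 1; operators and helpers are caller-supplied placeholders."""
    s = best = initial
    pool = collect_routes(initial)                      # pool of candidate routes
    for it in range(1, max_iter + 1):
        phi = max(1, int(len(s.requests) * random.uniform(alpha, beta)))
        removed, partial = random.choice(destroy_ops)(s, phi)
        candidate = random.choice(repair_ops)(partial, removed)
        pool += collect_routes(candidate)
        if cost(candidate) <= (1 + chi) * cost(best):   # record-to-record acceptance
            s = candidate
        if cost(candidate) < cost(best):
            best = candidate
        if it % eta == 0:                               # nested set covering component
            routes = solve_set_covering(pool, best.requests)
            repaired = rebuild_solution(routes)         # remove duplicated requests, keep cheapest
            if cost(repaired) < cost(best):
                best = repaired
            if cost(repaired) < cost(s):
                s = repaired
            # simplified pool management (the paper resets the pool only when the solver
            # hits its time limit); here the pool is rebuilt from the best solution
            pool = collect_routes(best)
    return best
```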
RESULTS In order to determine the added value of the set covering component, we implemented three LNS variants: a classic LNS (using the operators k-regret, random removal, historical node-pair removal, worst removal, time related removal and distance related removal), an adaptive LNS (ALNS) using the same set of operators, and the proposed LNS-SCP (using only best insertion, k-regret, random removal and historical node-pair removal). In all cases k = 2, 3, 4 regret was used. The MILP solver used for the SCP is IBM ILOG CPLEX 12.6, running on a single thread. The SCP is solved every η = 1000 iterations with a time limit of Tlimit = 3 seconds. The acceptance criterion is set to χ = 5% and the percentages used in the destroy operators are α = 10% and β = 40% as in [START_REF] Pisinger | A general heuristic for vehicle routing problems[END_REF]. In order to test the framework, a set of 9 instances of different sizes is proposed. The first number in the instance name refers to the number of considered requests. The service time for able-bodied passengers was set to 2 minutes for pickup and 1 minute for delivery, while the service time for wheelchair users was 2 minutes for pickup and 5 minutes for delivery. There is no time window at pickup locations. There is a single depot for vehicles, with infinite capacity. Vehicles are all of the type illustrated in Fig. 1. Two experiments were completed: one limiting the number of iterations (Table 1) and the other limiting the computation time (Table 2). In both cases the LNS, ALNS and LNS-SCP metaheuristics are compared. 5 runs per instance type were considered. The average gap (Gapavg) was computed with respect to the best known solution (BKS) found throughout both experiments. In Table 1 it can be observed that in most cases the SCP component improves the objective function, on average by 2.96% (= 3.99% - 1.03%) with regard to the LNS and by 1.21% (= 2.24% - 1.03%) with respect to the ALNS version of the algorithm. When the computation time is limited to 30 minutes, a similar behavior can be observed, as shown in Table 2, although with a higher gap for the last instances. This is expected, as the computation time is not large enough to obtain good results on big instances. Looking at very large instances, in both benchmarks ALNS outperforms the other methods, which may indicate that an ALNS-SCP could be relevant for evaluation. Ongoing experimentation also aims to establish the best value of the parameter Tlimit. In particular, a greater value for this parameter may be beneficial for large size instances. Table 3 shows detailed information on the best solution found for each instance. It shows the computational time (t), the best cost among the five runs (obj), the number of reconfigurations (rec), the number of routes in the solution (routes) and the average number of requests per route (requests/route). The maximum number of reconfigurations was 1 for most instances. These results are influenced by the vehicle fixed cost and the maximum ride time constraints. Nevertheless, a deeper study should be done to determine precisely the key factors for reconfiguration. Finally, we can observe an average number of requests per route of 7.31, which is higher than the vehicle capacity. CONCLUSION Throughout the paper we have described the dial-a-ride problem using vehicles with en-route reconfigurable capacity and a solution procedure based on LNS and SCP. By analyzing the performance of the LNS-SCP on real-life instances, we could observe some significant gains compared with LNS and ALNS. We also observed that the best LNS-SCP solutions reconfigure en route at most once for most of the instances. The perspectives are therefore to extend the study to heterogeneous vehicles and to make an exhaustive evaluation to characterize the key factors of vehicle reconfiguration. From the solution method side, we aim to establish the best value for the SCP time limit, in particular for large size instances, and to include the ALNS-SCP in the evaluation. Fig. 1. Example of reconfigurable vehicle (source: www.handynamic.fr) Fig. 2. Comparison of routes with and without capacity reconfiguration
Table 2. Performance comparison of metaheuristics in 30 minutes over 5 runs
Table 3. Best results
21,730
[ "743830", "4418", "202", "2487", "2959" ]
[ "145304", "145304", "489559", "481384", "489559", "481384", "145304" ]
01760448
en
[ "chim", "sdu", "sde" ]
2024/03/05 22:32:13
2017
https://hal.univ-lorraine.fr/hal-01760448/file/Abuhelou%20et%20al%2C%202017%2C%20ESPR.pdf
Fayez Abuhelou Laurence Mansuy-Huault Catherine Lorgeoux Delphine Catteloin Valéry Collin Allan Bauer Hussein Jaafar Kanbar Renaud Gley Luc Manceau Fabien Thomas Emmanuelle Montargès-Pelletier Suspended Particulate Matter Collection Methods influence the Quantification of Polycyclic Aromatic Compounds in the River System Keywords: Continuous Flow Centrifuge, Filtration, Polycyclic Aromatic Compounds, Suspended Particulate Matter In this study, we compared the influence of two different collection methods, filtration (FT) and continuous flow field centrifugation (CFC) on the concentration and the distribution of polycyclic aromatic compounds (PACs) in suspended particulate matter (SPM) occurring in river waters. SPM samples were collected simultaneously with FT and CFC from a river during six sampling campaigns over two years, covering different hydrological contexts. SPM samples were analyzed to determine the concentration of PACs including 16 polycyclic aromatic hydrocarbons (PAHs), 11 oxygenated PACs (O-PACs) and 5 nitrogen PACs (N-PACs). Results showed significant differences between the two separation methods. In half of the sampling campaigns, PAC concentrations differed from a factor 2 to 30 comparing FT and CFC collected SPMs. The PAC distributions were also affected by the separation method. FT-collected SPM were enriched in 2-3 ring PACs whereas CFC-collected SPM had PAC distributions dominated by medium to high molecular weight compounds typical of combustion processes. This could be explained by distinct cut-off threshold of the two separation methods and strongly suggested the retention of colloidal and/or fine matter on glass-fiber filters particularly enriched in low molecular PACs. These differences between FT and CFC were not systematic but rather enhanced by high water flow rates. Introduction PACs constitute a wide group of organic micropollutants, ubiquitous in aquatic environments. They include the 16 PAHs identified as priority pollutants by the United States Environmental Protection Agency due to their mutagenic and carcinogenic properties [START_REF] Keith | ES&T Special Report: Priority pollutants: I-a perspective view[END_REF]) (e.g. benzo(a)pyrene, benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(k)fluoranthene, indeno(123-cd)pyrene and dibenzo(a,h)anthracene). PAHs originate from pyrolytic or petrogenic sources and are used as markers of combustion processes, fuel spills or tar-oil contaminations to trace inputs in the environment. Among PACs, oxygen (O-PACs) and nitrogen (N-PACs) containing polycyclic aromatic compounds are emitted from the same sources as PAHs but can also be the products of photochemical, chemical or microbial degradation of PAHs [START_REF] Kochany | Abiotic transformations of polynuclear aromatic hydrocarbons and polynuclear aromatic nitrogen heterocycles in aquatic environments[END_REF][START_REF] Bamford | Nitro-polycyclic aromatic hydrocarbon concentrations and sources in urban and suburban atmospheres of the Mid-Atlantic region[END_REF][START_REF] Tsapakis | Diurnal Cycle of PAHs, Nitro-PAHs, and oxy-PAHs in a High Oxidation Capacity Marine Background Atmosphere[END_REF][START_REF] Lundstedt | Sources, fate, and toxic hazards of oxygenated polycyclic aromatic hydrocarbons (PAHs) at PAH-contaminated sites[END_REF][START_REF] Biache | Bioremediation of PAH-contamined soils: Consequences on formation and degradation of polar-polycyclic aromatic compounds and microbial community abundance[END_REF]. 
These polar PACs have recently received increasing attention in the monitoring of coking plant sites because of their toxicity. More soluble than their parent PAHs, their transfer from soil to river should be enhanced but reports on their occurrence in aquatic environments are scarce [START_REF] Qiao | Oxygenated, nitrated, methyl and parent polycyclic aromatic hydrocarbons in rivers of Haihe River System, China: Occurrence, possible formation, and source and fate in a water-shortage area[END_REF][START_REF] Siemers | Development and application of a simultaneous SPE-method for polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, heterocyclic {PAHs} (NSO-HET) and phenols in aqueous samples from German Rivers and the North Sea[END_REF]. PACs enter the river systems through gas exchange at the air-water interface for the most volatile compounds, or associated to soot particles for the high molecular weight PACs, through atmospheric deposition and run-off or leaching of terrestrial surfaces [START_REF] Cousins | A review of the processes involved in the exchange of semi-volatile organic compounds (SVOC) across the air-soil interface[END_REF][START_REF] Heemken | Temporal Variability of Organic Micropollutants in Suspended Particulate Matter of the River Elbe at Hamburg and the River Mulde at Dessau, Germany[END_REF][START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF][START_REF] Gocht | Accumulation of polycyclic aromatic hydrocarbons in rural soils based on mass balances at the catchment scale[END_REF]). These compounds partition among the entire water column, depending on their physical-chemical properties (solubility, vapor pressure, and sorption coefficient), and the hydrologic conditions in the river [START_REF] Zhou | The partition of fluoranthene and pyrene between suspended particles and dissolved phase in the Humber Estuary: a study of the controlling factors[END_REF]. The low molecular weight PAHs are found in the dissolved phase whereas the high molecular PAHs are associated to particulate or colloidal matter [START_REF] Foster | Hydrogeochemistry and transport of organic contaminants in an urban watershed of Chesapeake Bay (USA)[END_REF][START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF]. Although less studied than the sediments or the dissolved phase of the water column, the suspended particulate matter (SPM) plays a major role in the transport and fate of hydrophobic micropollutants in rivers and numerous studies focus on their characterization [START_REF] Fernandes | Polyaromatic hydrocarbon (PAH) distributions in the Seine River and its estuary[END_REF][START_REF] Bianchi | Temporal variability in terrestrially-derived sources of particulate organic carbon in the lower Mississippi River and its upper tributaries[END_REF][START_REF] Patrolecco | Occurrence of priority hazardous PAHs in water, suspended particulate matter, sediment and common eels (Anguilla anguilla) in the urban stretch of the River Tiber (Italy)[END_REF][START_REF] Maioli | Distribution and sources of aliphatic and polycyclic aromatic hydrocarbons in suspended particulate matter in water from two Brazilian estuarine systems[END_REF][START_REF] Chiffre | PAH occurrence in chalk river systems from the Jura region (France). 
Pertinence of suspended particulate matter and sediment as matrices for river quality monitoring[END_REF][START_REF] Meur | Spatial and temporal variations of Particulate Organic Matter from Moselle River and tributaries: A multimolecular investigation[END_REF]. In this perspective, the reliability of the process of sampling collection is a crucial prerequisite to ensure the quality of the analyses and the conclusions that can be drawn from their study. Several methods are used to collect SPM from aquatic systems (e.g. [START_REF] Bates | Collection of Suspended Particulate Matter for Hydrocarbon Analyses: Continuous Flow Centrifugation vs. Filtration[END_REF][START_REF] Rossé | Effects of continuous flow centrifugation on measurements of trace elements in river water: intrinsic contamination and particle fragmentation[END_REF][START_REF] Ademollo | The analytical problem of measuring total concentrations of organic pollutants in whole water[END_REF]. Sediment traps and field continuous flow centrifugation rely on the size and density properties of particles to promote their separation from water, similarly to sedimentation occurring in natural systems. Both methods offer the advantage of extracting SPM from a large volume of water (several hundred liters) and then provide a large amount of SPM, statistically representative because it integrates a large time window of at least several hours. Filtration is the most widespread technique used for SPM collection since it is easy to handle on the field and in the lab. The separation is controlled by the pore size of the filters. It is generally performed on small volumes that represent only a snapshot of river water. Several studies have pointed out the advantages and drawbacks of the different techniques. The distribution of organic compounds between dissolved and particulate phases is strongly affected by the separation technique. An overestimate of organic compounds in the particulate phase can be observed when SPM are separated with filtration, assigned to the colloid clogging of the membrane during filtration but these differences seem to depend on the amounts of suspended solids, the organic matter content as well as the ionic strength in the river [START_REF] Bates | Collection of Suspended Particulate Matter for Hydrocarbon Analyses: Continuous Flow Centrifugation vs. Filtration[END_REF][START_REF] Morrison | Filtration Artifacts Caused by Overloading Membrane Filters[END_REF][START_REF] Rossé | Effects of continuous flow centrifugation on measurements of trace elements in river water: intrinsic contamination and particle fragmentation[END_REF][START_REF] Ademollo | The analytical problem of measuring total concentrations of organic pollutants in whole water[END_REF]. Anyway, most of the studies focus on the total concentrations of organic compounds in SPM and seldom discuss the influence of the separation techniques on the distribution of organic compounds although these distributions are often used to trace their origin such as in the case of PAHs. In that perspective, we analyzed the concentration and the distribution of PACs in SPM collected in a river affected for more than one century by intense industrial activities (iron ore mining and steel-making plants) and the associated urbanization. Two sampling methods were compared i.e. field continuous flow centrifugation (CFC) and filtration on glass-fiber filters (FT). 
The study covered different hydrological situations and several sampling sites where the two sampling methods were applied. Two groups of PACs were explored: 16 PAHs representing a group of hydrophobic compounds (3.3 < logK ow < 6.75) and 11 oxygenated and 5 nitrogen PACs, which represent a class of meanly hydrophobic properties (2 < logK ow < 5.32). Material and methods Characteristics of sampling sites The Orne River is a left side tributary of the Moselle River, northeast of the Lorraine region, France (Fig 1). It is a small river (Strahler order 4), with an extended watershed area of 1,268 km 2 , covered by forest (26.5 %), agriculture (67 %) and urban land (6 %). The Orne is 85.8 km long and flows from an altitude of 320 to 155 m asl. The monthly averaged flow fluctuates between 1.5 and 19 m 3 s -1 with a mean flow of 8.1 m 3 s -1 and maximum flow rates reaching at 170 m 3 s -1 . This river has been highly impacted by iron ore extraction and steel-making industries during the whole 20 th century. Five different sampling sites were chosen based on criteria of representativeness and accessibility for the field continuous flow centrifuge in the lower part of the Orne river, on the last 23 km before the confluence with the Moselle River: Auboué (AUB), Homécourt (BARB), Joeuf (JOAB), Moyeuvre-Grande (BETH) and Richemont (RICH). BETH site at Moyeuvre-Grande is located at a dam that influences the river hydrology: the water depth ranges between 3 and 4 m while it is meanly 1 m at the other sites and the water speed (<0.5 m s -1 at the dam) is 1.5 to 5 times lower than at other sites. SPM collection SPM were collected at six periods of time between May 2014 and May-June 2016 covering different hydrological situations (Table 1). The field continuous flow centrifugation (CFC) and filtration (FT) were applied to obtain SPM CFC and SPM FT respectively. Additionally, in May and June 2016, water samples from the inlet and outlet of the CFC were collected and filtered back in the laboratory, to obtain SPM FT In-CFC and SPM FT Out-CFC . The CFC operation, as already mentioned by Le [START_REF] Meur | Characterization of suspended particulate matter in the Moselle River (Lorraine, France): evolution along the course of the river and in different hydrologic regimes[END_REF], started with river water being pumped to the mobile CFC (CEPA Z-41 running at 20000 RPM, equivalent to 17.000×g), located 10-50 m beside the river. The CFC feeding flow rate was set to 600 L h -1 . The cut-off threshold of the field centrifuge was shown to be close to 5 µm by measuring the grain-size of waters at the outlet of the centrifuge.m Depending on the campaign, the CFC was run between 1h30 and 3h in order to obtain representative samples in sufficient amounts (from several grams to 100 g of dry matter). The SPM CFC was recovered from the Teflon plates covering the internal surface of the centrifuge bowl and transferred into glass bottles, transported to the lab in an ice-box to be immediately frozen, freeze-dried and stored at 4 o C for further use. Depending on the water turbidity, and in order to collect sufficient amount of SPM FT on filters, 7.5 L of water were collected in amber glass bottles and when necessary, additional 10 or 20 L were collected. All water samples were brought back to the lab and filtered within 24 h. 
To facilitate the filtration process, especially for high turbidity samples, waters were filtered sequentially on pre-weighted glass fiber filters, first on GFD (Whatman, 90 mm diameter, nominal pore size = 2.7 µm) followed by GFF (Whatman, 90 mm diameter, nominal pore size = 0.7 µm). Filters were then wrapped in aluminum foil, frozen, freeze-dried and weighted individually, to ± 0.01 mg, and the SPM content on each filter in mg L -1 was determined as the difference between the filter weight before and after filtration process. Analytical methods Global parameters and elemental content Water temperature, electric conductivity (EC) and turbidity were measured using a portable multiparameter device (Hach®). The Dissolved Organic Carbon (DOC) was measured with an automated total organic C analyzer (TOC-VCPH. Shimadzu, Japan) on filtered water (0.22 µm syringe filters) stored in brown glass flasks at 4°C. The Particulate Organic Carbon (POC) was determined on the carbonate-free freeze-dried samples of SPM CFC (1 M HCl; left to stand 1 h; shaken 0.5 h) and measured using a CS Leco SC144 DRPC analyzer and/or a CS Horiba EMIA320V2 analyzer at SARM-CRPG Laboratory. The grain size distribution of particles in raw waters (except for the campaign of May 2014) was determined using laser diffraction (Helos, Sympatec). The raw waters were introduced into the Sucell dispersing unit and were ultrasonicated for 20 seconds. Duplicate or triplicate measurements were performed to improve the measurement quality with and without ultrasound treatment. The particle size distribution was then represented as volumetric percentage as a function of particle diameter. In addition, the percentiles (Di) of the particles were calculated using Helos software. Di is the i th percentile, i.e. the particle diameter at which i % of the particles in the sample is smaller than Di (in µm). Sample treatment Up to 2 g of dry matter of SPM CFC and from 0.06 to 1.4 g of dry matter SPM FT were extracted with an Accelerated Solvent Extractor (Dionex® ASE350). ASE cells were filled with activated copper powder (to remove molecular sulphur) and sodium sulfate (Na 2 SO 4 to remove remaining water) and pre-extracted for cleaning. Samples were extracted at 130 o C and 100 bars with dichloromethane (DCM) with two cycles of 5 min [START_REF] Biache | Effects of thermal desorption on the composition of two coking plant soils: Impact on solvent extractable organic compounds and metal bioavailability[END_REF]). After adjusting the volume at 5 mL, a 1 mL aliquot was taken out for clean-up step. It was spiked with external extraction standards (mixture of 6 deuterated compounds: 2 H 12 ]Benzo[ghi]perylene) to control the loss during the sample preparation. The 1 mL aliquot was then evaporated to dryness using a gentle N 2 flow, diluted into 200 µL of hexane and transferred onto the top of a silica gel column pre-conditioned with hexane. The aliphatic fraction was eluted using 3.5 mL of hexane. PAC fraction was eluted with 2.5 mL of hexane/DCM (65/35; v/v) and 2.5 mL of methanol/DCM (50/50; v/v). 
The latter fraction was spiked with 20 µL at 12µg mL -1 of internal quantification standards (mixture of 8 deuterated compounds: [ 2 H 8 ]Naphthalene, [ 2 H 10 ]Acenaphthene, [ 2 H 10 ]Phenanthrene, [ 2 H 10 ]Pyrene, [ 2 H 12 ]Chrysene, [ 2 H 12 ]Perylene and[ 2 H [ 2 H 8 ]Dibenzofuran, [ 2 H 10 ]Fluorene, [ 2 H 10 ]Anthracene, [ 2 H 8 ]Anthraquinone, [ 2 H 10 ]Fluoranthene, [ Validation and quality control The quantitative analyses of PACs were carried out using internal calibration using specific family standard (refer to supporting information S1). For each quantified compound, the GC/MS was calibrated between 0.06 and 9.6 µg mL -1 with 10 calibration levels. The calibration curves were drawn and satisfactory determination coefficients were obtained (r 2 >0.99).To verify the quantification, two calibration controls (lower and higher concentrations) were carried out every 6 samples and only a deviation lower than 20% was accepted. The limits of quantification (LOQ) for an extraction of 1 g of sample were between 0.06 and 0.12 µg g -1 (refer to supporting information S1). Experimental and analytical blanks were also monitored regularly to assess external contamination. The whole analytical procedure was validated using reference materials (SRM 1941a, NIST) for PAHs. For O-PAC and N-PAC analysis, no commercial reference material was available. So the laboratory took part to an intercomparison study on the analysis of O-PAC and N-PAC in contaminated soils [START_REF] Lundstedt | First intercomparison study on the analysis of oxygenated polycyclic aromatic hydrocarbons (oxy-PAHs) and nitrogen heterocyclic polycyclic aromatic compounds (N-PACs) in contaminated soil[END_REF]). The methodology was then adapted to sediment and SPM. The recoveries of external standards, added in each sample, were checked and the quantification was validated if it ranged between 60 and 125 % (refer to supporting information S2). Results Sampling campaign characteristics The global parameters (Table 1) exhibited different hydrological situations in the successive sampling campaigns. May 2014 and October 2015 corresponded to the lowest flow conditions with a daily mean water discharge around 1.5 m 3 s -1 and rather high water temperature (13 to 17°C). The turbidity and the SPM contents were respectively lower than 3 NTU and 6 mg L -1 . The water discharge in May 2015 ranged between 8 and 21 m 3 s -1 with turbidity values around 30 NTU and SPM contents from 16 to 54 mg L -1 . Higher water discharges, although rather moderate compared to that of a biennial flood reaching 130 m 3 s -1 , were observed in November 2014 (22 m 3 s -1 the first day and 51 m 3 s -1 the second day) and February 2015 (50 m 3 s -1 ). The highest flow rates of 120 m 3 s -1 were observed during the flood of May 2016. The highest turbidity and SPM content were observed during the first high flow of the season the 5 th of November 2014 (109 NTU and 122 mg L -1 ) and during the flood of May 2016 (94 NTU and 90 mg L -1 ). The POC ranged between 3 and 12.5 mg g -1 and the highest value was recorded during the low flow event of May 2014. DOC varied between 4 and 11 mg L -1 with the highest DOC observed in May 2015 and the 5 th November 2014. Concerning, the grain size distribution of raw waters, the decile D50 was shown to vary very slightly from 5 to 15 µm for the different reported campaigns. The lowest value of the D50 (≈ 5µm) was measured in February 2015 during a high flow event (Table 1 andFig 2). 
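The quantification and quality-control logic described above can be sketched as follows. The scaling from detector response to a sample concentration is an assumption made for illustration (the internal-standard correction is not shown), and all numeric inputs are placeholders rather than values from the study.

import numpy as np

def fit_calibration(levels_ug_ml, responses):
    # linear fit of detector response vs concentration (10 levels, 0.06-9.6 µg mL-1)
    slope, intercept = np.polyfit(levels_ug_ml, responses, 1)
    r2 = np.corrcoef(levels_ug_ml, responses)[0, 1] ** 2
    assert r2 > 0.99, "calibration curve rejected (r2 <= 0.99)"
    return slope, intercept

def control_ok(measured, nominal, tol=0.20):
    # calibration control accepted only if it deviates by less than 20 %
    return abs(measured - nominal) / nominal < tol

def concentration(response, slope, intercept, extract_volume_ml, sample_mass_g, loq_ug_g=0.06):
    # µg per g of dry SPM, censored below the LOQ (0.06-0.12 µg g-1 for 1 g extracted)
    conc_extract = (response - intercept) / slope        # µg mL-1 in the final extract
    value = conc_extract * extract_volume_ml / sample_mass_g
    return value if value >= loq_ug_g else None          # reported as < LOQ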
The measured particle size distributions covered relatively narrow ranges from 1.5 to 102 µm and the increases of the flow regime resulted in a clear increase of particle loading (SPM content) with no strong shift of particle size. PAC concentration and distribution in SPM FT and SPM CFC Table 2 displays the contents in PAHs, O-PACs and N-PACs and the main characteristics of their distribution according to the sampling methods (CFC and FT) and to the sampling dates and locations. The comparison of the 16 PAH concentrations in SPM FT and SPM CFC revealed a contrasted effect of the separation techniques. When the whole set of data is considered, the sum of the 16 PAH concentrations in SPM CFC ranged between 2 and 27.7 µg g -1 with a median value at 4 µg g -1 . Despite a high value measured at JOAB site on May 2014, the range of variations was rather narrow, the 1 st and 3 rd quartiles being at 3.6 and 5.3 µg g -1 respectively (Fig 3a). For SPM FT samples, the PAH concentrations varied between 1.3 and 39 µg g -1 with a median value at 18.4 µg g -1 but a higher dispersion of the PAH concentrations was observed, the 1 st and 3 rd quartiles being at 7.2 and 3.4 µg g -1 respectively (Fig. 3b). For all samples, O-PAC and N-PAC concentrations were much lower than PAHs, accounting for 10 to 30% of the total PACs except for BETH site in May 2014. The differences of O-PAC concentrations in SPM FT and SPM CFC were also less contrasted. They were in a very close range, between 0 and 5.4 µg g -1 and 0.1 and 3.8 µg g -1 respectively in SPM FT and SPM CFC . However, as observed for PAHs, the dispersion of the O-PAC concentrations was higher in SPM FT than in SPM CFC (Fig 2c and2d). The discrepancies in PAC concentrations between SPM FT and SPM CFC appeared more clearly when sampling campaigns were distinguished. The ratios of the PAH concentrations in SPM FT and SPM CFC (Fig. 4a) and of the polar PAC concentrations in SPM FT and SPM CFC (Fig. 4b) were calculated for each sample in order to highlight the differences of concentration according to the separation method and the sampling campaign. The whole campaign of May 2015, JOAB in May 2014 and BARB in November 2014 provided comparable PAH concentrations in SPM FT and SPM CFC with a ratio close to 1. However, all the samples of February 2015 and May-June 2016, BETH in May 2014 and JOAB and BETH in November 2014 provided higher concentrations of PAHs with concentrations in SPM FT two to eleven times higher than in SPM CFC (Fig. 4a). The comparison of polar PAC concentrations in SPM FT and SPM CFC also revealed discrepancies between the two sampling methods (Fig 3b). In February 2015 and May-June 2016, polar PACS were six to thirty times more concentrated in SPM FT than in SPM CFC . In May 2014, polar PAC concentrations were fifteen times higher in SPM FT at BETH than in SPM CFC . The distribution of individual PAHs was also strongly and diversely affected by the method of sampling. In SPM CFC , the 4 to 6 ring-PAHs were easily detected and well represented in the distribution, representing 50 to 90% of the all PAHs (except in May 2014), even though they could vary in abundance according to the sampling campaign (Table 2). In SPM FT , 2 to 3 ring-PAHs were systematically more represented than in SPM CFC and accounted for 40 to 70% of the total PAH concentration except during the November 2014 and May 2015 campaigns. 
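The SPM_FT/SPM_CFC comparison summarised above boils down to per-sample concentration ratios and simple distribution statistics. The sketch below uses a few of the Σ16 PAH values reported in Table 2 (the site_date keys are shorthand) and the standard-library statistics module for the quartiles; in practice the full sample set would be used.

import statistics

# Σ16 PAH concentrations (µg g-1) for a few samples quoted in Table 2
spm_cfc = {"JOAB_May14": 27.69, "BETH_May14": 3.99, "AUB_Feb15": 2.54}
spm_ft  = {"JOAB_May14": 31.24, "BETH_May14": 19.78, "AUB_Feb15": 23.53}

ratios = {k: round(spm_ft[k] / spm_cfc[k], 1) for k in spm_cfc}
print(ratios)   # {'JOAB_May14': 1.1, 'BETH_May14': 5.0, 'AUB_Feb15': 9.3}

# quartiles of the SPM_FT concentrations (with the full sample set in practice)
q1, med, q3 = statistics.quantiles(sorted(spm_ft.values()), n=4)
print(f"SPM_FT sum16PAH: Q1={q1:.1f}, median={med:.1f}, Q3={q3:.1f} µg g-1")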
The ratio of each individual PAH concentration in SPM FT over its concentration in SPM CFC was plotted against the log K ow of each PAH for all the samples (Fig. 5). The ratio is close to 1 for the PAHs with log K ow higher than 5.5 having more than 4 aromatic rings whereas it can vary from 0.5 to 38 for PAHs with 2 to 4 aromatic rings (log K ow <5.2). The highest differences were observed in Feb 2015 and May-June 2016 and to a lesser extent in Nov 2014, May 2014 and October 2015. Thus, it appeared that the 2 to 3-ring PAHs and to a lesser extent the 4-ring PAHs were the molecules the most affected by the sampling methods. In the same way, any time we observed a significant difference of O-PAC concentration between the two sampling methods (February 2015 and May-June 2016), it could be attributed to a higher concentration in low molecular weight O-PACs, composed of three rings, mainly dibenzofuran, fluorenone and anthraquinone (Table 2). Values of common PAH molecular ratios were compared (Table 2 and Fig. 6). Only, the ratios based on 3 and 4 rings could be calculated in SPM FT and compared to SPM CFC . Whatever the sampling method, the values of Flt/(Flt+Pyr) were found within a quite narrow range of 0.5-0.66 assigned to pyrogenic inputs. The values of Ant/(Ant+Phe) evolved between 0.07 and 0.37 in SPM CFC placing most of the samples in the pyrogenic domain and showing a variation in the contribution of these compounds according to the hydrology. Except for October 2015, the ratios Ant/(Ant+Phe) in SPM FT ranging between 0.06 and 0.17, were systematically lower than in the equivalent SPM CFC suggesting an influence of petrogenic PAHs. PACs in the filtered SPM of the inlet and outlet waters of the CFC The analyses of the matter collected by filtration of the inlet waters of CFC (SPM FT In-CFC ) and the matter collected by filtration of the outlet waters of the CFC (SPM FT Out-CFC ) allowed to better understand the partitioning of SPM in the CFC and then by the filtration process. This test was carried out at AUB, RICH and BETH in May and June 2016. The quantification of the SPM collected by filtration of the inlet and outlet waters showed that CFC allowed to recover 80% of the SPM contained in the inlet waters (Table 3). In the three tests, as already described in the previous paragraphs, the PAH concentrations are six to eleven times higher in SPM FT than in SPM CFC . The PAH concentration in the residual SPM collected by filtration of the CFC outlet waters (SPM FT Out-CFC ) is as high or even twice higher than in the inlet water. The PAH distributions displayed at figure 7 showing that the contribution of the fine matter collected in the outlet waters largely contributes to the SPM collected in the total SPM collected on filters (figure 7). Discussion Our results show that the two methods of SPM collection strongly influence not only the concentration but also the distribution of PACs. PACs in SPM CFC were found in a narrow range of concentrations independently of the sampling location and the hydrological situation. The PAH distributions were dominated by 4 to 6 ring-PAHs. On the contrary, in SPM FT , the spreading of PAH concentrations was much higher, and the PAH distribution was dominated by low molecular weight compounds when a noticeable discrepancy was observed compared to SPM CFC . 
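A small helper for the diagnostic molecular ratios used above; the pyrogenic/petrogenic cut-offs (about 0.1 for Ant/(Ant+Phe) and 0.5 for Flt/(Flt+Pyr)) follow the commonly cited Yunker et al. (2002) boundaries, and the input concentrations are hypothetical.

def diagnostic_ratios(ant, phe, flt, pyr):
    # Ant/(Ant+Phe) and Flt/(Flt+Pyr), with the usual Yunker et al. (2002) cut-offs
    r1 = ant / (ant + phe)
    r2 = flt / (flt + pyr)
    origin_r1 = "pyrogenic" if r1 > 0.1 else "petrogenic"
    origin_r2 = "pyrogenic" if r2 > 0.5 else "petrogenic or mixed"
    return round(r1, 2), round(r2, 2), origin_r1, origin_r2

# hypothetical concentrations (µg g-1) for one SPM sample
print(diagnostic_ratios(ant=0.12, phe=0.55, flt=0.80, pyr=0.62))
# -> (0.18, 0.56, 'pyrogenic', 'pyrogenic')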
From reported results in literature, a non-exhaustive inventory of PAH concentrations and distributions according to the SPM collection method, regardless of the spatial and hydrological context was summarized on Table 4. This inventory shows that the concentrations remain in a relatively narrow range (the maximum concentration does not exceed five times the minimum concentration) when the SPM collection method is CFC [START_REF] Wölz | Impact of contaminants bound to suspended particulate matter in the context of flood events[END_REF][START_REF] Meur | Spatial and temporal variations of Particulate Organic Matter from Moselle River and tributaries: A multimolecular investigation[END_REF] or sediment traps [START_REF] Zhang | Size distributions of hydrocarbons in suspended particles from the Yellow River[END_REF][START_REF] Chiffre | PAH occurrence in chalk river systems from the Jura region (France). Pertinence of suspended particulate matter and sediment as matrices for river quality monitoring[END_REF] or pressure-enhanced filtration system [START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF][START_REF] Ko | Seasonal and annual loads of hydrophobic organic contaminants from the Susquehanna River basin to the Chesapeake Bay[END_REF]. When the SPM are collected by filtration, the PAH concentration range can be really more highly spread from one to 40 times [START_REF] Deng | Distribution and loadings of polycyclic aromatic hydrocarbons in the Xijiang River in Guangdong, South China[END_REF]Guo et al., 2007;[START_REF] Luo | Impacts of particulate organic carbon and dissolved organic carbon on removal of polycyclic aromatic hydrocarbons, organochlorine pesticides, and nonylphenols in a wetland[END_REF][START_REF] Maioli | Distribution and sources of aliphatic and polycyclic aromatic hydrocarbons in suspended particulate matter in water from two Brazilian estuarine systems[END_REF][START_REF] Mitra | A preliminary assessment of polycyclic aromatic hydrocarbon distributions in the lower Mississippi River and Gulf of Mexico[END_REF][START_REF] Sun | Distribution of polycyclic aromatic hydrocarbons (PAHs) in Henan Reach of the Yellow River, Middle China[END_REF][START_REF] Zheng | Distribution and ecological risk assessment of polycyclic aromatic hydrocarbons in water, suspended particulate matter and sediment from Daliao River estuary and the adjacent area, China[END_REF]. One can argue that this variation might obviously depend on the river and the hydrological situation. However, if we compare the PAH distributions, it clearly appears that whenever the sampling method is filtration, 2 to 3-ring PAHs can largely dominate the distribution as indicated by the LMW/HMW ratios reported in Table 5. In SPM collected by CFC or sediment traps, the low molecular weight PAHs seldom dominate the distribution. Several studies have reported that filtration retains colloidal organic matter and their associated organic or metallic contaminants leading to an overestimate of compounds associated to particulate matter. [START_REF] Bates | Collection of Suspended Particulate Matter for Hydrocarbon Analyses: Continuous Flow Centrifugation vs. 
Filtration[END_REF] compared centrifugation and filtration to collect particulate matter from wastewaters and riverine waters and observed a systematic higher concentration of aliphatic hydrocarbons in SPM collected by filtration and a lower proportion of dissolved hydrocarbons in the filtered water compared to the centrifuged one. They attributed it to the adsorption of dissolved an colloidal matter on the glass-fiber filter and by the matter retained on its surface. [START_REF] Morrison | Filtration Artifacts Caused by Overloading Membrane Filters[END_REF] showed that membrane clogging during filtration of riverine waters induces the decline of dissolved cation concentrations in filtered waters. Our results show variations from a factor 2 to 9 for PAH contents and 2 to 30 for O-PAC contents when SPM are separated by filtration and could be explained by the retention of colloidal and fine matter on filters. [START_REF] Gomez-Gutierrez | Influence of water filtration on the determination of a wide range of dissolved contaminants at parts-per-trillion levels[END_REF] tested the adsorption of various organics on glass-fiber filters according to DOC values and salinity on synthetic waters. They showed an increase of the adsorption of the more hydrophobic PAHs (4 to 6 rings) with the increase of DOC and salinity but a lower adsorption of low molecular weight PAHs. In our natural waters, if we compare the PAC concentrations and distributions, we observed an opposite trend with low molecular weight PAHs being more concentrated in SPM FT than in SPM CFC . The higher concentration in SPM FT cannot be only related to PAH adsorption on filters but might be due to the retention of colloidal or fine particulate matter (few microns), organic and mineral, particularly enriched in low molecular weight PAHs. This hypothesis is strongly supported by the abundance of low molecular weight PACs highly concentrated in the matter not retained by the centrifuge but collected by filtration of the outlet waters. [START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF] showed that high molecular weight PAHs were rather associated to soot and particles from sediment resuspension whereas more volatile PAHs were associated to autochthonous organic matter. [START_REF] Wang | Monthly variation and vertical distribution of parent and alkyl polycyclic aromatic hydrocarbons in estuarine water column: Role of suspended particulate matter[END_REF] also observed enrichment in low molecular weight hydrocarbons in the finer grain-size fractions of their sediments. Surprisingly, those differences are not systematic and only occur in half of the collected samples. For the samples of February 2015 and May and June 2016, the highest differences between SPM FT and SPM CFC are concomitant of a high flow and the finest grain size distribution of SPM (D50 < 10 µm). In those cases, filtration allowed to recover most of the fine particles and colloids highly concentrated in PACs (18 to 43 µg g -1 ) while field centrifugation collected coarser matter with much lower PAC concentrations (2 to 5.5 µg g -1 ). Previous studies observed similar trends in other contexts. 
[START_REF] Wang | Monthly variation and vertical distribution of parent and alkyl polycyclic aromatic hydrocarbons in estuarine water column: Role of suspended particulate matter[END_REF] showed that small-size SPM (0,7 -3 µm) collected from estuarine and riverine waters were particularly enriched in PAHs compared to large-size SPM (>3 µm). [START_REF] El-Mufleh | Distribution of PAHs and trace metals in urban stormwater sediments: combination of density fractionation, mineralogy and microanalysis[END_REF] separated sediments from storm water infiltration basins into several density fractions and showed that the PAH amounts were 100 times higher in the lighter fractions than in the denser ones. In their study of colloids and SPM in river, [START_REF] Ran | Fractionation and composition of colloidal and suspended particulate materials in rivers[END_REF] showed the increasing content in organic carbon and ions with decreasing particle size and highlighted the importance of colloidal matter in the concentrations of micropollutants. However, in our set of data, no significant correlation could be observed between the high amount of PACs in SPM FT and global parameters such as particle grain-size, water discharge, organic carbon content of SPM, water conductivity or SPM amount. This comparison of PACs in SPM FT and SPM CFC allowed evidencing the crucial role of colloidal and fine particulate matter in the transfer of PACs. The predominant contribution of fine and/or colloidal matter in SPM FT in February 2015 and May-June 2016 campaigns revealed that this matter transfers mainly low molecular weight PACs compared to the coarser particulate fraction collected by SPM CFC . Also, the molecular ratios suggest a different origin for PACs in colloidal and fine matter with a higher contribution of petroleum products. This suggests that distinct transfer paths of PACs coexist in this river: the PACs associated to particulate matter with a quite homogeneous molecular signature assigned to combustion corresponding to diffuse pollution in the catchment and the PACs associated to colloidal and fine matter with a more variable molecular signature that could be assigned to petrogenic contribution and could enter the river as a point-source during specific hydrological events. Conclusions Filtration on glass fiber filters (0.7 µm), the most commonly used technique, is easy to handle, inexpensive, adapted to any field context and the separation between particulate and dissolved matter is based on particle size. In this study, we showed that this method might collect colloidal and fine matter that can significantly affect the amount of PACs measured in the SPM fraction inducing higher concentrations and distributions enriched in low molecular compounds. These differences were not systematic over the two-year period of our investigation in a small industrial river system. On the contrary, the second sampling technique we tested on the same samples, CFC, provided SPM a large amount of SPM collected out of important volumes of water (500 L) with PAC concentrations quite stable from one site to another and from one hydrological condition to another. PAC distributions were dominated by medium to high molecular weight compounds that allowed to calculate various diagnostic molecular ratios easier to interpret than with FT where the poor abundance of HMW PAC limited the interpretation of molecular ratios. 
Although filtration presents numerous advantages to collect SPM, one must be very careful in the interpretations of some variations that can also be attributed to the retention of some colloidal and fine matter, enriched in low molecular PACs, especially during high flow events. On the other hand, this allows to access to supplemental information on the nature of PACs transported by fine and colloidal matter. Thus, according to the sampling method, evaluation of PAC distribution between dissolved and particulate phase can be appreciably different. These results suggest that the choice of a SPM collection method is fundamental to comply with the objectives that one can defined for the monitoring of surface waters. 8 ]9H-fluorenone) before evaporation and the volume was adjusted to 100 µL with DCM. To improve the chromatographic resolution, the sample was derivatized by adding BSTFA in (1:1; v/v) and finally injected in 200 µL volume in the gas chromatograph-mass spectrometer (GC-MS). Analysis The instrument used was an Agilent 6890N gas chromatograph equipped with a DB 5-MS column (60 m × 0.25 mm i.d. × 0.25 µm film thickness) coupled with an Agilent 5973 mass selective detector operating in single ion monitoring mode. The molecules were detected with a quadrupole analyzer following ionization by electronic impact. The temperature program was the following: from 70 o C to 130°C at 15°C min -1 , then from 130 o C to 315 o C at 3 o C min -1 and then a 15 min hold at 315 o C. 1 µL of sample was injected in splitless mode at 300°C. The carrier gas was helium at 1.4 mL min -1 constant flow. are representative of those observed in most of the SPM FT and SPM CFC . PAH distributions in SPM CFC are characterized by the abundance of 4 to 6-ring PAHs whereas they are in very low abundance in SPM FT and not detectable in SPM FT Out-CFC . Thus, the CFC retains SPM containing low PAC concentrations made of high molecular weight compounds and the SPM not retained by the CFC but collected by the filtration of the outlet waters is highly concentrated in PACs mainly made of phenanthrene, fluoranthene and pyrene. The PAH distribution in SPM FT In-CFC and SPM FT Out-CFC are very similar Fig. 1 1 Fig. 1 Lower part of the Orne River catchment, showing the five selected sampling sites and the land cover and use (Map source: CORINE Land Cover, 2012). Fig. 2 2 Fig. 2 Grain size distribution deciles (d10, d50 and d90) of raw waters measured for the campaigns from November 2014 to May 2016. Fig. 3 3 Fig. 3 Box plots of ΣPAH, and ΣO-PAC concentrations in SPM CFC (a) and (c) and in SPM FT (b) and (d) for all samples. The boundaries of the box indicate the 25th and 75th percentiles; the line within the box marks the median; the + is the mean value; and values on the top and bottom of the box indicate the minimum and maximum of the distribution. Fig. 4 4 Fig. 4 Comparison of the ratios of PAH content in SPM FT over PAH content in SPM CFC (a) and of polar PAC content (11 O-PACs+ 5 N-PACs) in SPM FT over polar PAC content in SPM CFC (b) for each sample of the campaigns. Fig. 5 5 Fig. 5 Ratios of individual PAH concentration in SPM FT over their concentration in SPM CFC plotted against the log K ow of these PAHs. Black circles represent the campaigns of November 2014, February 2015 and May-June 2016, and white circles represent the other sampling campaigns. Fig. 6 Fig. 7 67 Fig. 6 Ant/(Ant+Phe) vs Flt/(Flt+Pyr) diagnostic ratios calculated in SPM CFC and SPM FT . 
Dashed lines represent the limits of petrogenic/ pyrogenic domains after Yunker et al. (2002) CFC and in the SPM collected by filtration of the waters entering the CFC (SPM FT In-CFC ) and of the waters collected at the outlet of the CFC (SPM FT Out-CFC ). This campaign was performed at AUB, RICH and BETH in May-June 2016. Table 2 PAC concentrations in SPM CFC and SPM FT (µg g -1 dw) and molecular ratios of PACs. <LQ: under the limit of quantification. 2 Date May 2014 Nov 2014 Feb 2015 May 2015 Oct 2015 May 2016 June 2016 Site JOAB BETH BARB JOAB BETH AUB BARB BETH AUB JOAB BETH RICH AUB RICH AUB RICH BETH PACs in SPM CFC Σ 16PAHs (µg g -1 ) 27.69 3.99 6.07 5.38 5.25 2.54 3.72 5.08 2.45 3.77 3.36 3.74 5.17 7.03 2.01 4.20 3.56 Σ O-PACs (µg g-1 ) 2.66 1.09 0.91 0.91 0.68 0.19 0.12 0.42 0.51 0.96 0.95 1.1 2.48 3.76 0.09 0.00 0.13 Σ N-PACs (µg g -1 ) 0.43 0.9 0.09 0.2 0.06 0.03 <LQ 0.05 <LQ 0.04 0.03 0.04 0.10 0.30 0.18 0.10 0.12 Σ All PACs (µg g -1 ) 30.77 5.99 7.07 6.49 5.98 2.76 3.84 5.55 2.96 4.77 4.35 4.88 7.75 11.09 2.28 4.30 3.81 Ant/(Ant+Phe) 0.15 0.18 0.24 0.3 0.37 0.21 0.18 0.15 0.32 0.32 0.34 0.36 0.07 0.20 0.20 Flt/(Flt+Pyr) 0.66 0.6 0.57 0.56 0.56 0.57 0.58 0.62 0.54 0.56 0.55 0.56 0.53 0.51 0.56 0.56 0.56 BaA/(BaA+Ch) 0.52 0.53 0.53 0.54 0.56 0.57 0.57 0.5 0.52 0.52 0.53 0.49 0.49 0.46 0.50 0.50 IP/(IP+Bghi) 0.63 0.55 0.55 0.57 0.6 0.62 0.62 0.52 0.53 0.54 0.54 0.55 0.52 0.51 0.51 0.51 2-3 ring PAHs (%) 85% 50% 20% 19% 14% 19% 36% 43% 9% 13% 11% 12% 21% 29% 13% 14% 16% 3-rings O-PACs (%) 100% 46% 34% 31% 15% 42% 100% 74% 36% 22% 20% 20% 21% 29% 100% 100% PACs in SPM FT Σ 16PAHs (µg g -1 ) 31.24 19.78 7.18 10.26 10.97 23.53 25.37 34.43 1.3 1.34 1.77 2.58 23.77 13.64 18.44 25.98 39.06 Σ O-PACs (µg g-1 ) 3.25 0.89 0.94 0.97 0.72 2.55 3.36 3.73 0.09 0.19 <LQ 0.11 3.30 5.43 1.89 4.07 3.93 Σ N-PACs (µg g -1 ) 1.78 29.41 0.12 0.11 0.12 0.11 0.26 0.14 0.16 0.23 <LQ 0.45 <LQ <LQ <LQ <LQ 0.15 Σ All PACs (µg g -1 ) 36.27 50.08 8.24 11.34 11.81 26.2 28.98 38.3 1.55 1.77 1.77 3.14 27.07 19.07 20.33 30.05 43.14 Ant/(Ant+Phe) 0.07 0.15 0.1 0.13 0.12 0.15 0.11 0.06 0.17 0.09 0.11 0.09 Flt/(Flt+Pyr) 0.53 0.59 0.6 0.62 0.61 0.65 0.56 0.61 0.58 0.57 0.58 0.61 0.59 0.53 0.52 BaA/(BaA+Ch) 0.55 0.6 0.59 0.59 0.59 0.59 0.54 0.59 IP/(IP+Bghi) 0.55 0.57 0.62 0.65 0.56 0.61 0.6 0.57 0.52 0.56 2-3 ring PAHs (%) 57% 33% 28% 31% 34% 79% 74% 84% 7% 25% 28% 24% 70% 55% 49% 49% 40% 3-ring O-PACs (%) 100% 100% 52% 63% 90% 96% 96% 100% 100% 100% 100% 100% 100% 100% 89% 100% Table 3 PAC concentrations in µg g -1 in the SPM 3 Acknowledgements The authors would like to thank Long-Term Ecosystem Research (LTER) France, Agence Nationale de la Recherche (ANR) project number ANR-14-CE01-0019, RésEAU LorLux and Region Lorraine through the research network of Zone Atelier Moselle (ZAM) for partially funding the work, the Syndicat de Valorisation des Eaux de l'Orne (SVEO) and the city of Moyeuvre for granting us access to the sampling sites. We thank ERASMUS MUNDUS for funding the PhD of M. Abuhelou. The final manuscript was also improved by the valuable suggestions of four reviewers.
44,039
[ "784633", "739381", "1246597", "15387", "13624" ]
[ "237201", "512447", "247127", "247127", "237201", "237201", "237201", "237201", "527641", "512447", "237201", "237201", "512447" ]
00176056
en
[ "shs", "sde" ]
2024/03/05 22:32:13
2007
https://shs.hal.science/halshs-00176056/file/Flachaire_Hollard_Luchini_06.pdf
Emmanuel Flachaire email: [email protected] Keywords: Anchoring, Contingent Valuation, Heterogeneity, Framing effects JEL Classification: Q26, C81, D71 , our method appears successful in discriminating between those who anchor and those who did not. An important result is that when controlling for anchoring -and allowing the degree of anchoring to differ between respondent groups -the efficiency of the double-bounded welfare estimate is greater than for the initial dichotomous choice question. This contrasts with earlier research that finds that the potential efficiency gain from the double-bounded questions is lost when anchoring is controlled for and that we are better off not asking follow-up questions. Résumé pour controler l'ancrage, nous montrons que la prise en compte d'une telle hétérogénéité permet d'obtenir des estimations plus précises que celles obtenues avec la prise en compte d'une seule offre. Ce résultat contraste avec ceux de la littérature, qui trouvent que le gain de précision obtenu avec la prise en compte d'une deuxième offre est en général perdu en présence d'ancrage significatif, à tel point qu'il vaut mieux ne pas proposer une deuxième offre. Introduction Anchoring is a general phenomenon put forward by [START_REF] Tversky | Judgment under uncertainty: Heuristics and biases[END_REF]: "In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient. That is, different starting points yield different estimates, which are biased toward the initial values. We call that anchoring". This anchoring problem affects, in particular, survey methods, designed to elicit individual willingness to pay (WTP) for a specific good. Among such surveys, by far the most popular one is the contingent valuation (CV) method. Roughly speaking, this method consists of a specific survey that proposes respondents to consider a hypothetical scenario that mimics a market situation. A long discussion has taken place that analyzes the validity of the contingent valuation method in eliciting individual willingness to pay 1 . In the dichotomous choice CV method, the presence of anchoring bias implies that, "confronted with a dollar figure in a situation where he is uncertain about an amenity's value, a respondent may regard the proposed amount as conveying an approximate value of the amenity's true value and anchor his WTP amount on the proposed amount" [START_REF] Mitchell | Using Surveys to Value Public Goods: The contingent Valuation Method[END_REF]. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] propose a model that takes into account the effect of anchoring. It turns out that there is an important loss of efficiency in the presence of substantial anchoring. The purpose of this paper is to address this issue. To the best of our knowledge, anchoring has always been considered as a phenomenon affecting the population as a whole. Little attention has been paid to the fact that some individuals may anchor their answers while others may not 2 . The assumption of homogeneous anchoring may be hazardous as it may lead to econometric problems. 
Indeed, it is well known in standard regression analysis that individual heterogeneity can be a dramatic source of misspecification and if it is not taken into account, its results can be seriously misleading. In the context of this paper, the presence of two groups or types of people (those who are subject to anchoring and those who are not), is a type of individual heterogeneity that could affect empirical results in CV surveys. The major issue is how to conceive a measurement of individual heterogeneity with respect to anchoring. In other words, if we assume that individuals are of two types, then the question is how can we identify these two distinct groups of people in practice? In this paper, we propose to develop a methodology that borrows tools from social 1 see Mitchell and Carson 1989, Hausman 1993, Arrow et al. 1993, Bateman and Willis 1999 2 Grether (1980) studies decisions under uncertainty and shows that, although the representativeness heuristic explains some of the individuals' behaviors, Bayesian updating is still accurate for other individuals. He suggests that, being familiar with the evaluation of a specific event (in his case, acquired through repeating evaluations in the experiment) leads to more firmly held opinions and, consequently, to a behavior more in line with standard economic assumptions. This is also what John List suggests when he compares the behavior of experienced subjects (through previous professional trade experiences) and unexperienced subjects [START_REF] List | Neoclassical theory versus prospect theory: Evidence from the marketplace[END_REF] psychology that will allow us to identify the two groups of people. Using the dichotomous choice model developed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], we control for anchoring for each group separately. A noticeable empirical result of our methodology is that when we allow the degree of anchoring to differ between those two groups, the efficiency of the double-bounded model improves considerably. This contrasts with previous research that finds that the efficiency gains from the double-bounded model are lost when anchoring is controlled for. The paper is organized as follows. In section 2, we review some possible sources of heterogeneity in the context of anchoring. Then, we concentrate on a particular form of heterogeneity and we present the methodology that we use to identify it in practice. In section 3, we extend the model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] in order to develop a specific econometric model with heterogeneous anchoring. Finally, in section 4, we apply our methodology and econometric model to a French dedicated CV survey. Conclusions are drawn in section 5. Conformism as a source of heterogeneity Heterogeneity can be defined in many different ways. In this section, we are interested in a form of heterogeneity linked to the problem of anchoring, that is to say involving the behavior of survey respondents induced by the survey itself. More precisely, we would like to investigate whether there is heterogeneity with respect to the degree of anchoring on the bid in the initial valuation question. Thus, a clear distinction should be made between heterogeneity that leads to different anchoring behaviors and heterogeneity that relates to WTP directly. 
The latter sort of heterogeneity can be treated, as in standard linear regression model, by the use of regressor variables in specific econometric models and is not related at all to the problem of anchoring. The type of heterogeneity we are interested in here, however, calls for treatment of a different nature. The economic literature on contingent valuation in particular, and on survey data in general, often mentions a particular source of heterogeneity. This source concerns the fact that some individuals may hold a "steadier point of view" than others. Alternatives versions are "more precise beliefs", "higher level of self-confidence", "well defined preferences", etc. . . A good example of such a notion is "one might expect the strongest anchoring effects when primitive beliefs are weak or absent, and the weakest anchoring effects when primitive beliefs are sharply defined" [START_REF] Green | Referendum contingent valuation, anchoring, and willingness to pay for public goods[END_REF]. It is quite clear that all these statements share some common feature. However it seems that economic theory lacks a precise definition of this, even if the notions mentioned are very intuitive. Thus, many authors are confronted with a "missing notion" since economic theory does not propose a clear definition of this type of human characteristic. Psychology proposes a notion of "conformism to the social representation" that could fill this gap. In order to test if an individual representation is a rather conformist one, we compare it to the so called "social representation". Individuals whose representation differs from the social representation could be considered as "non-conformists". The basic idea, supported by social psychology, is that individuals who differ from the social representation are less prone to be influenced3 . It leads us naturally to wonder if individuals that are less prone to be influenced are also less prone to anchoring. Before testing this last hypothesis with an econometric model, we develop a method to isolate "non-conformist" individuals. Method Individuals have, for each particular subject, a representation (i.e. a point of view). Representations are defined in a broad sense by social psychologists,4 since an individual representation is defined as a form of knowledge that can serve as a basis for perceiving and interpreting reality, as well as for orienting one's behavior. This representation may either be composed of stereotypes or of more personal views. The general principle that underlies the above methodology consists of detecting individuals who hold a representation of the object to be evaluated that differs from that of the majority. The methodology allows us to identify an individual who holds a representation which differs from the majority one. We restrict our attention here to a quantitative approach using an open-ended question. This is the usual way to gather quantitative information on an individual representation at low cost [START_REF] Vergès | Approche du noyau central: propriétés quantitatives et structurales[END_REF]. After cleaning the data, we use an aggregation principle in order to establish the majority point of view (which is a proxy for the so called social representation). Then it is possible to compare individual and social representations. Using a simple criterion, we sort individuals into two sub-samples. 
Those who do not differ from the majority point of view are said to be in conformity with the majority while the others are said to be different from the majority. The methodology consists of four steps summarized in the figure 1 and described in detail in what follows. Step 1: A representation question At a formal level, an individual representation of a given object is an ordered list of terms that one freely associated with the object. Such a list is obtained through open-ended questions such as "what does this evoke to you?". Step 2: Classification As mentioned above, an individual representation is captured through an ordered list of words. A general result is that the total number of different words used by the sample of individuals considered is quite high (say 100 to 500 depending on the complexity of the object). This imposes a categorization that puts together words that are close enough. This step is the only one which leaves the researcher with some degrees of freedom. After the categorization, each individual's answer is transformed into an ordered list of categories. It is then possible to express an individual representation as an ordered list Step 1 : A representation question What are the words which come to your mind when ... ? Result : individual lists of words and expressions Step 2 : Classification Coding words and expressions by "frame of reference" Result : Ordered lists of categories (incomplete) Step Those individual representations, namely ordered lists of words, could at a formal level be considered as an ordinal preference over the set X of possible categories. As the question that is used to elicit individual representation is open-ended, individual lists could be of various length. So, preferences could be incomplete. Those individual representations will in turn aggregate to form the social representation. Step 3: Aggregating representations Using a majoritarian device 5 , it is possible to proceed in a non ambiguous manner in order to identify the social representation on the basis of individual ones. A social representation, whenever it exists, will then be a complete and transitive order over the set X. An important property of the majority principle is that it may lead to non transitive social preferences, the so called Condorcet paradox. Indeed, X may be ranked before Y at the social level and Y ranked before another attribute Z with X not ranked before Z. 6Further results even show that the probability of getting a transitive social preference becomes very small as the number of elements in X grows. We will then consider the use of the majority principle as a test for the existence of a social representation: if a set of data leads to a transitive social representation, the social representation is coherent. Step 4: Segmentation Thanks to our previous results, it is possible to sort individuals according to the way they build their representations. In order to do so, we consider individuals who do not refer to the Condorcet winner (i.e. the top element of the social representation). Recall that preferences are incomplete, so that a typical individual preference does not display all of the elements of X, otherwise all individuals include the Condorcet winner in their preference. In practice, the Condorcet winner refers to elements obviously associated to the object, i.e among the very first words that come to mind when talking about the object. We are then left with two categories of individuals. 
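Steps 3 and 4 lend themselves to a compact implementation: build the pairwise majority margins from the (possibly incomplete) ordered lists, look for a Condorcet winner, and check for majority cycles. The category set and the toy lists below are hypothetical and much smaller than in the application.

from itertools import combinations, permutations

CATEGORIES = ["Fauna-Flora", "Landscape", "Isolation", "Nature"]

def majority_margin(lists, a, b):
    # (# respondents ranking a before b) minus (# ranking b before a);
    # respondents whose list omits a or b are not counted for that pair
    margin = 0
    for ranking in lists:
        if a in ranking and b in ranking:
            margin += 1 if ranking.index(a) < ranking.index(b) else -1
    return margin

def social_representation(lists):
    beats = {c: set() for c in CATEGORIES}
    for a, b in combinations(CATEGORIES, 2):
        m = majority_margin(lists, a, b)
        if m > 0:
            beats[a].add(b)
        elif m < 0:
            beats[b].add(a)
    winner = [c for c in CATEGORIES if len(beats[c]) == len(CATEGORIES) - 1]
    has_cycle = any(b in beats[a] and c in beats[b] and a in beats[c]
                    for a, b, c in permutations(CATEGORIES, 3))
    return winner, not has_cycle   # Condorcet winner (if any), transitivity flag

toy_lists = [["Fauna-Flora", "Landscape"],
             ["Fauna-Flora", "Nature", "Isolation"],
             ["Landscape", "Isolation"],
             ["Fauna-Flora", "Isolation"]]
print(social_representation(toy_lists))   # (['Fauna-Flora'], True)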
This leads to a breakdown of individuals into two sub-samples: the ones who did mention the Condorcet winner (conformists) and the ones who did not (non-conformists). Finally, one has a dummy variable that sorts individuals into two categories and that identifies individual heterogeneity. It remains for us to test if such a variable can indeed play a role in anchoring bias, based on a specific econometric model. We develop such a model in the next section. Econometric Models There exist several ways to elicit individuals' WTPs in CV surveys. The use of discrete choice format in contingent valuation surveys is strongly recommended by the work of the NOAA panel [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]. It consists of asking a bid to the respondent with a question like if it costs $x to obtain . . . , would you be willing to pay that amount? Indeed, one advantage of the discrete choice format is that it mimics the decision making task that individuals face in everyday life since the respondent accepts or refuses the bid proposed. One drawback of this discrete choice format is that it leads to a a qualitative dependent variable (the respondent answers yes or no) which reveals little about individuals' WTP. In order to gather more information on respondents' WTP, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF] proposed to add a follow-up discrete choice question to improve efficiency of discrete choice questionnaires. This mechanism is known as the double bounded model. This basically consists of asking a second bid to the respondent, greater than the first bid if the respondent asked yes to the first bid and lower otherwise. The key disadvantage of the double-bounded model is that individuals may anchor their answers to the second bid on the first bid proposed. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] show that, in the presence of anchoring bias, information provided by the second answer is lost such that the single bounded model can become more efficient than the double bounded model. In this section, we present these different models proposed in the literature: the single bounded, double bounded models and the [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] anchoring model. Finally, we develop an econometric model of anchoring that depends upon individual heterogeneity. Single bounded model Let us first consider W i , the individual i's prior estimate of his willingness to pay, which is defined as follows W i = x i (β) + u i (1) where the unknown parameters β and σ 2 are respectively a k × 1 vector and a scalar, where x i is a non-linear function depending on k independent explanatory variables. The error term u i are Normally distributed with mean zero and variance σ 2 . The number of observations is equal to n and the error terms u i are normally distributed with mean zero and variance σ 2 . In the single bounded mechanism, the willingness to pay (WTP) of the respondent i is not observed, but his answer to the bid b i is observed. The individual i answers yes to the bid offer if W i > b i and no otherwise. 
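For the single bounded model (1), P(yes_i) = Φ((x_i(β) − b_i)/σ), so the parameters can be recovered from a standard probit on the covariates and the negated bid. The following sketch simulates data and re-estimates it; the design, coefficient values and bid grid are illustrative only, and statsmodels is assumed to be available.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 218
x = sm.add_constant(rng.normal(size=(n, 2)))                 # intercept + two covariates
beta_true, sigma_true = np.array([100.0, 20.0, -15.0]), 40.0
wtp = x @ beta_true + rng.normal(scale=sigma_true, size=n)   # latent WTP, model (1)
bid1 = rng.choice([40.0, 80.0, 120.0, 160.0], size=n)        # first-bid design
yes1 = (wtp > bid1).astype(int)

probit = sm.Probit(yes1, np.column_stack([x, -bid1])).fit(disp=0)
sigma_hat = 1.0 / probit.params[-1]          # coefficient on -bid estimates 1/sigma
beta_hat = probit.params[:-1] * sigma_hat    # remaining coefficients estimate beta/sigma
print("estimated mean WTP:", (x @ beta_hat).mean())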
Double bounded model The double bounded model, proposed by [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF], consists of asking a second bid (follow-up question) to the respondent. If the respondent i answers yes to the first bid, b 1i , the second bid b 2i is higher and lower otherwise. The standard procedure, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF], assumes that respondents' WTPs are independent of the bids and deals with the second response in the same manner as the first discrete choice question, W 1i = x i (β) + u i and W 2i = W 1i (2) The individual i answers yes to the first bid offer if W 1i > b 1i and no otherwise. He answers yes to the second bid offer if W 2i > b 2i and no otherwise. [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF] compare the double bounded model with the single bounded model and show that the double bounded model can yield efficiency gains. Anchoring model The double bounded model model assumes that the same random utility model generates both responses to the first and the second bid. In fact, introduction of follow-up questioning can generate inconsistency between answers to the second and first bids. To deal with inconsistency of responses, [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]'s approach considers a model in which the follow-up question can modify the willingness to pay. According to them, respondents combine their prior WTP with the value provided by the first bid, this anchoring effect is then defined as follows W 1i = x i (β) + u i and W 2i = (1 -γ) W 1i + γ b 1i (3) where the parameter 0 ≤ γ ≤ 1. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] show that, when an anchoring bias exists, efficiency gains provided by the double-bounded model disappear. Information yielded by the answers to second bid is diluted in the anchoring bias phenomenon. Anchoring model with heterogeneity In the presence of individual heterogeneity, results based on standard regression can be seriously misleading if this heterogeneity is not taken into account. In the preceding anchoring model, [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] consider that all individuals are influenced by the first bid: the anchoring bias parameter γ is the same for all individuals. However, if only some respondents combine their prior WTP with the information provided by the first bid, the others not, it means that individual heterogeneity is present. Let us consider that we can divide respondents into two distinct groups: one subject to anchoring and another one not subject to anchoring. Then, we can define a new model as follows W 1i = x i (β) + u i and W 2i = (1 -I i γ) W 1i + b 1i I i γ (4) where I i is a dummy variable which is equal to 1 when individual i belongs to one group and 0 if he belongs to the other group. 
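To see what model (4) implies for the data, a toy simulation can generate second answers in which only respondents flagged as conformists (I_i = 1) shade their WTP toward the first bid. The anchoring strength, the share of conformists and the bid design below are illustrative.

import numpy as np

def second_answers(w1, bid1, gamma, conformist):
    # follow-up bid: doubled after a 'yes' to the first bid, halved after a 'no'
    bid2 = np.where(w1 > bid1, bid1 * 2.0, bid1 / 2.0)
    # model (4): W_2i = (1 - I_i*gamma) * W_1i + I_i*gamma * b_1i
    w2 = (1.0 - conformist * gamma) * w1 + conformist * gamma * bid1
    return (w2 > bid2).astype(int)

rng = np.random.default_rng(1)
n = 1000
w1 = rng.normal(110.0, 40.0, size=n)                     # prior WTP
bid1 = rng.choice([40.0, 80.0, 120.0, 160.0], size=n)
conformist = rng.binomial(1, 0.8, size=n)                # roughly a 4-in-5 conformist share
yes2_anchor = second_answers(w1, bid1, gamma=0.36, conformist=conformist)
yes2_none = second_answers(w1, bid1, gamma=0.36, conformist=np.zeros(n))
print("share of 'yes' to the follow-up bid:", yes2_anchor.mean(), "vs", yes2_none.mean())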
Note that, if I_i = 1 for all respondents, our model becomes the model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] and if I_i = 0 for all respondents, our model becomes the standard double bounded model. The model can also be defined with heterogeneity based on individual characteristics rather than two groups, replacing I_i by a variable X_i taking any real values. Estimation The dependent variable is a dichotomous variable: the willingness-to-pay W_i is unknown and only the answers are observed. Thus, estimation methods appropriate to a qualitative dependent variable are required. The single bounded model can be estimated with a standard probit model. Models with follow-up questions can easily be estimated by maximum likelihood using the log-likelihood function l(y, β) = Σ_{i=1}^{n} [ r_1i r_2i log P(yes, yes) + r_1i (1 − r_2i) log P(yes, no) + (1 − r_1i) r_2i log P(no, yes) + (1 − r_1i)(1 − r_2i) log P(no, no) ] (5) where r_1 (resp. r_2) is a dummy variable which is equal to 1 if the answer to the first bid (resp. to the second) is yes, and is equal to 0 if the answer is no. For each model, we need to derive the four response probabilities. For the double bounded model they are P(no, no) = P(W_i < b_2), P(no, yes) = P(b_2 < W_i < b_1), (6) P(yes, no) = P(b_1 < W_i < b_2), P(yes, yes) = P(W_i > b_2). (7) For the anchoring model with heterogeneity, we calculate these probabilities: P(no, no) = Φ[((b_2i − b_1i I_i γ)/(1 − I_i γ) − x_i(β))/σ] (8) P(no, yes) = Φ[(b_1i − x_i(β))/σ] − Φ[((b_2i − b_1i I_i γ)/(1 − I_i γ) − x_i(β))/σ] (9) P(yes, no) = Φ[((b_2i − b_1i I_i γ)/(1 − I_i γ) − x_i(β))/σ] − Φ[(b_1i − x_i(β))/σ] (10) P(yes, yes) = 1 − Φ[((b_2i − b_1i I_i γ)/(1 − I_i γ) − x_i(β))/σ] (11) where the second bid b_2i is higher than b_1i after a yes and lower after a no. The anchoring model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] is a special case with I_i = 1 for i = 1, . . . , n. The double bounded model is a special case with γ = 0. Application In order to test our model empirically, this article uses the main results of a contingent valuation survey which was carried out within a research program that the French Ministry in charge of environmental affairs started in 1995. The survey involves a sample of users of the natural reserve of Camargue 7 . Its purpose was to evaluate how much individuals were willing to pay to preserve the natural reserve through an entrance fee. The survey was administered to 218 recreational visitors during the spring of 1997, using face-to-face interviews. 7 The Camargue is a wetland in the south of France covering 75 000 hectares. It is a major French wetland and is host to many fragile ecosystems. The exceptional biological diversity is the result of water and salt in an "amphibious" area inhabited by numerous species. The Camargue is the result of an endless struggle between the river, the sea and man. During the last century, while the construction of dikes and embankments salvaged more land for farming to meet economic needs, it cut off the Camargue region from its environment, depriving it of regular supplies of fresh water and silt previously provided by flooding. Because of this problem and to preserve the wildlife, the water resources are now managed strictly. There are pumping, irrigation and draining stations and a dense network of channels throughout the river delta. However, the costs of such installations are quite large.
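A sketch of how the log-likelihood (5), with the probabilities (8)-(11), can be coded and maximised. Here x, b1, b2, r1, r2 and I stand for the design matrix, the two bids, the two answers and the conformist dummy, which would come from the survey data (not reproduced here); the reparametrisation of σ and γ is simply a convenient way to enforce σ > 0 and 0 < γ < 1.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def negloglik(theta, x, b1, b2, r1, r2, I):
    k = x.shape[1]
    beta = theta[:k]
    sigma = np.exp(theta[k])                       # keeps sigma positive
    gamma = 1.0 / (1.0 + np.exp(-theta[k + 1]))    # keeps gamma in (0, 1)
    mu = x @ beta
    t = (b2 - b1 * I * gamma) / (1.0 - I * gamma)  # effective threshold for the 2nd answer
    F_t = norm.cdf((t - mu) / sigma)
    F_b1 = norm.cdf((b1 - mu) / sigma)
    p_nn = F_t                                     # (8)
    p_ny = F_b1 - F_t                              # (9): after a 'no', b2 < b1
    p_yn = F_t - F_b1                              # (10): after a 'yes', b2 > b1
    p_yy = 1.0 - F_t                               # (11)
    p = (r1 * r2 * p_yy + r1 * (1 - r2) * p_yn
         + (1 - r1) * r2 * p_ny + (1 - r1) * (1 - r2) * p_nn)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# result = minimize(negloglik, theta0, args=(x, b1, b2, r1, r2, I), method="BFGS")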
Recreational visitors were selected randomly in seven sites all around the natural reserve. The WTP question used in the questionnaire was a dichotomous choice with follow-up. There was a high response rate (92.6 %) 8 . Conformists and Non-Conformists The questionnaire also contains an open-ended question related to the individual representations of the Camargue. This open-ended question yields the raw material to divide the respondent population into two groups: conformists and non-conformists. This is done using the methodology presented in section 2, through the following steps: Step 1: What are the words that come to your mind when you think about the Camargue? In the questionnaire, respondents were asked to freely associate words with the Camargue. This question was asked before the contingent valuation scenario in order not to influence the respondents' answers. Respondents used more than 300 different words or expressions in total. Step 2: A categorization into eight categories A basic categorization by frame of reference leads to eight different categories. For instance, the first category is called "Fauna and Flora". It contains all attributes which refer to the animals of the Camargue and the local vegetation (fauna, 62 citations, birds, 44, flora, 44, bulls, 37, horses, 53, flamingos, 36, etc.). The other categories are "Landscape", "Disorientation", "Isolation", "Preservation", "Anthropic" and "Coast". A particular exception is the category "Nature", which only contains the word nature, which can hardly fall into one of the previous categories. There is also a ninth category which puts together all attributes that do not fit into any of the categories mentioned above 9 . Step 3: Existence of a transitive social representation After consolidating the data in step 2, we were left with 218 incomplete preferences over the set X containing our eight categories. The majoritarian pairwise comparison results are presented in Table 1. The result between two categories should be read in the following way: the number in line i and column j is the difference between the number of individuals who rank category i before category j and the number who rank category j before i. For instance, we see that "Fauna and Flora" is preferred by a strong majority to "Isolation" (a net difference of 85 votes in favour of "Fauna and Flora"). After aggregation through the majoritarian principle, the social representation obtained here is transitive and therefore coherent. Step 4: Conformists and non-conformists The top element, namely the Condorcet winner, concerns all aspects relating to biodiversity 10 . This is not surprising since the main interest of the Camargue (as presented in all related commercial publications) is the "Fauna and Flora" category. Talking about the Camargue without mentioning any of those aspects is thus remarkable. Individuals who do so are considered as non-conformists (38 individuals), while those who do mention them are considered as conformists (180 individuals). Recall that the survey was administered inside the Camargue, after individuals had visited it. Thus, they were fully aware of the importance of fauna and flora in the Camargue. Not referring to those aspects is therefore not a matter of chance.
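The conformist dummy used in the estimations below can be built directly from the free-association lists: a respondent is coded as a conformist whenever at least one word of his list falls in the Condorcet-winner category. The keyword set here is a shortened, hypothetical version of the coding frame.

FAUNA_FLORA = {"fauna", "birds", "flora", "bulls", "horses", "flamingos"}

def is_conformist(answer_words):
    # 1 if the respondent cites the Condorcet-winner category, 0 otherwise
    return int(any(word.lower() in FAUNA_FLORA for word in answer_words))

answers = [["horses", "sea", "wind"], ["salt", "silence", "walking"]]
print([is_conformist(a) for a in answers])   # [1, 0]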
Econometric results We consider the dummy variable conformists/non-conformists, obtained with the four steps described above, and estimate the different models described in section 3, using a linear model [START_REF] Mac Fadden | Issues in the contingent valuation of environmental goods: Methodologies for data collection and analysis[END_REF]. In practice, a value of particular interest is the mean WTP, evaluated by μ̂ = n^{-1} Σ_{i=1}^{n} x_i(β̂) (12) and the estimated dispersion of the WTPs is equal to d = σ̂ [START_REF] Hanemann | The statistical analysis of discrete response CV data[END_REF]. Table 2 presents the estimated means of WTP μ̂, as defined in (12), and the dispersions of the WTP distributions σ̂ for the single bounded, double bounded, anchoring and anchoring-with-heterogeneity models. From this Table, it is clear that the standard errors, in parentheses, decrease considerably when one uses the usual double-bounded model (column 2) instead of the single bounded model (column 1). This result confirms the expected efficiency gains provided when the second bid is taken into account [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF]. However, the estimates of the mean WTP in the two models are very different; such inconsistent results lead us to consider that an anchoring effect could be present, as suggested by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]. Then, we estimate a model with an anchoring effect, as defined in (3). Results, given in column 3, show that the anchoring parameter, γ = 0.52, is significant (P-value = 0.0124). This test confirms the existence of an anchoring effect in the respondents' answers. When correcting for the anchoring effect, the mean WTP belongs to the confidence interval [118; 136], which intersects the confidence interval of the single bounded model: results are now consistent. However, standard errors increase considerably, so that, even if follow-up questioning increases the precision of parameter estimates (column 2), efficiency gains are lost once the anchoring effect is taken into account (column 3). According to this result, "the single-bounded approach may be preferred when the degree of anchoring is substantial" (Herriges and Shogren, 1996, p.124). Using the distinction between conformists and non-conformists, we now tackle the assumption of homogeneous anchoring. We first estimate a more general model than (4), with two distinct anchoring parameters for the two groups, conformists and non-conformists respectively. This is done by replacing W_2i in (4) with W_2i = [1 − I_i γ_1 − (1 − I_i) γ_2] W_1i + [I_i γ_1 + (1 − I_i) γ_2] b_1i. It allows us to test whether non-conformists are not subject to anchoring, with the null hypothesis γ_2 = 0. The likelihood ratio test statistic is equal to 1.832 (P-value = 0.1759), so that we cannot reject the null hypothesis and we therefore select the model (4), where anchoring only affects the conformists. Estimates of the model, where only conformists are subject to anchoring, are given in column 4. The anchoring parameter, γ = 0.36, is clearly significant (P-value = 0.005). In other words, conformists combine their prior WTP with the information provided by the first bid, whereas non-conformists do not. In addition, the confidence interval of the mean WTP in the model with anchoring and heterogeneity is equal to [93; 106]. This interval intersects the confidence interval in the single bounded model [104; 123] and so results are consistent.
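The likelihood-ratio test reported above (H0: γ_2 = 0) is the usual statistic LR = 2(logL_U − logL_R) compared with a chi-square distribution with one degree of freedom. The two log-likelihood values below are placeholders chosen only so that the statistic reproduces the reported 1.832.

from scipy.stats import chi2

def lr_test(loglik_unrestricted, loglik_restricted, df=1):
    stat = 2.0 * (loglik_unrestricted - loglik_restricted)
    return stat, chi2.sf(stat, df)

stat, pval = lr_test(loglik_unrestricted=-250.084, loglik_restricted=-251.0)
print(f"LR = {stat:.3f}, p-value = {pval:.4f}")   # LR = 1.832, p-value ~ 0.176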
These results show that the estimate of the mean WTP is smaller and more precise in the anchoring model with heterogeneity than in the single-bounded model. Table 3 presents the full estimation results. It is worth noting that the introduction of heterogeneity provides a better estimation, since many variables are now statistically significant. Indeed, the heterogeneous model exhibits six significant variables. This contrasts with the single-bounded model, which exhibits only one significant variable. Our results therefore suggest that when anchoring is understood as a heterogeneous process, one obtains significant efficiency gains. Furthermore, these gains are so large that the welfare estimates can be calculated using the anchoring model with heterogeneity rather than the single-bounded model. This contradicts the result of [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], who use a homogeneous anchoring model and observe substantial efficiency losses. Conclusion In this article, we follow a line of argument suggesting that anchoring exists but is not uniformly distributed across the population. To that end, we present a method able to identify respondents who are more likely to anchor, and respondents who are not, on the basis of a single open-ended question designed to elicit free associations. Depending on the answers, we discriminate between two groups of individuals, namely conformists and non-conformists. While the former respond in more standard terms, the latter give more individualistic answers. We therefore show that it is possible to control for anchoring bias. The interesting aspect for CV practitioners is that we still obtain efficiency gains over the single-bounded dichotomous choice by exploiting the heterogeneity in anchoring effects. This result stands in contrast to [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], who propose a model with homogeneous anchoring throughout the population and find important losses of efficiency with respect to the single-bounded model. Finally, how can we explain that non-conformists are less prone to anchoring? More investigation is required to answer this question. Our suggestion is that non-conformists already have a much more elaborate view of the subject, which does not conform to the "stereotypical" representation of the Camargue. They are not citing the most "obvious" reasons why they are visiting the Camargue (fauna, birds, horses, flamingos, etc.), but have a more "constructed" discourse, which reflects their own personal opinion of the Camargue. In that sense, we identify people with more "experience" of their subject, which may give rise to stronger opinions and preferences. Arguably, people with enhanced preferences are more likely to behave according to standard economic rationality. This means that, in our setting, non-conformists attach much more importance to their own prior value of the object and are not influenced by the bid values presented to them in the CV questionnaire. The general line of thought parallels experimental findings, which show that experienced subjects are more likely to conform to standard economic rationality.
While one can rely on repetition in an experimental setting [START_REF] Grether | Bayes rule as a descriptive model: The representativeness heuristic[END_REF], or on clearly identified experienced subjects [START_REF] List | Neoclassical theory versus prospect theory: Evidence from the marketplace[END_REF], to come up with this conclusion, we associate "repetition" and "experience" with non-conformist representations of the subject under consideration.

Figure 1: Methodology. (Step 3: Aggregation — majority voting as the aggregation principle; result: test for transitivity of the social representation and identification of the Condorcet winner. Step 4: Segmentation — distinguishing sub-populations in the sample; result: identification of conformists and non-conformists. Final result: conformity as a dummy variable.)

Table 1: Majoritarian pairwise comparison

                  F-F   Land.  Isol.  Preserv.  Nat.  Anth.  Disor.  Coast
Fauna-Flora        0     40     85      73      107    147    146     144
Landscape          -      0     48      53       86    117    123     126
Isolation          -      -      0       6       47     56     78      73
Preservation       -      -      -       0       25     51     62      65
Nature             -      -      -       -        0     14     11      28
Anthropic          -      -      -       -        -      0      9      17
Disorientation     -      -      -       -        -      -      0      12
Coast              -      -      -       -        -      -      -       0

Table 2: Parameter estimates in French Francs (standard errors in parentheses)

Table 3: Parameter estimates, standard errors in parentheses (⋆: significant at 95%)

Variables                     Single-bounded model   Anchoring model   Anchoring model with heterogeneity
Constant                      35.43 (57.27)          83.57 (68.43)     61.16 (44.18)
Distance home-natural site    9.30 (5.30)            7.07 (4.45)       ⋆ 4.67 (2.17)
Using a car to arrive         -61.71 (41.08)         -79.47 (49.04)    ⋆ -58.22 (26.81)
Employee                      ⋆ 95.86 (46.86)        84.27 (49.09)     ⋆ 65.36 (27.77)
Middle class                  109.96 (63.60)         99.89 (56.95)     ⋆ 74.66 (28.96)
Inactive                      52.58 (38.44)          57.12 (40.87)     48.80 (27.99)
Working class                 97.28 (68.29)          81.27 (81.66)     62.00 (53.27)
White collar                  80.33 (42.16)          78.88 (44.24)     ⋆ 59.66 (24.65)
Visiting with family          4.71 (29.61)           12.79 (31.36)     13.01 (22.71)
Visiting alone                61.11 (101.67)         122.37 (95.03)    89.18 (52.97)
Visiting with a group         44.79 (47.90)          3.70 (46.24)      4.22 (32.65)
First visit                   51.42 (35.29)          18.56 (23.50)     15.59 (16.31)
New facilities proposed       56.93 (32.12)          57.29 (33.06)     ⋆ 41.94 (15.59)
Other financing proposed      -32.03 (27.60)         -28.19 (21.84)    -19.01 (12.87)
South-West                    -24.18 (33.57)         -42.04 (40.61)    -28.48 (24.24)
South-East                    42.04 (58.26)          52.72 (52.06)     40.73 (32.61)
Questionnaire type            -28.19 (23.34)         -13.15 (17.82)    -10.50 (11.97)
Investigator 1                23.44 (56.29)          6.12 (47.50)      8.26 (32.07)
Investigator 2                -17.12 (57.52)         -39.70 (54.49)    -29.92 (35.09)
Moscovici (1998a, 1998b) [START_REF] Moscovici | La psychanalyse, son image et son public[END_REF], [START_REF] Farr | From collective to social representations: Aller et Retour[END_REF], [START_REF] Viaud | A positional and representational analysis of consumption. households when facing debt and credit[END_REF]. The majority principle then consists of a pairwise comparison of each pair of attributes. For each pair (X, Y), the number of individuals who rank X before Y is compared to the number of individuals who rank Y before X. Individuals who do not cite either X or Y (since incomplete individual representations may exist) do not contribute to the choice between X and Y. In addition, when an individual cites X and not Y, X is considered superior to Y. See Laslier (1997) for details. See Claeys-Mekdade, Geniaux, and Luchini (1999) for a complete description of the contingent valuation survey. After categorization and deletion of doubles, the average number of attributes evoked by the respondents falls from 5.5 to 4.0. A full description of the data and more details are available in [START_REF] Hollard | Théorie du choix social et représentations : analyse d'une enquête sur le tourisme vert en camargue[END_REF]. This confidence interval is defined as [81.79 ± 1.96 × 2.41]. Acknowledgments This research was supported by the French Ministry in charge of environmental affairs. The authors wish to thank Louis-André Gérard-Varet for advice throughout this work. Thanks to Emmanuel Bec, Colin Camerer, Russell Davidson, Alan Kirman, André Lapied, Miriam Teschl and Jean-François Wen for their helpful and constructive comments. We also gratefully acknowledge the participants of the workshop Recent issues on contingent valuation surveys held in Marseilles (June 2003), especially Jason Shogren for his helpful comments and remarks. Errors are the authors' own responsibility.
39,899
[ "843051", "1331865", "742445" ]
[ "15080", "45168", "199934" ]
01760682
en
[ "shs" ]
2024/03/05 22:32:13
1996
https://hal.science/hal-01760682/file/2302-52.pdf
B Mikula Hélène Mathian Denise Pumain Lena Sanders Dynamic modelling and geographical information systems: for an integration
222
[ "2360", "1200345", "184463" ]
[ "145345", "43649", "43649" ]
01760693
en
[ "spi" ]
2024/03/05 22:32:13
2013
https://hal.science/tel-01760693/file/thesis_della_marca.pdf
Antonello Scanni Nando Basile Olivier Pizzuto Julien Amouroux He Guillaume Just Lorin Martin Olivier Paulet Lionel Bertorello Yohan Joly Luc Baron Marco Mantelli Patrick Poire Marion Carmona Jean-Sebastian Culoma Ellen Blanchet helped me for the thesis preparation in a satisfactory English :- General introduction Walking down the street, inside an airport or a university, it is impossible not to notice some people speaking or sending messages with their smartphones, others are painting a picture on their tablets, and all this is happening while we are transferring the data of our research from a smart card to a laptop. The wish to communicate and to keep all the information in our pocket, has lead to the development of embedded and portable device technology. Suddenly, with the coming of social networks, we need to exchange comments, articles, pictures, movies and all other types of data with the rest of the word, regardless of our position. In a "touch" we can access the information that always needs to be stored in larger quantity; not one single bit of what belongs to us must be lost and the devices must be extremely reliable and efficient. In this scenario the microelectronics industry is continuously evolving and never ceases to astonish. As a consequence, over the last decade, the market of semiconductor integrated circuits (IC) for embedded applications has exploded too. The request of customers commands the market of low energy consumption portable devices. Particular attention is paid to Flash memories that actually represent the most important media to store each type of data. Depending on application characteristics, different architectures and devices have been developed over the last few years in order to satisfy all the needs of customers. Size scaling, faster access time and lower energy consumption have been the three pillars of scientific research in micro and nano electronic devices over the last few years. Starting from these philosophical considerations we performed an experimental study on silicon nanocrystal memory that represents one of most attractive solutions to replace the standard Flash floating gate device. The aim of this thesis is to understand the physical mechanisms that govern the silicon nanocrystal cell behavior, to optimize the device architecture and to compare the results found with the standard Flash to verify performance improvement. In the first chapter, we will present the economic context, the evolution and the working of EEPROM-Flash memories. Then, a detailed description of the technology, the functioning and their scaling limits will be provided. Finally we will expose the possible solutions to overcome these problems and the thesis framework. The second chapter will present the experimental setup and the methods of characterization used to measure the performances of silicon nanocrystal memory cell. Moreover the impact of relevant technological parameters such as: the nature of nanocrystals, silicon nitride presence, channel doping dose and tunnel oxide thickness, will be analyzed. A memory cell stack optimization is also proposed to match the Flash floating gate memory performance. In the third chapter the impact of main technological parameters on silicon memory cell reliability (endurance and data retention) is studied. 
The performance of silicon nanocrystal memories for applications functioning within a wide range of temperatures [-40°C; 150°C] is also evaluated reaching for the first time a 1Mcycles endurance with a 4V programming window. Finally the proposed optimized cell is compared to the standard Flash floating gate. Chapter four describes a new dynamic technique of measurement for the drain current consumption during the hot carrier injection. This enables the cell energy consumption to be evaluated when a programming operation occurs. This method is applied for the first time to the floating gate and silicon nanocrystals memory devices. A study on programming scheme and the impact of technological parameter is presented in this chapter. In addition the silicon nanocrystal and floating gate cells are compared. Finally we demonstrate that is possible to reach a sub-nanojoule energy consumption saving a 4V programming window. Finally in the chapter five the conclusion of this work will be analyzed in order to highlight the main experimental results. Moreover the basis for a future work will be presented. Introduction The aim of this first chapter is to present the economic context, the role and the evolution of non-volatile memories. In this context we will present the Flash floating gate device and the physical mechanisms used to transfer electric charge from and into the floating gate. Then the limits of this device and the existing solutions to overcome them will be introduced. In particular, we will focus on the silicon nanocrystal memory that represents the object of this thesis. The industry of semiconductor memories The market of non-volatile memories Over the last decade, the market of non volatile memories has been boosted, driven by the increasing number of portable devices (figure 1.1). All the applications require higher and higher performance such as: high density, low power consumption, short access time, low costs, and so on [Changhyun '06]. This is why the business of Flash memories gained market segments at the expense of other types of memory (figure 1.2). Although the market is growing continuously, the price of memory device is decreasing (figure 1.3). '08]. As the memory market enters the Gigabit and GHz range with consumers demanding ever better performance and more diversified applications, new types of devices are being developed in order to keep up with the scaling requirements for cost reduction. In this scenario, memories play an important role. The "ideal" memory should be a memory that retains the stored information even when it is not powered (non volatile) with a high integration density, that can be infinitely written/re-written (infinite endurance), with ultra high program/erase/read operations, and a zero energy consumption. Because the "ideal" device does not exist, different types of memories have been studied in order to develop one or more of these properties according to their final application [Masoero '12a] (figure 1.4 ). In the next section, the most important semiconductor memories will be summarized. [Zajac '10]. "Bit count" is the amount of data that can be stored in a given block. Memory classification There are various possibilities to classify semiconductor memories; one is to consider their electrical characteristics (figure 1.5). Volatile Memories: are fast memories that are used for temporary storage data since they lose the information when the power is turned off. We can divide them into two types: Static Random Access Memory (SRAM). 
The information is maintained as long as they are powered. They are made up of flip-flop circuitry (six transistors in a particular configuration). Because of its large number of components SRAM is large in size and cannot compete with the density typical of other kinds of memories. Dynamic Random Access Memory (DRAM). These memories lose the information in a short time. They are made up of a transistor and a capacity where the charge is stored. They are widely used in processors for the temporary storage of information. As the capacitor loses the charge, a refresh or recharge operation is needed to maintain the right state. Non-Volatile Memories: they retain the information even when the power is down. They have been conceived in order to store the information without any power consumption for a long time. This thesis concerns the study of charge storage non volatile memories that are a subgroup of the semiconductor memories. However it is important to remember that there are other devices were the information can be stocked. A very common storage device is the magnetic disk; its main drawback being the long access time and the sensitivity to magnetic fields. Another example of non-volatile memory is the CD technology developed in the late 1970s which uses an optical media that can be read fast, but necessitating a pre-recorded content. Here we will only describe the memory based on semiconductor technology: Read Only Memory (ROM). This is the first non-volatile semiconductor memory. It consists in a simple metal/oxide/semiconductor (MOS) transistor. Thus its cell size is potentially the smallest of any type of memory device. The memory is programmed by channel implant during the fabrication process and can never be modified. It is mainly used to distribute programs containing microcode that do not need frequent update (firmware). Programmable Read Only Memory (PROM). It is similar to the ROM memory mentioned above, but the programming phase can be done by the user. It was invented in 1956 and can constitute a cheaper alternative to the ROM memory because it does not need a new mask for new programming. Erasable Programmable Read Only Memory (EPROM). This memory could be erased and programmed by the user, but the erase has to be done by extracting the circuit and putting it under ultraviolet (UV) radiations. The particularity of this device is the presence of a "floating gate" between the control (top) and tunnel (bottom) oxides. In 1967 D. Khang and S. M. Sze proposed a MOS-based non-volatile memory based on a floating gate in a metal-insulator-metal-insulator-semiconductor structure [Kahng '67]. At the time, however, it was almost impossible to deposit a thin oxide layer (<5nm) without introducing fatal defects. As a consequence a fairly thick oxide layer was adopted and this type of device was developed for the first time at Intel by . Electrically Erasable Programmable Read Only Memory (EEPROM). In this memory both the write and erase operations can be electrically accomplished, without removing the chip from the motherboard. The EEPROM cell features a select transistor in series to each floating gate cell. The select transistor increases the size of the memories and the complexity of array organization, but the memory array can be erased bit per bit. Flash memory is a synthesis between the density of EPROM and the enhanced functionality of EEPROM. It looks like EEPROM memory but without the select transistor. Historically, the name comes from its fast erasing mechanism. 
Because of these properties and the new applications (figure 1.6) the flash memory market is growing at a higher average annual rate than DRAM and SRAM, which makes it today the most produced memory (figure 1.2). Depending on their applications, flash memories can used in two different architectures that we introduce here and we will describe in next section. NOR flash memory provides random memory access and fast reads useful for pulling data from the memory. The NAND, on the other hand, reads data slowly but has fast write speeds and high density. Figure 1. 6. Market trend of NAND Flash memories in portable applications [Bez '11]. Flash memory architectures Flash memories are organized in arrays of rows (word lines or WL) and columns (bit lines or BL). The type of connection determines the array architecture (figure 1.7). NOR: The NOR architecture was introduced for the first time by Intel in 1988. The cells are connected in parallel and in particular, the gates are connected together through the wordline, while the drain is shared along the bitline. The fact that the drain of each cell can be selectively selected enables a random access of any cell in the array. Programming is generally done by channel hot electron (CHE) and erasing by Fowler-Nordheim (FN). NOR architectures provide fast reading and relatively slow programming mechanisms. The presence of a drain contact for each cell limits the scaling to 6F 2 , where F is the smallest lithographic feature. Fast read, good reliability and relatively fast write mechanism make NOR architecture the most suitable technology for the embedded applications requiring the storage of codes and parameters and more generally for execution-in-place. The memory cells studied in this thesis will be integrated in a NOR architecture for embedded applications. NAND: Toshiba presented the NAND architecture development in 1987 in order to realize ultra high density EPROM and Flash EEPROM [Masuoka '87]. This architecture was introduced in 1989 and presented all the cells in series where the gates were connected by a wordline while the drain and the source terminals were not contacted. The absence of contacts means that the cell cannot be selectively addressed and the programming can be done only by Fowler-Nordheim. On the other hand, it is possible to reach an optimal cell size of 4F 2 , thus a 30% higher density than in NOR cells. In NAND architecture programming is relatively fast but the reading process is quite slow as the reading of one cell is done by forcing the cell in the same bit line to the ON state. The high density and the slow reading but fast writing speeds make NAND architecture suitable for USB keys, storing digital photos, MP3 audio, GPS and many other multimedia applications. Floating gate cell The floating gate cell is the basis of the charge trap memory. The understanding of the basic concepts and functionalities of this device are fundamental and studied in this thesis. In this part we will describe flash memory operations. The operation principle is the following (figure. 1.8a): when the cell is erased there are no charges in the floating gate and the threshold voltage (Vt) is low (Vte). On the contrary when the memory is programmed (or written) the injected charge is stored in the floating gate layer and the threshold voltage value is high (Vtp). To know the state of the memory (e.g. 
the amount of trapped charge), it is just necessary to bias the gate with a moderate read voltage (Vg) lying between Vte and Vtp and then determine whether the current flows through the channel (ON state) or not (OFF state). Figure 1.8: a) cell states for Q=0 and Q≠0; b) schematic cross section of a floating gate transistor. The model using the capacitances between the floating gate and the other electrodes is described in [Cappelletti '99]. The schematic cross section of a generic FG device is shown in figure 1.8b; the upper gate is the control gate (CG) and the lower gate, completely isolated within the gate dielectric, is the floating gate (FG). The simple model shown in figure 1.8b helps to understand the electrical behavior of a FG device. CFC, CS, CB, and CD are the capacitances between the FG and the control gate, source, drain, and substrate regions, respectively. The potentials are described as follows: VFG is the potential on the FG, VCG is the potential on the control gate, and VS, VD and VB are the potentials on source, drain, and bulk, respectively [Pavan '97]. Basic structure: capacitive model The basic concepts and the functionality of a FG device are easily understood if it is possible to determine the FG potential. Consider the case when no charge is stored in the FG, i.e., Q=0:

$0 = Q = C_{FC}(V_{FG} - V_{CG}) + C_S(V_{FG} - V_S) + C_B(V_{FG} - V_B) + C_D(V_{FG} - V_D)$ (1)

where VFG is the potential on the FG, VCG is the potential on the control gate, and VS, VD and VB are the potentials on source, drain, and bulk, respectively. We name

$C_T = C_{FC} + C_S + C_D + C_B$ (2)

the total capacitance of the FG, and

$\alpha_J = C_J / C_T$ (3)

the coupling factor relative to the electrode J, where J can be one of G, D, S and B. The potential on the FG due to capacitive coupling is then given by

$V_{FG} = \alpha_G V_{GS} + \alpha_D V_{DS} + \alpha_S V_S + \alpha_B V_B$ (4)

It should be pointed out that (4) shows that the FG potential does not depend only on the control gate voltage, but also on the source, drain, and bulk potentials [Pavan '97]. When the device is biased into conduction and the source is grounded, VFG can be written approximately as [Wu '92]:

$V_{FG} = Vt_{FG} + \alpha_G (V_G - Vt) + \alpha_D (V_D - V_{Dt})$ (6)

where αG and αD are the coupling factors, VtFG is the FG threshold voltage (i.e., the VFG value at which the device turns on), while VDt is the drain voltage used for the reading measurement. The control gate threshold voltage (Vt) obviously depends on the charge (Q) possibly stored in the FG and is typically given in the form:

$Vt = \frac{Vt_{FG}}{\alpha_G} - \frac{\alpha_D}{\alpha_G} V_{Dt} - \frac{Q}{C_{FC}}$ (7)

When (7) is substituted into (6), the following well-known expression for VFG is obtained:

$V_{FG} = \alpha_G V_G + \alpha_D V_D + \frac{Q}{C_T}$ (8)

In particular, the Vt shift (ΔVt) due to the programming operation is derived approximately as:

$\Delta Vt = Vt - Vt_0 = -\frac{Q}{\alpha_G C_T} = -\frac{Q}{C_{FC}}$ (9)

where Vt0 is the threshold voltage when Q=0. Equations (8) and (9) reveal the importance of the gate coupling factor (αG): (8) shows that a high αG induces a floating gate potential close to the applied control gate bias; consequently, the gate coupling ratio needs to be high to provide good programming and erasing efficiency. On the other hand, (9) indicates that a high αG reduces the impact of the stored charge on the programming window (ΔVt). The international roadmap for semiconductors [ITRS '12] indicates that the best trade-off is achieved with an αG between 0.6 and 0.7.
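As a minimal numerical sketch of this capacitive model (in Python), the snippet below evaluates the coupling factors, the floating-gate potential of equation (8) and the threshold-voltage shift of equation (9). The capacitance values and the number of stored electrons are hypothetical, chosen only to give an αG of about 0.65, in the range recommended by the ITRS; they are not data from the devices studied in this work.

```python
def coupling_factors(C_FC, C_S, C_D, C_B):
    """Return (alpha_G, alpha_D, C_T) from the four floating-gate capacitances."""
    C_T = C_FC + C_S + C_D + C_B
    return C_FC / C_T, C_D / C_T, C_T

def floating_gate_potential(V_G, V_D, Q, C_FC, C_S, C_D, C_B):
    """Eq. (8): V_FG = alpha_G*V_G + alpha_D*V_D + Q/C_T (source and bulk grounded)."""
    alpha_G, alpha_D, C_T = coupling_factors(C_FC, C_S, C_D, C_B)
    return alpha_G * V_G + alpha_D * V_D + Q / C_T

def threshold_shift(Q, C_FC):
    """Eq. (9): delta_Vt = -Q/C_FC; stored electrons (Q < 0) raise the threshold."""
    return -Q / C_FC

# Hypothetical example: fF-range capacitances giving alpha_G = 0.65,
# and 10,000 electrons stored in the floating gate.
C_FC, C_S, C_D, C_B = 0.65e-15, 0.05e-15, 0.10e-15, 0.20e-15   # farads
Q = -1.602e-19 * 10_000                                        # coulombs
print(floating_gate_potential(V_G=9.0, V_D=4.2, Q=Q,
                              C_FC=C_FC, C_S=C_S, C_D=C_D, C_B=C_B))  # ~4.7 V
print(threshold_shift(Q, C_FC))                                       # ~2.5 V
```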
Programming mechanisms We describe in this section the two main methods to program a Flash memory cell: Fowler-Nordheim (FN) [Fowler '28] and the channel hot electron (CHE) [Takeda '83]. Fowler-Nordheim programming The Fowler-Nordheim programming operation is performed by applying a positive high voltage on the control gate terminal (about 20V) and keeping source drain and bulk grounded (figure 1.9a). The high electric field generated through the tunnel oxide creates a gate current due to the FN tunneling of charge from the channel to the floating gate [Chang '83]. consequence, the floating gate potential decreases and hence, the electric field through the tunnel oxide decreases. The charge injection will continue until the cancellation of electric field in the tunnel oxide. This is due to the maximum drop potential through the interpoly dielectric layer (ONO). This operation is relatively slow (order of milliseconds), but the energy consumption can be considered negligible because no current flows in the channel. Channel Hot Electron programming This operation is done keeping bulk and source grounded and applying a positive high voltage on gate (order of 10V) and drain (order of 5V) terminals (figure 1.10). The electrons are first strongly accelerated in the pinchoff region by the high lateral electric field induced by the drain/source bias. Then the electrons that have reached a sufficiently high kinetic energy are injected into the floating gate thanks to the vertical electric field induced by the positive voltage applied on the gate electrode [Ning '78] [Takeda '85] [Chenming '85]. Programming by channel hot electron is faster than FN (few microseconds). Furthermore the CHE is efficiency poor (only a few electrons are injected over the total amount of electrons that flow from source to drain [Simon '84]), and consequently high power consumption is reached. We remember that this programming mechanism is the main one used in this work to characterize the memory cells. Erase mechanisms There are mainly four ways to erase the Flash floating gate cell; the schematic representations are shown in figure 1.11. Fowler-Nordheim erase As for the programming operation, the source, drain and bulk are generally kept grounded while a strong negative voltage (order of -15V) is applied to the gate terminal. In this case, electrons are forced to flow from the floating gate to the semiconductor bulk. This method is slow, but the erasing is uniform on the channel surface (figure 1.11a). This is the preferred mechanism to erase the memory cells in NOR architecture. In the next chapter we will discuss about its effect on studied samples. Hot Hole Injection (HHI) erase This mechanism consists in accelerating the holes produced by reverse biasing of drain/bulk junction and by injecting them into the floating gate thanks to the vertical electric field [Takeda '83]. Figure 1.11b shows that this is done by keeping the bulk and source grounded and biasing positively the drain (order of 5V) and negatively the gate (about -10V). HHI erasing method is fast, localized near the drain and could induce the SILC (Stress Induced Leakage Current) phenomenon more easily than the methods listed above. Source erasing This forces electrons to flow from the floating gate into the source junction by FN tunneling. This erasing is done by applying a positive voltage of about 15V on the source and keeping bulk and gate grounded (figure 1.11c). In order to prevent current through the channel, the drain is kept floating. 
There are three main drawbacks to this method the erasing is localized near the source, it needs a strong source/gate overlap and it requires the application of a high voltage on the source terminal. Mix source-gate erase This is a mix between the source and the FN erasing. Electrons are erased both through the source and the channel. The principle is to share the high voltage needed in the source erasing between the gate and the source electrodes. As a result a negative bias of about -10V is applied on the gate and a positive bias of about 5V on the source. Again, the drain is kept floating in order to prevent source to drain current (figure 1.11d). Evolution and limits of Flash memories We explained at the beginning of the chapter that the new applications have commanded the semiconductor market and the research development. Since the invention of flash memory cell, the progress on device architectures and materials has been huge. The "ideal" memory should have: • high density solution • low power consumption • compatibility with logic circuits and integration This is the final objective of semiconductor research. As the "ideal" device does not exist different types of memories have been invented in order to push some specific properties. For these reasons, as shown in figure 1.12, memory technology development did not pursue a single technology solution, but rather it has oriented in many different direction over time [Hidaka '11] [Baker '12]. Figure 1. 12. Evolution of Flash and embedded-Flash memory technology (left) [Hidaka '11]. Mapping of common eNVM architectures to the NVM byte count and critical characteristics (right) [Baker '12]. Here, a category for non volatile storage with absolute minimum cost/bit is also shown. In this section we will first introduce the device scaling and the related challenges and then we will present flash cell developments. It is worth noting that the solutions found for the flash memory cell can be used in embedded memory. In fact, even if in embedded memories there are minor constraints on the cell dimensions, research has always provided smaller nonvolatile memory for embedded applications that have to face-off with the same flash scaling limits. Device scaling During the last 30 years Flash cell size has shrunk from 1.5um to 25nm doubling the memory capacity every year. In '12]. White cell color: manufacturable solutions exist and are being optimized. Yellow cell color: manufacturable solutions are known. Red cell color: unknown manufacturable solutions. We can see that even if the trend is maintained and the cell scaled down in the years to come, some technological solutions are still not known. Moreover, scaling beyond the 28nm will be very difficult if no revolutionary technology is adopted. The main issues that limit device miniaturizing are: Stress Induced Leakage Current (SILC). During each erase/write cycle the stress degrades the tunnel oxide and the cell slowly loses its capacity to store electric charges (figure 1.13). Figure 1. 13. Experimental cumulative distribution functions of bits vs. threshold voltage, measured at different times after different P/E cycling conditions [Hoefler '02]. This phenomenon, increases as the tunnel oxide is thinned. This is due to the defects induced in the oxide by the electrons that passing through it during program/erase operations [Pavan '97] [Ling-Chang '06] [Hoefler '02] [Belgal '02] [Kato '94] [Chimenton '01]. 
Consequently, the retention depends on the number of cycles and on the tunnel oxide thickness, but the physical scaling of this latter is limited to 6-7 nm. Short Channel Effects (SCE). SCE appear when the gate length dimensions are so short that the gate control of the channel is lowered due to the influence of the source and drain potentials (figure 1.14). This parasitic effect produces the Drain Induced Barrier Lowering (DIBL) phenomenon [Yau '75], which results in threshold voltage decrease and the degradation of the subthreshold slope. Because of DIBL, the "OFF" current (IOFF) increases and the power consumption reaches values incompatible with the advanced technology node requirements [Brews '80] [Fichtner '80] [Yau '74] [Fukuma '77]. Moreover, elevated IOFF currents result in some disturb of the memory cell, especially in the erased state. The insert is the calculated boron profile below the silicon surface in the channel [Fichtner '80]. Disturb. We consider here the main disturb effects due to the programming and reading operations done on unselected cell of a NOR memory array. It is to be remembered that in this thesis work, the electrical characterizations are based on the principle that the memory cells will be integrated in a NOR architecture for embedded applications.  Programming disturb. This impacts the unselected cells in the same bitline and wordline of selected cell. In the first case a drain stress is produced and The cell scaling reduces the distances between the neighboring cells and the contacts. This means that parasitic capacitances have to be taken into account for the coupling factor calculation, and we will explain our model in chapter 4 section 2.2. Figure 1. 16. TEM pictures of STMicroelectronics 90nm NOR Flash (left) and Samsung sub-50nm NAND Flash right) [Kim '05]. Parasitic Charge Trapping. In scaled memories the reduction in the number of stored electrons leads to a higher influence of the parasitic charge trapping on threshold voltage shift [Prall '10]. Figure 1.17 shows various locations within a NAND cell, thus programmed and erased by FN, where the parasitic charge can be trapped. The results of a TCAD simulation show that with the memory scaling, the number of electrons located outside the floating gate starts to dominate the cell threshold voltage shift. We will see in following chapters, how this parameter impacts the memory cell behavior. The triangle shows the ±3σ percentage divided by the mean [Prall '10]. Alternative solutions In this section we will describe some of the envisaged modification to the classical flash memory cell in order to overcome the scaling limits presented in the previous section. Tunnel dielectric In a flash memory the tunnel dielectric has the double role of tunneling media during programming operations and electrostatic barrier in order to preserve the stocked charge. Moreover Interpoly material Maintaining a constant coupling ratio at a value of 0.6-0.7 is a great scaling challenge. The use of high-k dielectric in the interpoly dielectric is envisaged to reduce the total EOT while maintaining or even increasing the gate coupling ratio. The choice of the high-k must take into account that for most of them the high dielectric constant comes at the expense of a narrower band gap (figure 1.20). This narrowed band gap can cause leakage current during retention operation [Casperson '02]. 
In particular Alumina dielectric is employed in the TANOS (TaN/Al2O3/Si3N4/SiO2/Si) memory, proposed for the first time by Samsung in 2005 [Yoocheol '05], Despite the envisaged advantages, high-k materials are not as well known as the silicon oxide and they need further development before they can be integrated in the memory market. One of the main problems is that they inevitably introduce defects that can induce trap assisted conduction and degrade the memory operations . Silicon nanocrystal memory: state of the art The market of nonvolatile Flash memories, for portable systems, requires lower and lower energy and higher reliability solutions. The silicon nanocrystal Flash memory cell appears as one promising candidate for embedded applications. The functioning principle of discrete charge trapping silicon nanocrystal memories (Si-nc) is similar to floating gate devices. In this thesis we consider the integration of Si-nc memories in NOR architecture for embedded applications programmed by channel hot electron and erased by Fowler-Nordheim mechanisms. There are many are the advantages to using this technology: -Robustness against SILC and RILC (Radiation Induced Leakage Current), this enables to scale the tunnel oxide thickness to be scaled down to 5nm, while the ten year data retention constraint is guaranteed. Moreover the operation voltages can be decreased too [Compagnoni '03] [Monzio Compagnoni '04]. Further improvements can be achieved using cells with a high number of nanocrystals [De Salvo '03]. -Full compatibility with standard CMOS fabrication process encouraging industrial manufacturability, reducing the number of masks with respect to the fabrication of floating gate device [Muralidhar '03] [ Baron '04] and ease of integration [Jacob '08]. -Decrease in cell disturb, due to the discrete nature of nanocrystals and their smaller size than a floating gate, the coupling factor between the gate and drain is reduced as well as the disturbs between neighboring cells. -Multi level applications, the threshold voltage of a silicon nanocrystal transistor depends on the position of stored charge along the channel [Crupi '03] [De Salvo '03]. Despite these peculiarities two main drawbacks characterize the Si-nc memories: -The weak coupling factor between the control gate and nanocrystals. This implies finding a method to keep the program/erase voltages small and to take advantage of the decrease in tunnel oxide thickness [De Salvo '01]. -The spread in the surface fraction covered with Si-nc limiting this type of cell for high integration density applications [Gerardi '04]. IBM presented the first Si-nc memory at IEDM [Tiwari '95] in order to improve the DRAM (Dynamic Random Access Memory) performance using a device with characteristics similar to EEPROM. The polysilicon floating gate is replaced by silicon nanocrystals grown on tunnel oxide by Low Pressure Chemical Vapor Deposition (LPCVD) two step process. This type of fabrication enables the size and density of nanocrystals to be controlled separately [Mazen '03] [Mazen '04]. Figure 1. 23. Schematic representation of the nucleation and growth two step process [Mazen '03]. Other techniques of fabrication have also been developed: ionic implantation [Hanafi '96], annealing of SRO (Silicon Rich Oxide) layers deposition [Rosmeulen '02] and aereosol deposition [De Blauwe '00]. Thanks to these research works, Motorola demonstrated the interest in using this device for non-volatile applications by developing a 4Mb memory array [Muralidhar '03]. 
In addition STMicroelectronics in collaboration with CEA-Leti presented their 1Mb memory array [De Salvo '03]. The three main actors in the industry of silicon nanocrystal memories are STMicroelectronics, Atmel and Freescale. Finally they processed the silicon nanocrystal memory cell in order to assume a cylindrical shape, which greatly benefits improve the coupling ratio (figure 1.26a). In addition, they used an optimized ONO control dielectric, enabling the reduction of the parasitic charge trapping during cycling (figure 1.26b); this type of cell was integrated in a 4Mb NOR array [Gerardi '08]. It clearly appears that increasing the Si-nc size, the programming window is increased too. Indeed, this result well agrees with the theoretical model [De Salvo '01] which states that the programming window linearly increases with the floating gate surface portion covered by the Si-NCs. In fact it was demonstrated for the Si-nc cell that the dynamic charging/discharging Si-dot memory corresponds better to a FG memory device operation rather than to a pure capture/emission trap-like behavior [De Salvo '01]. Starting from the capacitive model of Flash floating gate (section 1.3.1), and by considering the discrete nature of nanocrystals, the coefficient αD can be neglected and the equation ( 8) can be rearranged as: T G G FG C Q V V    (10) In this FG-like approach, we define a parameter that takes into account the surface portion covered by the nanocrystals (Rnc). It corresponds to a weighting factor for the trapped charges in the MOSFET threshold voltage; the Vt shift in this case takes into account this parameter and can be written as: 11) This approach will be considered as fundamental in the next chapters in order to improve the Si-nc memory cell coupling factor and thus the programming window. FC nc C R Q Vt Vt Vt       0 ( We reported in figure 1.28 the results shown in [Jacob '08] concerning cell reliability using the HTO control oxide and keeping the silicon nanocrystal size constant. Freescale was created by Motorola in 2004 when the studies on silicon nanocrystal memory cell had already been started [Muralidhar '03]. Freescale did a comparative study on the importance of control dielectric, using HTO and ONO samples because the latter, with its silicon nitride layer, represents a barrier against the parasitic oxidation of silicon nanocrystals and decreases the leakage current in the memory stack. As a drawback the parasitic charge trapping is present during the programming operations. In figure 1 ONO [Muralidhar '04] control dielectric. This parasitic charge trapping impacts also the data retention (figure 1.30). It is thus important to minimize it to reach the 20 year target. They demonstrated the advantage of discrete nature of silicon nanocrystals on data retention and read disturb; it enables the tunnel oxide thickness to be decreased and hence the program/erase voltages. Figure 1. 30. Program state data retention and erased slate READ disturb characteristics for a nanocrystal NVM bitcell with a 5mn tunnel oxide. Exhibited charge loss in cycled case is attributed to detrapping of parasitic charge [Muralidhar '04]. Further studies have been performed concerning the impact of silicon nanocrystals size and density [Rao '05] [Gasquet '06]. Figure 1.31a shows that the covered area impacts the program/erase speed and the saturation level of the programming window. [Rao '05]. b) 200ºC bake Vt shift for samples with different nanocrystals size [Gasquet '06]. 
The hot carrier injection speed increases with the covered area, while the Fowler-Nordheim erase operation is more efficient with smaller nanocrystals. This is due to the presence of HTO and the Coulomb blockade effect. Data retention measurements have been also carried out on a 4Mb memory array. The samples had a 5nm tunnel oxide and 10nm HTO (figure 1.31b). Here the data retention loss is shown during a 200ºC bake. The erased state is very close to neutral charge so the Vt shift is small while most of the variation in program state response originates in the first 54 hours of bake and appear uncorrelated to nanocrystal size. Moreover, Freescale decided to integrate silicon nanocrystals in high scalable Split Gate memories (figure 1.32) [Sung-Taeg '08] [Yater '09], where it is possible to control the current consumption during the hot carrier injection for low energy embedded applications [Masoero '11] [Masoero '12b]. Recent results of endurance and data retention are reported in figure 1.33. The cycling experiments (figure 1.33a) show program and erase Vt distribution width that remain approximately constant throughout extended cycling and a substantial operating window is maintained even after 300Kcycles. Concerning the data retention, due to the inherent benefits of NC-based memories, no extrinsic charge loss was observed on fresh and cycled parts (figure 1.33b). The average loss for 504hrs for uncycled arrays is about 70mV and for 10K and 100K cycled arrays it is 250mV and 400mV, respectively [Yater '11] [Sung-Taeg '12]. Finally all these studies underline the importance of achieving a good coupling factor to improve the programming window and thus cell endurance, paying attention to the tunnel oxide thickness that plays an important role for the data retention and disturbs. Flash technology for embedded applications The 1T silicon nanocrystal technology is not the only solution to replace the Flash floating gate. In particular for the market of embedded applications the Flash memory array is integrated in the microcontroller products with SRAM, ROM and logic circuits achieving System on a Chip solution (SoC). This type of integrated circuit enables the fabrication costs reduction due to the compatibility with the CMOS process, by improving the system performance because the code can be executed directly from the embedded Flash. The most important applications for embedded products are the smart card and automotive, where low energy consumption, fast access time and high reliability are required (figure 1.34). In this scenario each one of main industrial actors searches the best compromise between cell area, performance and cost. In figure 1.35 we show the mainstream Flash concepts proposed by the top players of SoC manufacturers [Strenz '11]. [Strenz '11]. Although a large variety of different cell concepts can be found in sell, only three main concepts in terms of bitcell structure dominate the marketall of them using NOR array configuration: 1T stacked gate concepts, splitgate concepts as well as 2-transistor NOR concepts. Due to highly diverging product requirements there is a variety of concepts tailored to specific applications. Looking into development of new nodes a clear slowdown of area shrink potential can be observed for classical bitcell concepts while reliability requirements are tightened rather than relaxed. This increases the pressure for new, emerging cell concepts with better shrink potential. 
We used this brief analysis (Robert Strenz, Infineon -Workshop on Innovative Memory Technologies, Grenoble 2012), to highlight the concept that the industry push its technology to overcome the problem of scaling cost. Innovative solutions for non volatile memory Since the ultimate scaling limitation for charge storage devices is too few electrons, devices that provide memory states without electric charges are promising to scale further. Several non-charge-storage memories have been extensively studied and some commercialized, and each has its own merits and unique challenges. Some of these are uniquely suited for special applications and may follow a scaling path independent of NOR and NAND flash. Some may eventually replace NOR or NAND flash. Logic states that do not depend on charge storage eventually also run into fundamental physical limits. For example, small storage volume may be vulnerable to random thermal noise, such as the case of superparamagnetism limitation for MRAM. One disadvantage of this category of devices is that the storage element itself cannot also serve as the memory selection (access) device because they are mostly two-terminal devices. Even if the on/off ratio is high, two terminal devices still lack a separate control (e.g. gate) that can turn the device off in normal state. Therefore, these devices use 1T-1C (FeRAM), 1T-1R (MRAM and PCRAM) or 1D-1R (PCRAM) structures. It is thus challenging to achieve small (4F 2 ) cell size without an innovative access device. In addition, because of the more complex cell structure that must include a separate access (selection) device, it is more difficult to design 3-D arrays that can be fabricated using just a few additional masks like those proposed for 3-D NAND [ITRS '12] [Jiyoung '09] [Tae-Su '09] [SungJin (figure 1.36). Ferroelectric Random Access Memory (FeRAM) FeRAM devices achieve non-volatility by switching and sensing the polarization state of a ferroelectric capacitor. To read the memory state the hysteresis loop of the ferroelectric capacitor must be traced and the data must be written back after reading. Because of this "destructive read," it is a challenge to find ferroelectric and electrode materials that provide both adequate change in polarization and the necessary stability over extended operating cycles. The ferroelectric materials are foreign to the normal complement of CMOS fabrication materials and can be degraded by conventional CMOS processing conditions. Thus, the ferroelectric materials, buffer materials and process conditions are still being refined. So far, the most advanced FeRAM [Hong '07] is substantially less dense than NOR and NAND flash. It is fabricated at least one technology generation behind NOR and NAND flash, and not capable of MLC. Thus, the hope for near term replacement of NOR or NAND flash has faded. However, FeRAM is fast, low power and low voltage, which makes it suitable for RFID, smart card, ID card and other embedded applications. In order to achieve density goals with further scaling, the basic geometry of the cell must be modified while maintaining the desired isolation. Recent progress in electrode materials show promise to thin down the ferroelectric capacitor [ITRS '12] and extend the viability of 2-D stacked capacitor through most of the near-term years. Beyond this the need for 3-D capacitors still remains a formidable challenge. Magnetic Random Access Memory (MRAM) MRAM devices employ a magnetic tunnel junction (MTJ) as the memory element. 
An MTJ cell consists of two ferromagnetic materials separated by a thin insulating layer that acts as a tunnel barrier. When the magnetic moment of one layer is switched to align with the other layer (or to oppose the direction of the other layer) the effective resistance to current flow through the MTJ changes. The magnitude of the tunneling current can be read to indicate whether a ONE or a ZERO is stored. Field switching MRAM probably is the closest to an ideal "universal memory", since it is non-volatile and fast and can be cycled indefinitely, thus may be used as NVM as well as SRAM and DRAM. However, producing magnetic field in an IC circuit is both difficult and inefficient. Nevertheless, field switching MTJ MRAM has successfully been done in products. In the near term, the challenge will be the achievement of adequate magnetic intensity fields to accomplish switching in scaled cells, where electromigration limits the current density that can be used. Therefore, it is expected that field switch MTJ MRAM is unlikely to scale beyond 65 nm node. Recent advances in "spin-torque transfer (STT)" approach, where a spin-polarized current transfers its angular momentum to the free magnetic layer and thus reverses its polarity without resorting to an external magnetic field, offer a new potential solution [Miura '07]. During the spin transfer process, substantial current passes through the MTJ tunnel layer and this stress may reduce the writing endurance. Upon further scaling the stability of the storage element is subject to thermal noise, thus perpendicular magnetization materials are projected to be needed at 32 nm and below [ITRS '12]. Resistive Random Access Memory (RRAM) RRAM is also a promising candidate next-generation universal memory because of its shorter write time, large R-ratio, multilevel capability, and relatively low write power consumption. However, the switching mechanism of RRAM remains unclear. RRAM are based on binary metal oxides has been attracting increasing interest, owing to its easy fabrication, feasibility of 3-D (stacked) arrays, and promising performances. In particular, NiO and HfO based RRAM have shown low voltage and relatively fast programming operations [Russo '09] [Vandelli '11]. RRAM functionality is based on the capability to switch the device resistance by the application of electrical pulses or voltage sweeps. In the case of metal-oxide-based RRAM devices, the switching mechanism has been recognized to be a highly localized phenomenon, where a conductive filament is alternatively formed and destroyed (at least partially) within the dielectric layer. Several physical interpretations for the switching processes have been proposed, including trap charging in the dielectric, space-charge-limited conduction processes, ion conduction and electrodeposition, Mott transition, and Joule heating. Such a large variety of the proposed physical mechanisms can be explained in part by the different dielectric and electrode materials and by the different procedures used in the experiments (unipolar or bipolar experiments). This aspect represents a limit today for the cell behavior understanding and a comprehensive physical picture of the programming behavior in RRAM device is still to be developed. This device results to be highly scalable, but limited by the size of select transistor in cell architecture. Another drawback is due to the high voltage necessary to create the conductive filament the first time to switch from the pristine state in a conductive state. 
This "first programming" operation has to be performed during the manufacturing process, thus increasing the fabrication complexity. Phase Change Random Access Memory (PCRAM) PCRAM devices use the resistivity difference between the amorphous and the crystalline states of chalcogenide glass (the most commonly used compound is Ge2Sb2Te5, or GST) to store the logic ONE and logic ZERO levels. The device consists of a top electrode, the chalcogenide phase change layer, and a bottom electrode. The leakage path is cut off by an access (selection) transistor (or diode) in series with the phase change element. The phase change write operation consists of: (1) RESET, for which the chalcogenide glass is momentarily melted by a short electric pulse and then quickly quenched into an amorphous solid with high resistivity, and (2) SET, for which a lower amplitude but longer pulse (usually >100 ns) anneals the amorphous phase into a low resistance crystalline state. The 1T-1R (or 1D-1R) cell is larger or smaller than NOR flash, depending on whether MOSFET or BJT (or diode) is used and the device may be programmed to any final state without erasing the previous state, which provides substantially faster programming throughput. The simple resistor structure and the low voltage operation also make PCRAM attractive for embedded NVM applications [ITRS '12]. The major challenges for PCRAM are the high current required to reset the phase change element and the relatively long set time. Interaction of phase change material with electrodes may pose long-term reliability issues and limit the cycling endurance. This is a major challenge for DRAM-like applications. Because PCRAM does not need to operate in page mode (no need to erase), it is a true random access, bit alterable memory like DRAM. The scalability of the PCRAM device to < 5 nm has been recently demonstrated using carbon nanotubes as electrodes [Feng '10] [Jiale '11] and the reset current followed the extrapolation line from larger devices. In at least one case, cycling endurance of 10 11 was demonstrated [Kim '10]. Conclusion In this chapter, we have presented the framework of this thesis. In the first part the economic context, the classification and the architectures of semiconductors memory were presented. Then, the Flash floating gate memory cell was described as well as its capacitive model that characterizes this device. Furthermore, the main program/erase mechanisms implemented in memory arrays are explained highlighting the importance of channel hot electron programming operation and the Fowler-Nordheim erasing for this thesis work. We thus presented the flash scaling limits and the proposed solutions; we explained the advantages of using a charge trapping layer instead of the continuous floating gate and a high-k control dielectric instead of the classical silicon oxide. Finally, we introduced the silicon nanocrystal memory cell that is the central point of this thesis. In particular we reported the state of the art of charge trap silicon nanocrystal memory, listing the various trials performed in the past. We introduced the impact on cell performances and reliability of some technical parameters: silicon nanocrystal size and density, control oxide, and the cell active shape. The object of this thesis will be to find the best tradeoff between some technological parameters, in order to optimize the programming window, the reliability and the energy consumption of our silicon nanocrystal cell. 
Introduction
In this section the results concerning the electrical characterization of the programming window are presented. The programming window of the silicon nanocrystal memory cell was measured using a defined experimental protocol developed in the STMicroelectronics-Rousset electrical characterization laboratory. With this procedure we evaluated the impact of the main technological parameters on the programming window: silicon nanocrystal size and density (covered area), presence of a silicon nitride capping layer, channel doping dose and tunnel oxide thickness. The results were compared to the state of the art in order to understand how to improve the cell performance using a CMOS process fully compatible with the existing STMicroelectronics one. The chapter concludes with the benchmarking of the silicon nanocrystal cell against the standard Flash floating gate.
Experimental details
One of the main limits of silicon nanocrystal memories is the narrow programming window [Gerardi '08] [Monzio Compagnoni '03] [De Salvo '03]. In order to evaluate how to improve the memory cell performance, it is important to develop manual and automatic tests.
Experimental setup
The electrical characterization of the silicon nanocrystal cell was performed using manual and automatic probe stations. The first was used to measure the programming/erase kinetic characteristics, while the second was used to obtain statistical information concerning the dispersion on wafer. In figure 2.1 a picture of the manual test bench is shown. It comprises:
• Manual prober
  o Equipped with a thermo chuck
• Switch matrix (optional)
  o To connect the instruments to the sample
• Tester
  o Electrical parameter analyser (HP4156)
  o LCR meter (HP4284)
  o Pulse generator (HP8110)
• Computer (LABView system) to drive the prober and the tester
The manual probe station is driven by the LABView system, which commands the instruments of the bench with homemade software. In particular the test bench is equipped with a HP4156 electrical parameter analyzer, a HP4284 LCR precision meter and a HP8110 pulse generator. The switch matrix enables the instruments to be connected to the 200mm wafer (sample). Using the LABView program we are able to measure the program/erase cell characteristics under different biasing conditions. The thermo chuck enables measurements in the temperature range from -40°C up to 250°C.
Methods of characterization
In order to characterize the programming window of the silicon nanocrystal cell and compare it with the characterizations obtained on the standard Flash floating gate in NOR architecture, we used an appropriate method to program the cell by channel hot electron and to erase it by the Fowler-Nordheim mechanism. The measurement protocol was divided into two parts and was kept unchanged for all samples. The first part was performed using the automatic bench; by applying only one program/erase cycle we were able to evaluate the programming window dispersion over the whole wafer. The evaluation of the programming window dispersion was performed using a fixed 5µs programming pulse, a gate voltage (Vg) of 9V, a drain voltage (Vd) of 4.2V, and source and bulk voltages (Vs and Vb respectively) of 0V. Concerning the erase phase, a pulse of 90ms was applied on the gate terminal using Vg=-18V, while the drain, source and bulk terminals were grounded (Vd=Vs=Vb=0V). The second part of the experiments was performed using the manual prober station.
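The kinetic measurements described in the next paragraph rely on gate-voltage staircases that emulate programming and erase ramps. As an illustration only, a minimal Python sketch of how such staircases can be built from a step amplitude and a step duration is given below; the numerical values are those quoted in the following paragraph, repeated here as assumptions.

```python
# Sketch: build the gate-voltage staircases used to emulate program/erase ramps.
# The step values are taken from the protocol described in the text (assumptions here).

def staircase(v_start, v_stop, v_step, t_step):
    """Return (time, voltage) points of a staircase going from v_start to v_stop."""
    points, t, v = [], 0.0, v_start
    while (v_step > 0 and v <= v_stop + 1e-9) or (v_step < 0 and v >= v_stop - 1e-9):
        points.append((t, v))          # start of the plateau
        t += t_step
        points.append((t, v))          # end of the plateau
        v += v_step
    return points

# CHE programming staircase: 3 V -> 9 V, 0.75 V steps of 0.5 us -> 1.5 V/us equivalent ramp
prog = staircase(3.0, 9.0, 0.75, 0.5e-6)

# FN erase staircase: -4 V -> -10 V, 0.25 V steps of 50 us -> 5 kV/s equivalent ramp
erase = staircase(-4.0, -10.0, -0.25, 50e-6)

print("equivalent programming ramp: %.2f V/us" % (0.75 / 0.5))
print("equivalent erase ramp: %.0f V/s" % (0.25 / 50e-6))
```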
The purpose of these manual-prober measurements was to apply program/erase pulses with different durations and amplitudes to get the kinetic evolution of the threshold voltage in the channel hot electron and Fowler-Nordheim regimes. The two methods are described below. The staircases have been used to emulate the ramps generated in STMicroelectronics products. The programming kinetic was performed by applying 4.2V pulses on the drain terminal and, on the gate terminal, a staircase from 3V to 9V with a step of 0.75V, followed by an additional pulse of 1µs. The duration of each pulse was 0.5µs in order to obtain a 1.5V/µs ramp. For the erase kinetic, 10V pulses were applied on the drain, source and bulk terminals, while a staircase from -4V to -10V was applied on the gate terminal. In this way it was possible to reach a gate-bulk voltage of 20V. This represents the maximum voltage value available in STMicroelectronics products. The step amplitude was 0.25V, while the duration was 50µs in order to emulate a 5kV/s ramp. After each pulse the cell state was read by measuring the gate voltage needed to drive a fixed drain current of 1µA. The programming window (ΔVt) is calculated as the difference between the programmed threshold voltage (Vtp) and the erased threshold voltage (Vte):
ΔVt = Vtp - Vte    (1)
Impact of technological parameters
After the technical details described above, we now present the results of the electrical characterization in terms of programming window. The aim is to understand the impact of the main technological parameters on the programming window and how to combine them to obtain the best results with respect to the Flash floating gate.
Effect of silicon nanocrystal size
The impact of the silicon nanocrystal size on the programming window has already been analyzed in depth [De Salvo '01] [Rao '05]. Nevertheless it is important to evaluate its effect in STMicroelectronics products. In [Jacob '08] it appears that the bigger the Si-NCs are, the larger the programming window is. This is due to the increasing surface portion covered by the nanocrystals. In this work, the studied samples have a channel width of W=90nm and a channel length of L=180nm. The cell stack is described in figure 2.4. On a p-type substrate a 5.2nm thick tunnel oxide was grown. The average silicon nanocrystal size and density have been measured in-line using a Critical Dimension Scanning Electron Microscopy (CDSEM) technique [Amouroux '12]. In this case we compare samples with two different diameters (Φ) of 6nm and 9nm. Then, to complete the stack, the ONO (Oxide-Nitride-Oxide) Inter-Poly Dielectric (IPD) layer was deposited to reach 14.5nm of Equivalent Oxide Thickness (EOT). On a silicon nanocrystal cell it is not possible to measure a capacitance to calculate the EOT of the ONO layer, because of the discrete nature of the nanocrystals. The ONO thickness is thus considered to be unchanged with respect to the standard Flash floating gate cell, because the fabrication process and the recipe are unchanged for the two devices. As for the floating gate cell, it was measured at the end of the process with capacitance-voltage characterizations and Transmission Electron Microscopy analysis. Figure 2.5a shows the average values and the dispersion of the program/erase threshold voltages obtained with the statistical measurements (30 samples). The minimum program/erase states to target the Flash floating gate are highlighted. According to our extrapolation it would be necessary to increase the nanocrystal diameter up to 14nm, maintaining the cell stack unchanged, in order to achieve a sufficient programming window.
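Throughout this chapter the covered area is derived from the in-line CDSEM diameter and density measurements. A minimal Python sketch of this calculation is given below; the diameter/density pairs are those of the two recipes developed later for the optimized cell, used here simply as assumptions, and the sketch reproduces the covering percentages quoted in that section.

```python
import math

def covered_area_fraction(diameter_nm, density_per_cm2):
    """Fraction of the channel area covered by nanocrystals,
    assuming non-overlapping circular dots of the given diameter."""
    radius_cm = (diameter_nm * 1e-7) / 2.0          # 1 nm = 1e-7 cm
    dot_area_cm2 = math.pi * radius_cm ** 2
    return dot_area_cm2 * density_per_cm2

# Diameter/density pairs of the two optimized recipes (values quoted later in this chapter)
for label, d_nm, density in [("Sample 1", 9.0, 7.3e11), ("Sample 2", 12.0, 6.7e11)]:
    print(f"{label}: {covered_area_fraction(d_nm, density):.1%} covered area")
# -> roughly 46% and 76%, in line with the 46.4% / 75.7% values given in the text
```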
Using the measured size and density (figure 2.4), we plot in figure 2.5b the correlation between the programming window and the percentage of covered area; as the covered area, and thus the coupling factor, increases, the programming window widens because of the higher number of trapped charges. With this cell structure, a covered area of 95% would be needed to achieve the minimum programming window of 4V, which is not consistent with the discrete nature of the Si-nc cell, as detailed in section 1.4. Moreover, the programming window increases with the silicon nanocrystal size, and so does the dispersion on wafer. Increasing the diameter by 3nm only increases the programming window by 0.5V, which is not sufficient for our application. After these preliminary evaluations we used the staircases, described in paragraph 2.2, to measure the program/erase kinetic characteristics. In figure 2.6a the results concerning the considered samples are compared. As expected, we notice that by increasing the nanocrystal size, the programming window is improved because the covered area increases, as does the cell coupling factor. In the literature it is shown that the FN erase speed can increase when the nanocrystal size decreases [Rao '05]. This is true for specific cell architectures and in particular for small nanocrystal diameters. However, when increasing the nanocrystal size and the covered area, the Coulomb blockade effect and quantum mechanisms can be neglected [De Salvo '01], and the coupling factor dependence thus becomes predominant. It is important to notice that the programming windows of figures 2.5 and 2.6 cannot be compared because two different program/erase schemes are used. With the ramped gate voltage the program/erase efficiency decreases with respect to the box pulses used for the statistical measurements shown in figure 2.5. To conclude, we can confirm that increasing the silicon nanocrystal size, and thus the covered area, improves the programming window. This effect is more pronounced for the FN erase operation.
Effect of silicon nitride capping layer
In the literature it is shown how to improve the programming window using high-k material interpoly dielectrics [Molas '07]. The final programming window is increased with the Si3N4 capping layer thanks to the increase in charge trapping probability and the improvement in coupling factor (figure 2.9). In particular the former improves the channel hot electron programming efficiency, while the increase in covering ratio improves in particular the erase operation. It is worth noting that the programming windows of figures 2.8 and 2.9 are not directly comparable because the program/erase mechanisms differ due to the different schemes (box or ramp pulses). Finally, these results suggest that the silicon nitride capping layer is a solution to increase the programming window while maintaining the nanocrystal physical parameters (size and density) constant. In section 3 we will analyze the impact of the Si3N4 capping layer on Si-nc cell reliability.
Effect of channel doping dose
The impact of the channel doping dose (CDD) on the threshold voltage of a MOS transistor is well known and has been analyzed in depth in the literature [Brews '78] [Brews '79a]. In this section we show the experimental results concerning the programming window achieved using the silicon nanocrystal cell with 9nm silicon nanocrystals capped by a 2nm Si3N4 trapping layer, where the channel doping dose is varied. More precisely, three CDD are used: 2.4•10^13 at/cm², 8.5•10^13 at/cm², and 11•10^13 at/cm² (figure 2.10).
The aim of this trial is to increase the injection probability thanks to the higher number of carriers at the channel surface. In figure 2.11a the programming window dispersion on wafer is shown. It is important to notice that, in spite of the sample dispersion, the trends of the threshold voltage shift versus the channel doping dose are coherent; Vtp and Vte shift toward higher levels, due to the threshold voltage dependence on CDD [Brews '79a] [Brews '79b] [Booth '87]. As expected, the average programming window increases with the channel doping dose (figure 2.11b). The programming window tends to saturate for the highest CDD. We showed in figure 2.11a the threshold voltage dependence on CDD; hence we performed kinetic measurements forcing the programmed and erased threshold voltages to the levels of Vtp=8.2V and Vte=3.5V. These levels represent the targets used in STMicroelectronics products. The Vt adjusting operation enabled the different devices to be compared in order to evaluate the impact of the channel doping dose on the programming window only. We notice an improvement in programming efficiency when CDD=11•10^13 at/cm² (figure 2.12a) while, at the same time, the erase efficiency is decreased (figure 2.12b). The programming window trend is the same as for the statistical measurements made with box pulses (increasing the CDD increases the programming window), but the absolute values are not comparable because, in this case, the Vt adjusting is performed and the program/erase scheme is also different (box versus ramp). In this section we showed that by increasing the channel doping dose the programming window increases, but the threshold voltage adjusting is needed to reach good levels of the programmed and erased states. In section 3.1 we demonstrated that the erase operation can be improved by increasing the nanocrystal size and density. We can therefore affirm that it is important to find the best tradeoff between the channel doping dose and the channel covered area in order to optimize the programming window. Finally, we decided to use the highest channel doping dose (CDD=11•10^13 at/cm²) for the cell optimization, as reported at the end of the chapter.
Effect of tunnel oxide thickness variation
The last technological parameter we varied in order to improve the Si-nc cell performances was the tunnel oxide thickness (tunox), due to the well known dependence of Fowler-Nordheim tunneling on tunox [Fowler '28] [Punchaipetch '06]. In the literature, alternative solutions to the SiO2 tunnel oxide integrating high-k materials are proposed [Fernandes '01] [Maikap '08]. The studied samples are described in figure 2.13a; three values of SiO2 tunnel oxide thickness were considered: tunox=3.7nm, tunox=4.2nm and tunox=5.2nm. It is not possible to measure the tunnel oxide thickness of the silicon nanocrystal cell because the nature of the nanocrystals does not enable them to be contacted directly. We thus decided to measure the capacitance of the memory stack integrated in large structures, applying -7V on the gate terminal (30 samples). We verified that the measurement did not introduce parasitic charge trapping in the nanocrystals. We then extrapolated the EOT of the memory stack and compared it with theoretical results (figure 2.13b). The difference between the calculated and measured values is due to the fact that the calculated EOT does not take into account the substrate and gate doping in the capacitance stack.
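For reference, the EOT extrapolation presumably relies on the standard parallel-plate relation between the measured capacitance and the equivalent oxide thickness; a minimal Python sketch is given below, with hypothetical area and capacitance values.

```python
# Sketch: extract the equivalent oxide thickness (EOT) of a dielectric stack from a
# capacitance measured on a large test structure, using EOT = eps0 * eps_SiO2 * A / C.
# The test-structure area and the measured capacitance below are hypothetical values.

EPS0 = 8.854e-12       # vacuum permittivity (F/m)
EPS_SIO2 = 3.9         # relative permittivity of SiO2

def eot_from_capacitance(capacitance_f, area_m2):
    """EOT in meters of a stack modeled as a simple parallel-plate capacitor."""
    return EPS0 * EPS_SIO2 * area_m2 / capacitance_f

area = 100e-12          # hypothetical 100 um^2 structure
c_meas = 238e-15        # hypothetical capacitance measured in accumulation (F)
print(f"EOT = {eot_from_capacitance(c_meas, area) * 1e9:.1f} nm")
```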
However, one can notice that the relative variation of the stack EOT corresponds to the variation of the tunnel oxide thickness between the three samples. Furthermore, Transmission Electron Microscopy (TEM) photos were taken to measure the physical tunnel oxide thickness at the end of the fabrication process (figure 2.13c). Using a specific image processing based on the light contrast of the TEM pictures, we measured the physical thicknesses of our samples, which correspond to the expected ones. Also in this case we performed experiments to evaluate the programming window dispersion on wafer (30 samples tested); the results are reported in figure 2.14. The dispersion is greater than 1V, due to the process variation. This limits the data interpretation, but the trend is clear: as the tunnel oxide thickness increases, the programming window decreases. Here the cell reaches a satisfactory erase level using tunox=4.2nm and tunox=3.7nm but, as in the case of the channel doping dose variation, the Vt adjusting is needed due to the impact of the tunnel oxide thickness on the cell threshold voltage [Yu-Pin '82] [Koh '01]. Also in this case, before the kinetic characterizations, the program/erase threshold voltages were fixed (Vte=3V and Vtp=8.2V), in order to evaluate the impact of the tunnel oxide thickness on the programming window only. One can notice that the tunnel oxide thickness has a limited influence on the channel hot electron programming operation. This is due to the dominant role of the horizontal electric field on the hot carrier injection probability [Eitan '81]. In table 2.1 we reported the vertical electric field (ξvert) in the tunnel oxide calculated using a 9V gate voltage, and we show that, for the considered tunnel oxide thicknesses, the tunox variation only slightly impacts the vertical electric field [Tam '84]. Instead, during the Fowler-Nordheim erase operation, the gate voltage reaches -20V and only the vertical electric field is present. The F-N erase only depends on the applied gate voltage and the tunnel oxide thickness [Fowler '28], keeping the temperature constant. We can thus affirm that the impact of the tunnel oxide thickness is more relevant for the erase operation; in particular a thickness of 4.2nm is sufficient to achieve the expected 4V programming window in less than 1ms. As a consequence of these considerations, we performed Fowler-Nordheim program/erase characterizations on the three samples. In figure 2.17 the ΔVt obtained after 100ms of program/erase time is plotted as a function of the tunnel oxide thickness, showing the dependence of the Fowler-Nordheim program/erase operations on tunox. The two operations are not symmetrical because, by applying a positive or a negative voltage on the gate terminal, the channel zone is depleted or not, which in turn varies the bulk surface potential. Moreover, this technological parameter impacts the reliability characteristics, which will be described in the next chapter. Finally, it is important to find a satisfactory tradeoff between the tunnel oxide thickness and the channel doping dose in order to adjust the program/erase threshold voltages. Moreover, in order to reach a 4V programming window, the maximum tunnel oxide thickness is 4.2nm for this cell architecture.
Figure 2.17. Dependence of the programming window, measured using Vg=±18V after 100ms, on the tunnel oxide thickness for the programmed and erased states.
Programming window cell optimization
In the previous paragraphs we evaluated the impact of the main technological parameters on the programming window of the STMicroelectronics silicon nanocrystal memory cell.
The aim of this analysis was to define the best way to improve the programming window using the standard program/erase pulses used for the Flash floating gate memory cell, while keeping the cell dimensions unchanged. Below we summarize the main conclusions of the previous studies that we have taken into account in order to optimize the silicon nanocrystal cell:
• When increasing the silicon nanocrystal size, and thus the covered cell area, the programming window increases and in particular the Fowler-Nordheim erase operation is improved. We noticed that, using the standard memory cell stack, a covering of 95% would be required to reach the 4V programming window, which is not coherent with the silicon nanocrystal principle of functioning. In order to improve the programming window and optimize the Si-nc cell stack, we considered it fundamental to increase the coupling factor, as explained for the Flash floating gate by [Wang '79] [Wu '92] [Pavan '97] [Esseni '99] and in chapter 1. Two different recipes have been developed to achieve 9nm and 12nm silicon nanocrystals, reaching respectively 46% and 76% covered area. Furthermore, with the coupling factor optimization it was possible to decrease the ONO layer thickness down to 10.5nm to increase the vertical electric field during the erase operation. This thickness value was chosen in accordance with the recipes available in the STMicroelectronics production line.
• The presence of a Si3N4 capping layer on the silicon nanocrystals increases the charge trapping probability and the covered channel area. The coupling factor is increased and the programming window therefore increases. Observing the CDSEM pictures we noticed that the Si3N4 capping layer grows around the silicon nanocrystals. Hence, if their size is big enough, the contact and coalescence of hybrid nanocrystals can occur. In this case it is not possible to confirm whether the programming window improvement is due to the presence of the Si3N4 capping layer or to the increase in covered area. In figure 2.7 we have shown the isolation of the nanocrystals in a tilted CDSEM picture. In figure 2.18 we plot on the same graph the results of figure 2.5 and figure 2.8, showing that the programming window obtained with the Si3N4 layer can be extrapolated from the trend obtained when varying the covered area. In this case we can consider that the improvement in programming window depends mainly on the covered area and only slightly on the increase in charge trapping probability. Even if the presence of the nanocrystal Si3N4 capping layer is helpful to improve the programming window, we decided to avoid this process step in order to minimize the effects of parasitic charge trapping [Steimle '04] [Gerardi '07] [Bharath Kumar '06]. This choice will be explained and described in the next chapter on cell reliability.
• We have shown that it is possible to improve the programming window by increasing the channel doping dose, while paying attention to the shift of the threshold voltages. By increasing the channel doping dose up to 10^14 at/cm², a 20% programming window gain is achieved. In this case the adjusting of the program/erase threshold voltages is needed and, to do this, it is important to find the best tradeoff with the variation of the other parameters (nanocrystal size and tunnel oxide thickness). After these considerations we decided to use 10^14 at/cm² as the CDD for the optimized silicon nanocrystal cell in order to reach a higher programming window; the details are given below.
• Finally, we studied the impact of the tunnel oxide thickness on the program and erase operations. In particular, we demonstrated that during channel hot electron programming the tunnel oxide thickness only slightly impacts the programming window, because of the dominant dependence on the lateral electric field. On the contrary, this technological parameter strongly impacts the Fowler-Nordheim operations. In particular we showed its effect on the erase operation using both the ramped gate voltage and box pulses. In this latter case an improvement of 1.5V/nm can be achieved. In order to reach the 4V programming window a tunnel oxide of 4.2nm or thinner is needed. As for the channel doping dose, the program/erase threshold voltages are shifted by the tunnel oxide variation and a Vt adjusting operation is needed.
The layers stacked in the optimized nanocrystal cell are shown in figure 2.19. The 4.2nm thick SiO2 tunnel oxide was grown on a p-type substrate with a surface doping dose of 10^14 at/cm². Two different recipes were developed to grow 9nm and 12nm silicon nanocrystals. The cell stack is completed with an ONO layer with an equivalent oxide thickness of 10.5nm. We showed in chapter 1 that by decreasing the ONO thickness, the capacitance between the control gate and the floating gate is increased and the programming window is thus decreased; to compensate this effect we increased the silicon nanocrystal size up to 12nm. Furthermore, the Fowler-Nordheim erase operation can be improved by decreasing the ONO thickness and thus increasing the vertical electric field on the tunnel oxide. Using these two nanocrystal fabrication recipes we obtained the following samples:
• Sample 1: Φ=9nm; density=7.3•10^11 nc/cm²; covering=46.4%
• Sample 2: Φ=12nm; density=6.7•10^11 nc/cm²; covering=75.7%
In figure 2.20 the program/erase kinetic characteristics are plotted for the optimized stacks; the dispersion on wafer is also highlighted (30 samples tested). The first point we notice is the limited dispersion, which is comparable for the two samples whatever the nanocrystal diameter. Once again we demonstrated that for the silicon nanocrystal cell the covered area only slightly impacts the channel hot electron programming, while the Fowler-Nordheim erase operation is strongly improved. The cell with the higher covered area can be erased in 0.4ms, reaching a programming window of 4.7V, which is greater than the 4V minimum programming window target; the program/erase threshold voltages were fixed at Vte=3V and Vtp=8.2V. In this case the quantum and/or Coulomb blockade effects are negligible because of the large size of the nanocrystals and the thick tunnel oxide [De Salvo '01]. In conclusion, with the optimized cell architecture it is possible to reach the 4V programming window using the standard program/erase ramps described in section 2.2. In the following chapters we will continue the study of the silicon nanocrystal cell by analyzing its reliability and energy consumption. The program/erase pulses are described in section 2.2.
Benchmarking with Flash floating gate
To conclude this paragraph we compare the results obtained on the optimized silicon nanocrystal memory cell (Φ=12nm; density=6.7•10^11 nc/cm²; covering=75.7%) with the standard Flash floating gate, keeping the cell size constant. To compare these devices the program/erase levels were fixed to Vte=3V and Vtp=8V. In figure 2.21 the program kinetic characteristic is shown for each device.
For the optimized Si-nc cell the performances are the same as those of the floating gate cell: the 4V minimum programming window is reached by channel hot electron programming in 3.5µs using the ramped gate voltage. The erase time needed to reach the minimum 4V programming window is 0.2ms for the optimized Si-nc cell, achieving a gain of 60% with respect to the Flash floating gate. Considering these last results, the programming window width can be increased up to 5V by adjusting the program/erase ramp time. This is important with regard to the programming window degradation after cycling, as explained in the next chapter. In conclusion, all the trials varying the different technological parameters (nanocrystal size and density, Si3N4 capping, channel doping dose and tunnel oxide thickness) have enabled us to optimize the silicon nanocrystal cell programming window in order to make it comparable with the Flash floating gate memory cell. The aim is to substitute the floating gate and thus decrease the wafer costs. In the next chapter we will compare the optimized silicon nanocrystal cell with the Flash floating gate memory according to the reliability results (endurance and data retention).
Chapter 3 - Reliability of silicon nanocrystal memory cell
Introduction
In this section we present the results concerning our study of the reliability of the silicon nanocrystal memory cell. The data retention, and thus the charge loss, is evaluated for different temperatures starting from a fixed programmed state [Gerardi '02]. The aim of the endurance experiments is to evaluate the cell functionality after a large number of program/erase cycles, typically 100k. In particular, the results will show the impact of some technological parameters on cell reliability, such as: silicon nanocrystal size and density, silicon nitride capping layer, channel doping dose and tunnel oxide. The understanding of the experimental results will be useful to improve the cell performances, as in the previous chapter devoted to the programming window characterization. At the end we will present the results obtained for the optimized STMicroelectronics nanocrystal cell and we will compare them to the standard Flash floating gate.
Data retention: impact of technological parameters
The data retention experiments have been performed by programming the silicon nanocrystal cell with the manual and automatic test benches described in chapter 2. Let us evaluate the effect of the main technological parameters on data retention and choose the best cell architecture configuration to optimize the performance.
Effect of silicon nitride capping layer
We have seen in chapter 2 that the hybrid silicon nanocrystal memory is an attractive solution to improve the cell programming window [Steimle '03] [Colonna '08] [Chen '09] [Tsung-Yu '10]. Moreover, many papers in the literature present the integration of high-k materials as a good option to achieve better cell performances [Lee '03] [Molas '07]. At STMicroelectronics we were not able to integrate high-k materials in the process flow, so we decided to develop a hybrid solution by capping the silicon nanocrystals with silicon nitride (Si3N4) and to compare it with the standard Si-nc cell. The presence of this layer improves the programming window by increasing the covered area and generating a higher number of trapping sites with respect to the simple Si-nc utilization, as explained in chapter 2. The literature also explains the effect of temperature on charge loss when a silicon nitride layer is used to store the charges.
This is the case for SONOS and TANOS memories [Tsai '01]. This explains why our hybrid Si-nc+SiN memory loses more charge than the simple Si-nc cell. The difference can be due to the portion of charge trapped at the tunox/SiN interface around the Si-nc. These trapped charges are more energetic and can easily be lost when the temperature increases. In this case the presence of the Si3N4 layer does not improve the cell data retention, because it is not sandwiched between the nanocrystal and the bulk, where it would increase the barrier thickness. Rather, it caps the nanocrystals and directly contacts the tunnel oxide, creating parasitic charge trapping at the tunox/SiN interface.
Effect of channel doping dose
In figure 3.3 we plot the data retention results for the samples where the channel doping dose (CDD) is changed. In this case the tunnel oxide thickness is 5.2nm and the nanocrystals are capped with the Si3N4 layer (Φ=9nm+SiN=2nm). The charge loss at 150°C is not impacted by the CDD. The slight difference with the data presented in figure 3.1 can be due to the parasitic charge trapping at the tunox/SiN interface caused by the irregular Si3N4 layer deposition on the wafer; nevertheless, the difference is not relevant for the understanding of the cell behavior.
Effect of tunnel oxide thickness
The most important parameter for evaluating the charge loss during data retention is the tunnel oxide thickness (tunox) because, as shown in figure 3.2b, it defines the thickness of the barrier between the nanocrystals and the substrate. The literature explains the direct dependence of the charge loss on tunox in terms of applied electric field [Weihua '07] [Ghosh '10]. The charge loss mechanism can be activated, and it depends on the tunnel oxide thickness and on the type of traps generated by the Si-nc fabrication process. We considered samples with three different tunnel oxide thicknesses: 3.7nm, 4.2nm and 5.2nm (description in figure 2.13). Figure 3.4 shows the data retention results at 27°C, 150°C and 250°C. In these samples the silicon nanocrystals are capped by the Si3N4 trapping layer. Ideally a memory cell must keep the stored charge for 10 years at 27°C. Considering the charge loss, the cell program level has to remain greater than the program verify level; in our case this level is fixed at 5.7V. The data retention specification is reached only using a tunnel oxide thickness of 5.2nm, whereas if tunox=4.2nm is used, the charge loss constraint is very close to the acceptable limit. Moreover, the charge loss mechanism is strongly accelerated if the temperature is increased. This is due to the parasitic charge trapping at the tunox/SiN interface [Chung '07]. As previously explained, the charge loss mechanism in this case is similar to that of SONOS and TANOS memories [Hung-Bin '12]. In table 3.1 we summarize the percentage of lost charge after 186 hours for all the temperatures and tunnel oxide thicknesses considered. In figure 3.5 we show the Arrhenius plot of the retention time, defined as the time necessary to reach a Vt of 6V, which is very near the program verify level. The slopes of the extrapolated trends yield the charge loss activation energy (Ea) for each tunnel oxide thickness embedded in our memory stack. As expected, when increasing the tunnel oxide thickness, a higher energy is necessary to activate the charge loss mechanism [Weihua '07] [Ghosh '10]. The extrapolated values are comparable to those found in the literature [Gerardi '08] [Lee '09] [Gay '12].
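The activation energies of figure 3.5 follow from the slope of ln(retention time) plotted against 1/kT. A minimal Python sketch of this extraction is given below; the retention times used are hypothetical placeholders, not the measured data.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(points):
    """Least-squares slope of ln(t_ret) versus 1/(kT).
    points = [(T_celsius, t_ret_hours), ...]; returns Ea in eV,
    assuming an Arrhenius law t_ret = t0 * exp(Ea / kT)."""
    x = [1.0 / (K_B * (t_c + 273.15)) for t_c, _ in points]
    y = [math.log(t_h) for _, t_h in points]
    n = len(points)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    den = sum((xi - mean_x) ** 2 for xi in x)
    return num / den  # slope of ln(t) vs 1/kT is Ea

# Hypothetical retention times (hours) at the three measurement temperatures
example = [(27.0, 9e4), (150.0, 2e2), (250.0, 5.0)]
print(f"Ea = {activation_energy(example):.2f} eV")
```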
The remaining differences with respect to the literature are due to the different cell architectures. More particularly, these activation energies can be impacted by the parasitic charge trapping in the Si3N4 capping layer. We can conclude that it is possible to achieve the data retention specification using a 5.2nm thick tunnel oxide in the temperature range [27°C; 150°C] but, as we demonstrated in chapter 2 and in accordance with the literature, this parameter impacts the coupling factor and the Fowler-Nordheim erase efficiency. To improve the performance, the cell architecture was changed, avoiding the Si3N4 capping layer and thus minimizing the parasitic charge trapping (see section 3.4).
Endurance: impact of technological parameters
Another important reliability criterion to evaluate is the endurance of the silicon nanocrystal memory cell. We present in this chapter the memory device degradation, in terms of programming window closure, when the number of program/erase cycles is increased. We investigate the effect of different technological parameters such as: silicon nanocrystal size and density (covered area), silicon nitride capping layer (Si3N4), channel doping dose and tunnel oxide thickness. The aim is to understand the Si-nc cell behavior and choose the best architecture configuration to optimize the memory performance.
Impact of silicon nanocrystal size
We described in chapter 2 that increasing the nanocrystal size, and thus the covered channel area, corresponds to an increase of the programming window. Using the same samples, we report in figure 3.7 the results of 100k-cycle endurance experiments. The cells were programmed by channel hot electron using Vg=9V, Vd=4.2V and tp=5µs; the Fowler-Nordheim erase was performed with Vg=-18V, using a ramp of 5kV/s followed by a plateau of 90ms (te). A schematic of the cycling signals is reported in figure 3.6. The programming window at the beginning is 1.9V for the bigger nanocrystals and 1.1V for the 6nm nanocrystals. This result is coherent with the data reported in chapter 2. The erase operation is improved by increasing the channel covered area (9nm Si-nc), because the coupling factor is increased. However, in both cases we notice an important shift of the threshold voltages, in particular for the erased state (2.4V). This can be due to the parasitic charge trapping in the ONO interpoly dielectric layer [Jacob '07] [Gerardi '07] [Hung-Bin '12]. The electric field applied during the FN erase is not sufficient to evacuate the charges trapped during the programming operation in the SiN layer of the ONO stack. The same phenomenon is present in SONOS and TANOS memories. Despite the improvement due to the increase of the silicon nanocrystal size (covered area), the cell does not reach the 100k-cycle specification. The cell with the 6nm silicon nanocrystals, after only 10 cycles, has its erase threshold voltage at the same level as the program threshold voltage; hence the reading circuits cannot discriminate the two states. For the sample with 9nm embedded Si-nc the endurance limit is 10k cycles. Finally, to improve the programming window and the erase efficiency, and thus the cell endurance, one way is to increase the silicon nanocrystal covered area. Further developments of the cell architecture (Si3N4 capping layer, channel doping dose, tunnel oxide thickness) are needed to reach a good programming window; we will study these aspects in the next sections.
Impact of silicon nitride capping layer
We repeated the endurance experiments on the samples with the silicon nanocrystals capped by the Si3N4 layer.
Impact of channel doping dose
We have previously seen in chapter 2 that increasing the channel doping dose enables the programming window to be increased and generates a shift of the program/erase threshold voltages. One can notice the shift of the programming window toward higher voltages at the beginning of cycling, due to the increase of CDD as described in chapter 2. Moreover, in this case the parasitic charge trapping in the Si3N4 capping layer is present. This is demonstrated by the shift of the threshold voltage during cycling. The closure of the programming window is evident for the three samples, but the highest CDD presents the most stable programming threshold voltage and only the erased state shifts. In order to summarize the results, we report in table 3.3 the values of the programming windows before and after the cell cycling and the program/erase threshold voltage shifts. For the first time we achieved a 100k-cycle endurance characteristic, keeping the programmed and erased states separated, using CDD=1.1•10^14 at/cm² and an erase time te=10ms, but we can note that the programming window is only 1.1V. That is not enough to achieve a good cell functioning, hence other improvements of the memory cell stack are necessary. In the next section the impact of the tunnel oxide thickness is studied.
Impact of tunnel oxide thickness
The last technological parameter we studied is the tunnel oxide thickness. We varied it between 3.7nm and 5.2nm, using the silicon nanocrystals capped by the silicon nitride layer. One can notice the limited influence of the tunnel oxide thickness on the CHE programming operation, while the FN erase is strongly impacted by this technological parameter. The endurance experiments confirm the erase efficiency improvement when the tunnel oxide thickness is decreased. This is due to the higher vertical electric field applied, as explained in chapter 2.
Silicon nanocrystal cell optimization
In this chapter we evaluated the impact of the main technological parameters of the STMicroelectronics silicon nanocrystal memory cell on its reliability. The results reported here concerning the data retention and endurance experiments are consistent with the literature, in particular the charge loss activation energy and the threshold voltage shift during cycling due to the parasitic charge trapping. The aim of this analysis was to define the best way to achieve better results in terms of reliability of the Si-nc memory cell. In order to optimize the silicon nanocrystal cell we have taken into account the following points:
• In the literature it is shown that the data retention is not strongly impacted by the nanocrystal size [Crupi '03] [Weihua '07] [Gasquet '06]. However, this parameter directly impacts the programming window and the erase efficiency, as shown in chapter 2. Moreover, increasing the covered area leads to a good cell functioning after 100k program/erase cycles, with a better erase efficiency during cycling. In any case we noticed that increasing the covered area, and thus the cell coupling factor, is not sufficient, with the studied cell architecture, to erase the parasitic charge trapped in the ONO control dielectric [Gerardi '07] [Jacob '07]. Hence it is important to increase the nanocrystal size and density, but further improvements of the cell architecture are needed.
• The presence of the silicon nitride capping layer on the nanocrystals increases the charge trapping probability and the cell covered area. We described in section 3.2.1 the differences between samples with nanocrystals entirely surrounded by the Si3N4 layer and samples where the Si-nc are grown on the SiO2 tunnel oxide and afterwards capped by Si3N4. Our cell corresponds to the second case. Thus, there is no benefit concerning the data retention from the Si3N4 presence, because the barrier to consider for the data retention corresponds only to the tunnel oxide thickness. Moreover, the presence of parasitic charge trapping at the tunox/Si3N4 interface facilitates the charge loss at high temperature. Concerning the cell endurance with the silicon nitride capping layer, the coupling factor is increased and thus the programming window increases too, but the parasitic charge trapping in the Si3N4 does not enable good cell functionality after 100k program/erase cycles with the described memory stack. To avoid the parasitic charge trapping that worsens the Si-nc cell reliability, we decided for the optimization to skip this process step and to adjust the other technological parameters in order to achieve better results.
• The data retention is unchanged when varying the channel doping dose. It is thus possible to achieve a programming window gain by increasing the channel doping dose and adjusting the program/erase levels if needed (chapter 2). Moreover, increasing the CDD decreases the shift of the programmed threshold voltage because the programmed threshold voltage is close to the saturation level; CDD=10^14 at/cm² is chosen for the optimized silicon nanocrystal memory cell.
• Finally, we showed the charge loss dependence on the tunnel oxide thickness and we extrapolated the activation energies related to the different samples. As in the case of the erase operation, the charge loss increases when the tunnel oxide thickness is decreased. We noticed that a 5.2nm tunnel oxide is needed to achieve the data retention specification for temperatures up to 150°C. On the other hand, the tunnel oxide thickness strongly impacts the Fowler-Nordheim erase operation. Hence, to achieve good cell functioning after 100k program/erase cycles, a tunnel oxide thickness of 3.7nm has to be used. For our study it is now important to evaluate the cell behavior using a 4.2nm tunnel oxide, embedded in a different architecture without the silicon nitride capping layer and with an optimized ONO stack, in order to increase the vertical electric field and improve the program/erase efficiency and the cell reliability.
Data retention optimization
It is now clear that, if on the one hand the Si3N4 capping layer increases the programming window, on the other hand it strongly degrades the Si-nc cell reliability. We show below the results obtained using the silicon nanocrystal cell with the optimized stack, described in chapter 2 (figure 2.19). In order to show the gain reached by avoiding the Si3N4 capping layer, in figure 3.12 we plot the data retention characteristics at 150°C of the optimized Si-nc cell (Φ=9nm) compared with the data of the hybrid nanocrystal (Si-nc+SiN) cell; the two samples have the same tunnel oxide thickness (tunox=4.2nm). In this case the optimized Si-nc cell is able to retain the stored charge up to 10 years at 150°C. This result demonstrates that the data retention is strongly improved by avoiding the silicon nitride capping layer.
Endurance optimization
We complete the optimized cell reliability characterization by showing the data concerning the endurance degradation.
The impact of Si3N4 on endurance was evaluated above. In figure 3.15 we plot the results of the hybrid silicon nanocrystal cell compared with the optimized cell; the cell schematics are also shown. In order to achieve the same programming window (4V) at the beginning of cycling, we used different program/erase conditions:
• Hybrid Si-nc cell (Φ=9nm+SiN=2nm)
  o CHE programming: Vg=9V; Vd=4.2V; tp=5µs.
  o FN erase: Vg=-18V; te=5kV/s ramp + 10ms.
• Optimized Si-nc cell (Φ=9nm)
  o CHE programming: Vg=9V; Vd=4.2V; tp=1µs.
  o FN erase: Vg=-18V; te=5kV/s ramp + 1ms.
Using the optimized memory stack it is possible to decrease the programming and erase times thanks to the higher covered area and the associated coupling factor. Moreover, by avoiding the Si3N4 capping layer, the erase efficiency and the endurance are greatly improved. A very slight shift of the program/erase threshold voltages for the optimized sample results in a 3.6V programming window after 100k cycles. This result was reached without a pre-cycling cell treatment. In fact, using a positive or negative high-voltage stress before cycling helps accelerate the degradation process and improves the endurance performance, with less memory window decrease [Yong '10]. The threshold voltage shifts measured for the optimized Si-nc cell are not so marked, thus the pre-cycling treatment is not needed. After reaching these good results with the optimized Si-nc cell, we show in figure 3.16, for the first time to our knowledge, the 1M-cycle endurance characteristics of two optimized samples with different nanocrystal sizes (Φ=9nm and Φ=12nm), both achieving a large programming window. The cell with 12nm Si-nc is able to maintain a 4V programming window after 1M cycles, improving on the results published in [Ng '06]. In table 3.5 the values of the programming window before and after cycling, as well as the threshold voltage shifts, are reported. Hence, to improve the endurance performances up to 1M cycles, it is important to avoid the Si3N4 capping of the Si-nc, to increase the covered area and to use a thinner ONO layer. Using the program/erase conditions of the experiments reported in figure 3.16, we repeated the cycling varying the temperature (from T=-40°C up to T=150°C). In figure 3.17 the results are shown for the Si-nc cell with the higher covered area (sample with Φ=12nm and density=6.7•10^11 nc/cm²). The programming window after 1M cycles remains larger than 4V and its value does not depend on the temperature. One can notice that by increasing the temperature the characteristic shifts toward lower voltages. Both the programming and the erase operations are impacted by the temperature [Della Marca '13]. The programming efficiency decreases with increasing temperature because the channel current decreases, as does the injection probability [Eitan '81] [Emrani '93]. Moreover, at low temperature, an increase in mobility is observed for Si-nc transistors, generating a quasi-linear increase of the threshold voltage [Souifi '03]. In the case of the FN erase, the efficiency increases with the temperature. This is justified by assuming that the dominant conduction mechanism is trap-assisted [Zhou '09]. Therefore the programming window is larger than 4V at the first cycle for all the temperatures, and the shift of the threshold voltages is due to the program/erase conditions that are kept unchanged. In table 3.6 the programming window before and after cycling, as well as the threshold voltage shifts, are reported for different temperatures: T=-40°C, T=27°C and T=150°C.
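Tables 3.5 to 3.7 all report the same figures of merit: the programming window at the first and last cycle and the program/erase threshold voltage shifts. A minimal Python sketch of how such quantities can be derived from an endurance log is given below; the cycle and threshold voltage values are hypothetical placeholders.

```python
# Sketch: figures of merit extracted from an endurance log.
# Each entry is (cycle_number, Vtp, Vte); the values below are hypothetical.

def endurance_summary(log):
    """Return initial/final programming window and Vtp/Vte shifts over cycling."""
    first, last = log[0], log[-1]
    pw_initial = first[1] - first[2]
    pw_final = last[1] - last[2]
    return {
        "PW_initial_V": pw_initial,
        "PW_final_V": pw_final,
        "PW_closure_V": pw_initial - pw_final,
        "Vtp_shift_V": last[1] - first[1],
        "Vte_shift_V": last[2] - first[2],
    }

log = [(1, 8.2, 3.0), (1_000, 8.3, 3.4), (100_000, 8.4, 4.0), (1_000_000, 8.5, 4.4)]
print(endurance_summary(log))
```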
Benchmarking with Flash floating gate
To conclude this chapter we compare the results concerning the optimized silicon nanocrystal memory cell with the standard Flash floating gate, keeping the cell size constant. In figure 3.18 the data retention at 250°C is shown for each device. We have seen previously (figure 3.11) that the optimized cell can maintain the programmed memory state for 10 years up to 150°C. To satisfy our fixed data retention specification and to achieve the Flash floating gate results, the cell must keep a program threshold voltage greater than 5.75V at 250°C up to 168h. One can notice that the Si-nc cell is just at the limit of this target, and further efforts concerning the tunnel oxide optimization are required to reach the standard floating gate device. The main constraint is the fast initial charge loss due to the charge trapping in the tunnel oxide, in the ONO layer and at the interfaces (substrate/tunox, Si-nc/oxide). One way to improve the data retention of the optimized Si-nc cell is to increase the tunnel oxide thickness, taking into account the tradeoff with the programming window. Moreover, special tunnel oxide growth recipes can be developed, playing on the process time and temperature, the oxide nitridation and the preparation of the active surface for silicon nanocrystal nucleation. However, these options are left for future work. The endurance results are also compared keeping the program/erase conditions unchanged (CHE programming: Vg=9V, Vd=4.2V, tp=1µs and FN erase: Vg=-18V, ramp=5kV/s + te=1ms). We considered the optimized Si-nc cell with the larger programming window (Φ=12nm); the data are plotted in figure 3.19. As expected, using the same program/erase conditions the Flash floating gate presents a larger programming window at the beginning of cycling (ΔVt=7V), thanks to its superior coupling factor and higher programming efficiency. Its more significant threshold voltage degradation, however, leads to a major closure of the programming window after 1M cycles (ΔVt=2.8V), while the endurance characteristic remains more stable for the Si-nc cell. This is why it is important for the floating gate device to achieve a larger programming window. To quantify these results, we report in table 3.7 the programming window before and after cycling as well as the threshold voltage shifts.
Introduction
In this section we present the results concerning the current and energy consumption of the floating gate and silicon nanocrystal memory cells during the channel hot electron programming operation. The current consumption of a Flash floating gate memory cell is usually measured using a current/voltage converter or an indirect technique. In this way it is not possible to understand the dynamic cell behavior, nor to measure the cell performances in a NOR architecture for a programming pulse of a few microseconds. Moreover, the indirect method, which will be explained in this chapter, is not applicable to silicon nanocrystal memories. In this context we developed a new experimental setup in order to dynamically measure the current consumption during a channel hot electron programming operation. This method helps to understand the dynamic behavior of the two devices. The energy consumption is also evaluated using different bias and doping conditions. The aim was to characterize the impact of the different parameters on the floating gate cell consumption and to find the best tradeoff for the Si-nc cell.
Furthermore, the consumption due to the leakage of the unselected cells in the memory array is measured in order to complete this study. In conclusion, the consumption is optimized and compared for both devices, giving new solutions for low power applications.
Methods of Flash floating gate current consumption measurement
Today one of the most important challenges for Flash floating gate memory cells in view of low power applications is to minimize the current consumption during the Channel Hot Electron (CHE) programming operation. A specific consumption characterization technique is presented in the literature, but it requires a complex measurement setup, limited by circuit time constants [Esseni '99] [Esseni '00b] [Maure '09]. As an alternative, it is possible to calculate the cell consumption using a static drain current measurement on an equivalent transistor.
Standard current consumption measurement
In the literature it is shown how to measure the drain current of a Flash floating gate using a dedicated experimental setup. In [Esseni '99] and [Esseni '00b] the drain current is measured directly during the programming pulse through a current/voltage converter.
Indirect current consumption measurement
As an alternative to the direct measurement on the floating gate cell, it is possible to calculate the static current consumption during the programming operation starting from the drain current measurement on the equivalent transistor (called dummy cell), where the control gate and floating gate are shorted and the geometric dimensions (channel length and channel width) are kept unchanged. This technique enables the consumption to be calculated, regardless of the programming time, by using a commercial electrical parameter analyzer, while the current is considered to be constant during the programming. In order to explain this method we consider the following formula:
VFG = αG (VG - Vt) + αD VD + Vteq    (4.1)
where Vt is the threshold voltage of the floating gate cell during the programming operation and Vteq is the threshold voltage of the equivalent transistor. Defining the overdrive voltage as:
Vov = VG - Vt    (4.2)
we obtain:
VFG = αG Vov + αD VD + Vteq    (4.3)
The coupling factors have been calculated using the capacitance model shown in figure 4.3 and the cell dimensions. In this simple model the parasitic capacitances are also considered. For the standard Flash floating gate we calculated αG=0.67 and αD=0.07. In the case of αG it is possible to compare the theoretical result with the values measured using different experimental techniques [Wong '92] [Choi '94]. This requires the static measurement of electrical parameters in both a floating gate cell and a dummy cell. In figure 4.4 we report the box plot of αG calculated as the ratio between the subthreshold slope of the dummy cell and the subthreshold slope of the floating gate cell. The cell dimensions are W=90nm and L=180nm. One can notice the effect of the dispersion on wafer related to the process variations (tunnel oxide integrity, source and drain implantation, channel doping dose, geometrical effects, etc.) and to the coupling factor calculation (36 samples tested). Finally, the overdrive voltage is measured by monitoring the Vt evolution during a programming operation. When the cell is programmed by a ramp, the Vov remains constant, as does the CHE injection [Esseni '99]. In figure 4.6 the measured Vt, obtained by applying the 1.5V/µs ramp on the control gate with a drain voltage of 4.2V, are plotted. The ramped control gate voltage is emulated using a series of pulses, as explained in chapter 2.
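A minimal Python sketch of this indirect procedure, as reconstructed from equations (4.1)-(4.3), is given below. Only the coupling factors and the drain voltage are the values quoted in the text; the overdrive and the equivalent-transistor threshold are hypothetical placeholders.

```python
# Sketch of the indirect consumption method: estimate the floating gate potential
# during CHE programming from the coupling factors (eq. 4.3). The dummy cell would
# then be biased at this gate voltage and its static drain current read as the
# (assumed constant) cell consumption.

ALPHA_G, ALPHA_D = 0.67, 0.07      # coupling factors quoted in the text
V_D = 4.2                          # drain voltage during programming (V)

def floating_gate_potential(v_ov, vt_eq):
    """VFG = alphaG*Vov + alphaD*VD + Vteq (eq. 4.3); Vov and Vteq are measured quantities."""
    return ALPHA_G * v_ov + ALPHA_D * V_D + vt_eq

# Hypothetical measured values: constant overdrive during the ramp and dummy-cell threshold
v_fg = floating_gate_potential(v_ov=2.5, vt_eq=1.3)
print(f"estimated VFG during programming: {v_fg:.1f} V")
```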
In order to reduce the error due to the Vov calculation, the overdrive is calculated as follows:
Vov = VG - (Vtn + Vtn+1)/2    (4.4)
where Vtn and Vtn+1 are the threshold voltages measured after two consecutive pulses. At this point it is possible to calculate the floating gate potential reached by the cell after the channel hot electron programming operation; its value is VFG=3.3V. In order to measure the static cell consumption, this potential is applied on the gate of the dummy cell while maintaining Vd=4.2V. In figure 4.8 we report the drain current absorption (Id) under programming conditions measured on the equivalent transistor. During the experimental trials we noticed a degradation of the Id level for consecutive measurements on the same device. This is due to the high voltage applied between the gate and drain terminals, which degrades the tunnel oxide. The measurement has thus been performed using a sampling time as fast as possible (65µs), depending on the parameter analyzer speed. Using this indirect method it is possible to evaluate the floating gate cell consumption from static measurements on an equivalent transistor (dummy cell). This procedure introduces a significant error due to the spread of the dummy cell parameters and to the assumption that the drain current absorption remains constant during the programming phase. This means that the energy consumption is overestimated with respect to the real conditions. On the other hand, the direct measurement on the floating gate cell, using the IV converter described above (figure 1.2), shows relevant limits when the programming pulse is short (a few microseconds), due to the presence of parasitic capacitances in the measurement setup. That is why we were motivated to develop a new measurement technique.
New method of current consumption measurement
We discussed in paragraph 2.2 the complex measurement setup for dynamic drain current measurements. This method is affected by a high time constant and does not enable measurements over a very short time period. The alternative indirect method of absorbed Id extrapolation that we previously described is equally inaccurate and does not enable the energy consumption calculation. We propose a new measurement technique in order to measure the drain current during the CHE programming operation using pulses of a few microseconds. Our setup is shown in figure 4.9, where we use the Agilent B1500 equipped with two WGFMU (Waveform Generator and Fast Measurement Unit, Agilent B1530A) modules [Della Marca '11b] [Della Marca '11c]. In this way it is possible to set the sampling time to 10ns and to measure the current dynamically. Moreover, a power supply source can be connected to the FG through a low resistance switch matrix to complete the device biasing. One can notice that the drain current is not constant during the programming operation using a ramped gate pulse. The Id becomes constant when the equilibrium condition is reached [Esseni '99], and its quasi-static value decreases when the gate voltage remains constant. The importance of this characterization technique thus lies in the understanding of the cell behavior and in the energy consumption calculation.
Floating gate consumption characterization
In order to understand the cell behavior during the channel hot electron operation and to optimize its performances, we decided to use the dynamic measurement technique to evaluate the impact of the programming pulse shape, of the drain and bulk biases and of the technology (channel doping dose and lightly doped drain).
The study of the current and energy consumption during the programming operation is not limited to the single cell current absorption, but is extended to the bitline leakage current as well. In the memory array, the unselected cells connected to the same bitline as the selected cell contribute to the global consumption with their drain/bulk junction leakage [Della Marca '13]. The principle of the bitline leakage measurement will be explained in paragraph 4.3.2.
Cell consumption
Impact of programming pulse shape
We have seen before that in the literature the floating gate behavior is described for gate pulses shaped as boxes or ramps. We applied our dynamic method to measure the drain current and, consequently, the energy consumption and the programming window. The aim was to find the best tradeoff to improve the cell performances. In figure 4.11a the boxes applied on the control gate are shown; the ramp speed is 45V/µs and the drain voltage is constant at 4.2V. For all box pulses, the measured current peak is constant (figure 4.11b). When the gate voltage remains constant, the Id current quickly decreases following an exponential law. We have to plot the consumption data in arbitrary units (a.u.) to respect the STMicroelectronics data confidentiality. This means that it is possible to reach low energy consumption levels while programming in a very short time. After each programming operation, the threshold voltage is measured and the cell is erased back to the same starting level. In this way, we calculate the programming window (PW) as the difference between the programmed and erased threshold voltages. Then, the energy consumption (Ec) is calculated using the following formula:
Ec = ∫ from 0 to tp of Id(t)·Vd dt    (4.5)
where tp is the programming time. With the same method, we measured the drain current for different ramps applied on the control gate (figure 4.13). It is worth noting that by increasing the ramp speed the Id current peak increases. On the contrary, when the ramp is slower, the Id current is smoothed (no peak), but the programming time increases. As explained before, we also calculated the programming window and the energy consumption in this case; the results are plotted in figure 4.14. The programming window and the energy consumption both decrease as the ramp speed increases. It is possible to reach a very low Ec while maintaining a good PW level, although inferior to the minimum specification (figure 4.14). This specification is due to the sense amplifier sensitivity. This study enables the gate pulse shape to be chosen with respect to the best compromise between the cell performances (PW, Ec, Id peak). The final amplitude of Vg and the programming time duration are kept constant at 9V and 5µs respectively (charge pump design constraints). In order to optimize the cell performances, we decided to combine a 1.5V/µs ramp with a 1µs plateau, in order to avoid the Id peak while maintaining a satisfactory programming window. The results are summarized in Table 1, where the gain/loss percentages are normalized with respect to the case of single box programming. Using the new dynamic measurement method, we characterized the device with different ramp and box programming signals. This procedure enables the best programming pulse shape to be chosen with respect to the final embedded low power product application. We have shown one possible optimization with respect to the standard box pulse. The best tradeoff reduces the current consumption by 35%.
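Equation (4.5) is evaluated in practice from the sampled Id(t) waveform. A minimal Python sketch of this numerical integration (trapezoidal rule) is given below; the sampled current values are hypothetical placeholders.

```python
# Sketch: energy consumption of a programming pulse, Ec = integral of Id(t)*Vd dt (eq. 4.5),
# evaluated with the trapezoidal rule on the sampled drain current. Values are hypothetical.

def energy_consumption(time_s, id_a, vd_v):
    """Trapezoidal integration of Id(t)*Vd over the programming pulse."""
    energy = 0.0
    for i in range(1, len(time_s)):
        dt = time_s[i] - time_s[i - 1]
        energy += 0.5 * (id_a[i] + id_a[i - 1]) * vd_v * dt
    return energy

# Hypothetical 5 us pulse sampled every 1 us, with a current peak at the start of the ramp
t = [i * 1e-6 for i in range(6)]                  # 0 .. 5 us
i_d = [0.0, 120e-6, 80e-6, 60e-6, 50e-6, 45e-6]   # drain current samples (A)
print(f"Ec = {energy_consumption(t, i_d, 4.2) * 1e9:.1f} nJ")
```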
However this decreases the programming window by 11% and increases the energy consumption by 10%, if the programming duration is kept constant (5µs). Another improvement, concerning floating gate cell performances, can be obtained using the appropriate drain and bulk biases. This study will be presented in the next paragraph. Impact of drain and bulk biases Using the optimized pulse (ramp + plateau), we decided to study the dynamic cell behavior for several drain (Vd) and bulk (Vb) voltages. by increasing again. The current variation in this region is attributed to the effects of the high fields applied between gate and substrate, as well as drain and source, and the channel modulation. These effects induce the hot carrier generation, thus increasing the drain current. In this case the channel is completely formed and pinched closely to the drain; the position of the pinch-off point is modulated by the drain voltage [Benfdila '04] [Moon '91] [Wang '79]. In figure 4.17d, it is worth noting that we found an optimal value of drain current between the low and high injection zones (Vd=3.8V). After each programming pulse, we measured the We repeated the experiment with the reverse body bias to benefit from the CHISEL effect [Takeda '83] [Driussi '04] [Esseni '00a]. The results are reported in figure 4.19. By increasing the amplitude of bulk bias the injection efficiency is increased, reaching a bigger programming window. In the meantime, the energy consumption of drain charge pump decreases due to the current reduction, allowing relaxed design constraints. Here we only considered this contribution, but by adding the substrate biasing, a bulk current is present. This current impacts the size of bulk charge pumps. Impact of channel doping dose After this study on programming pulses, where we understood the significant role of surface channel potential, we decided to modify an important technological parameter which can impact the cell consumption: the Channel Doping Dose (CDD). We tested the device described above using the optimized gate pulse (ramp + plateau) presented in Bitline leakage Another interesting point to evaluate in a NOR-architecture memory array is the BitLine Leakage (BLL), due to the Gate Induced Drain Leakage (GIDL) current in Band-to-Band Tunneling (BBT) regime during the CHE programming operation [Rideau '04]. As highlighted in previous studies [Mii '92] [Orlowski '89] [Touhami '01], several technological parameters, such as cell LDD doping (dose, tilt, and energy), drain-gate overlap, or STI shape have an impact on electric fields in the drain-bulk junction, responsible for GIDL. In this section, the impact of arsenic LDD implantation energy on bitline leakage measurements is presented through electrical characterizations, performed on a dummy cell structure. Impact of lightly doped drain implantation energy It is traditionally known that LDD process is used in classic MOSFETs to reduce lateral electric field by forming a gradual drain junction doping near the channel and as a consequence, to decrease the hot electron effect at the drain side [Nair '04] [Yimao '11]. In TCAD simulations of LDD implantation To understand the effect of LDD concerning the BLL mechanism, we performed TCAD investigation using Synopsys commercial tool for both process and electrical simulations. The process simulator parameters, i.e. doping diffusion and segregation coefficients, have been fine-tuned in order to obtain electrical results in accordance with experimental data. 
For electrical simulations, the hydrodynamic transport model has been adopted. Figure 4.23a shows the 2D cell net active doping profiles of LDD and drain-bulk junction, for several implantation energies. In figure 4.23b, the net doping profile along a horizontal cut below the Si/SiO2 interface is reported and five regions are identified, from left to right: source/LDD/channel/LDD and drain. We observe that the net doping level at the channel-LDD region (also corresponding to the gate-drain overlap region) decreases with implantation energy increase. It has been shown that doping and surface potential gradients have an impact on GIDL through the lateral electric field [Parke '92] [Rideau '10]. In the present case, a less abrupt net doping profile in the channel-LDD region for the highest implantation energy (figure 4.23b) leads to a lower lateral electric field and a smaller leakage current. respectively. It can be noticed that if, on the one hand, no significant variation is seen on the vertical electric field peak (figure 4.24c) and on the other hand, the lateral electric field peak decreases as LDD implantation energy increases (figure 4.24b). As previously mentioned, the reduction of the lateral electric field, and thus of the global electric field, decreases the leakage current of the drain-bulk junction, due to Band-to-Band Tunneling. Although the cell LDD implantation energy increase could help decrease the bitline leakage, we also have to take into account its impact on cell performances during the programming operation. In what follows, we will focus on the impact of implant energy on the write efficiency. Programming is performed on a standard cell using CHE injection. We bias the control gate and the drain with 9V and 3.8V box pulses respectively, and the bulk with -0.5V. In figure 4.25a the programming window and the bitline leakage are plotted versus the LDD implantation energy. This graph highlights the fact that the programming window is impacted by LDD energy and decreases as the energy increases, due to the reduction of the lateral electric field contribution. A satisfactory trade-off can be found reaching a gain of 49% in terms of BLL reduction, losing only 6% of PW and increasing by +10keV the standard LDD implantation energy. Further improvements can be made, with a +20keV increase, gaining 70% of BLL reduction against less than 10% loss on PW. This study has been performed by keeping the channel doping dose unchanged. In order to find the best tradeoff, it is important to take into account that when the CDD is increased, the BLL is increased too, because the lateral electric field is enhanced (figure 4.25b). In conclusion we found a programming scheme optimization for the floating gate cell using the new dynamic method measurement for the drain current consumption. The study enables the best tradeoff to be found depending on cell application in terms of dynamic consumption and programming window. In addition we considered the impact of drain and bulk biases highlighting the optimum point of work for our technology using Vd=3.8V and Vb=-0.5V. Finally the impact of channel doping dose and lightly doped drain implantation energy have been studied to improve the consumption due to the unselected cells of bitline. Silicon nanocrystal cell consumption characterization The study of floating gate cell consumption helped understand the main parameters that can impact this aspect during the channel hot electron programming operation. 
We have seen the importance of new setup development for dynamic current consumption. Moreover, this method becomes compulsory because the discrete nature of nanocrystals does not enable a dummy cell to be designed that is useful for static measurement and thus the drain current consumption extrapolation. Here we present the impact of programming scheme and tunnel oxide thickness on s hybrid silicon nanocrystal cell (Si-nc+SiN). Finally we will show the consumption results on optimized Si-nc cell compared with those on the standard floating gate. Impact of programming pulse shape After the study on the floating gate cell, we used the same setup for dynamic current measurements to evaluate also the hybrid Si-nc cell with a 5.2nm tunnel oxide thickness. This device has been chosen for its higher programming window due to the SiN presence that increases the charge trap probability (see chapters 2 and 3). The programming window and consumption are evaluated by using box and ramp pulses and also considering the optimization used in the case of floating gate (ramp+plateau). In figure 4.26 we show the applied gate voltage pulses while the drain voltage is constant (Vd=4.2V); the source and bulk terminals are grounded (Vs=Vb=0V). The programming pulse varies from a box to a ramp of 1.2V/µs; between these two conditions each ramp is followed by a plateau of different duration to improve the programming window, while the programming time is kept constant (tp=5µs). We program the cell always by starting from the same threshold voltage, in order to measure the drain current and the programming window after each pulse. Using the dynamic measurement setup and considering the Id behavior, we can observe that the current follows the gate potential variation over the time. The cell behavior is very different with respect to the floating gate cell results observed above. These results, obtained for the Si-nc cell, suggest a transistor-like behavior during the programming operation. In particular we can notice that the peak is not present when the box pulse is applied, but the current remains constant during the programming phase. The design of memory array control circuits have to take into account this aspect, but no overshoot is possible. In figure 4.27 we reported a simple schematic of silicon nanocrystal cell in order to explain its behavior. During the CHE injection the charges are trapped only in the silicon nanocrystals and SiN capping layer close to the drain area. In this zone the horizontal electric field is stronger and the electrons are accelerated so as to be injected in trapping layers [Tam '84] [Takeda '85] [Ning '78] [Chenming '85]. During this operation a high potential is applied on drain terminal (Vd=4.2V). Then the Space Charge Region (SCR) on drain-body junction increases its area, With these measurements and using the formula (4.5) we can compare the results obtained for Si-nc cell and the results reported in figure 4.16 for the Flash floating gate. The aim is to evaluate the relation between the shape of the gate pulse, the energy and the programming window of the two devices (figure 4.28). Concerning the programming window, the hybrid Si-nc and the floating gate cells have the same behavior. One can notice that by increasing the plateau duration (or the ramp speed) the programming window increases too, and the difference between the two devices remains constant (10%). This difference is due to the higher coupling factor of Flash floating gate. 
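The contrast between the transistor-like, constant drain current of the Si-nc+SiN cell and the decaying current of the floating gate translates directly into how the energy scales with the plateau duration, as the following paragraph discusses with the measured data. As a rough numerical illustration, the sketch below uses hypothetical current levels, time constants and ramp duration, not measurements:

import numpy as np

# Rough sketch of Eq. (4.5), Ec = integral of Id(t)*Vd dt, for two idealized
# current shapes: constant (Si-nc+SiN-like) and decaying (floating-gate-like).
# Current levels, time constants and the ramp duration are hypothetical.

VD = 4.2                 # drain bias (V)
RAMP = 1e-6              # gate ramp portion before the plateau (s)
DT = 1e-9                # integration step (s)

def energy(plateau_s, decaying):
    t = np.arange(0.0, RAMP + plateau_s, DT)
    if decaying:
        # Current peak that relaxes as charge builds up in the floating gate.
        i_d = 20e-6 + 80e-6 * np.exp(-np.maximum(t - RAMP, 0.0) / 0.5e-6)
    else:
        # Current follows the gate potential, then stays flat on the plateau.
        i_d = np.where(t < RAMP, 100e-6 * t / RAMP, 100e-6)
    return np.trapz(i_d * VD, t)

for plateau in (1e-6, 2e-6, 4e-6):
    print(f"plateau {plateau * 1e6:.0f} us: "
          f"FG-like {energy(plateau, True) * 1e9:.2f} nJ, "
          f"constant-Id {energy(plateau, False) * 1e9:.2f} nJ")
# The constant-current energy grows linearly with the plateau duration,
# whereas the decaying-current energy saturates once the peak has relaxed.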
Considering the Id consumption of Si-nc+SiN cell (figure 4.26b), it is easy to understand the linear dependence of energy consumption on plateau duration, while Ec slightly decreases for the Flash floating gate. For each pulse it is evident that the hybrid Si-nc cell reaches a smaller programming window consuming more energy; in particular to achieve a good programming window (greater than 80%), the Si-nc+SiN cell can use up to 50% more energy. It is necessary to use box pulses to increase the programming window, but also to decrease the The aim is to find the best programming pulse shape to improve the cell performances, as we did for the flash floating gate cell. In figure 4.30 we notice that the minimum programming window level is reached using 2µs box pulses. This enables the energy under the floating gate consumption to be decreased by using a 5µs box pulses. We can now establish that for the Si-nc+SiN cell, the best performances are obtained using box pulses of short duration. Unlike the floating gate cell, where an optimized pulse was defined by merging a ramp followed by a plateau in order to avoid current peak during the programming, for the hybrid Si-nc cell, the current level is constant. This leads to less design constraints and less disturb in logic and analog circuits around the memory array. To complete the study we report the experimental data of dynamic drain current absorption, of the programming window and the energy consumption in figures 4.31 and 4.32. We demonstrated that the drain current follows the gate potential, ( see also the figures 4.26 and 4.29), confirming the linear dependence of energy consumption on programming time. In figure 4.32 we compared the experimental results obtained for the hybrid silicon nanocrystal cell when it is programmed by box or by ramp. When the programming time increases, the consumed energy as well as the programming window are increased. In this case the ramp speed is varied as a function of programming time (tp); so the programming window decreases when the ramp speed is increased because tp decreases too. The difference between ramp and box pulses is constant and independent of programming time: it is 40% for programming window and 50% for the energy consumption. The energy increasing is linear but the programming window tends to saturate when increasing the programming time. This means that in order to reach a satisfactory programming window level, a long programming time is necessary. Today, for electronic low power applications, speed is a fundamental parameter in order to be competitive on market. This is why we consider the box programming scheme to be the best solution for the silicon nanocrystal cell. TCAD simulations of current consumption In order to confirm the experimental results on silicon nanocrystal and floating gate cells, the behavior of these two devices is simulated using a commercial TCAD simulator. We chose a two-dimensional (2D) approach in order to evaluate the process impact, improve the program stability and reduce the simulation time. We produced process simulations using the We used the same calibrated hydrodynamic set model for both devices, except for the Channel Hot Electron injection model. The Spherical Harmonic Expansion (SHE) model presented in [Seonghoon '09] was chosen for FG cell programming simulations, while the Fiegna model [Fiegna '93] is used for Si-nc+SiN cell. 
During the simulations we noticed that the SHE model can reproduce dynamic current simulations of each device, but not the programming window, in the case of Si-nc+SiN cell, even if the adjustment parameters are considered. Hence we decided to use in this case the Fiegna model that offers the best compromise to simulate the Si-nc+SiN channel hot electron programming operation. Figure 4.34 shows concordance between TCAD simulations and dynamic drain current measurements. In figure 4.34 the results of dynamic current measurements, obtained for the floating gate and the Si-nc+SiN cells, are shown in the case of programming box and ramp pulses. These are also compared with the simulations of our TCAD model previously presented. For each case there is a satisfactory quantitative concordance. As described above we were able to simulate the programming window level after each pulse and to calculate the consumed energy. In figure 4.35 is reported the case of the Si-nc+SiN cell, by using the box pulses with different durations. The experimental data used to fit the simulations are the same as those plotted in figure 4.30. One can notice that the concordance between data and simulation predicts the cell behavior by varying the voltage bias and pulse shape. In this case we used the simulations to confirm our explanation concerning the cell functioning. Thus the charge localization maintains the absorbed drain current constant. This leads to an increase of the cell consumption and to suppress the current peak during the channel hot electron operation. Hybrid silicon nanocrystal cell programming scheme optimization Previously, we described the floating gate and hybrid silicon nanocrystal cell behavior using the experimental data obtained with the dynamic current measurements and TCAD simulations. The Si-nc+SiN cell does not present a drain current peak during the channel hot electron programming, but the consumption is higher than standard Flash floating gate. In order to reduce the consumed energy we decided to use short programming pulses by maintaining a satisfactory programming window level. In order to summarize all results, we plot in figure 4.36 the value of energy consumption calculated for the F.G. and Si-nc+SiN cells, while the programming window is kept constant (PW=4V). To reduce the design constraints, it is possible to optimize the programming operation of Si-nc+SiN cell using a box pulse. To reach 4V of programming window the following conditions are used:  Impact of gate and drain biases In the preceding paragraph we studied the effect of programming pulse shape of Si-nc+SiN cell, keeping the biasing conditions constant. In order to optimize the cell consumption we further investigate the effect of gate and drain biases. In figure 4.37 the programming window and the energy are plotted keeping the programming time constant (tp=1µs); both are calculated as explained above. It is confirmed that the box pulse is more efficient in terms of programming window than the ramp, at the expense of higher energy consumption. Moreover the impact of drain and gate voltages is shown. The increase in drain voltage by 0.4V leads to the same gain in terms of memory window as when increasing Vg by 1.2V. The horizontal electric field becomes predominant in channel hot electron operation, a small variation of Vd thus implies a relevant increase of the programming window. 
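The bias dependences just described, a programming window that tends to saturate with the drain voltage while the energy grows steeply, are what make a figure of merit such as the programming window per unit energy (formally introduced in the following paragraphs) useful for picking the operating point. The toy sketch below illustrates the selection procedure only; the trend shapes and numbers are assumptions, not the measured characteristics.

import numpy as np

# Toy illustration of selecting a drain bias by maximizing PW/Ec.
# The saturating PW(Vd) and exponential Ec(Vd) trends are assumed shapes,
# loosely inspired by the behavior described in the text, not fitted data.

vd = np.arange(3.8, 4.81, 0.2)                     # candidate drain biases (V)
pw = 4.5 * (1.0 - np.exp(-(vd - 3.65) / 0.5))      # programming window (V)
ec = 0.2e-9 * np.exp(vd - 3.4)                     # energy per pulse (J)

pe = pw / ec                                       # efficiency figure of merit
best = vd[np.argmax(pe)]
print(f"toy-model optimum drain bias: {best:.1f} V")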
Comparing the ramp and box pulses, when gate voltage is varied (figure 4.37 a and b), we notice that the energy consumption is twice as big as when the box pulses are used independently of the programming time. In addition the difference of programming windows, calculated for both cases, increases with gate voltage which means that the box pulse achieves the best results regardless of Vg. This increases the vertical electric field during the CHE operation, hence the charge injection probability is increased. In figure 4.37 c) and d) the impact of drain voltage on programming window and energy consumption is shown. We confirm the best results in terms of PW are reached using the box pulse and we notice the exponential increase of energy for the higher amounts of Vd. To better understand the cell performances as a function of programming time and biases, the programming energy is plotted as a function of the programming window (figure 4.38). The goal is to increase the programming window by keeping constant the energy consumption using the box pulse and optimizing the biasing conditions. The abrupt variation of gate voltage (high ramp speed) during the programming operation starts the hot carrier generation at the beginning of the programming phase. The hot electron injection starts when Vg≈Vd [Takeda '83] [Takeda '85]. Thus, by using a ramp, the programming efficiency is lower and a longer time is needed to program the cell correctly. In figure 4.38 we notice that the programming window tends to saturate when the programming time is increased, leading to higher consumption. This is due to the quantity of injected charges that decreases the vertical electric field during the programming operation. We demonstrated once again that the box pulse increases the programming window by keeping the programming time and the biasing condition constant. Considering these results we defined the programming efficiency (PE) as the ratio between the programming window (PW) and the energy consumption (Ec). One can notice the linear dependence on the gate voltage (or vertical electric field). On the other hand, an optimized efficiency is measured for Vd=4.2V. This is due to the exponential behavior of energy consumption versus drain voltage. When the drain voltage is higher than 4.2V, the programming injection tends to saturate while the drain current increases which increases the consumption and reduces the programming efficiency [Della Marca '12] [Masoero '12]. We can now affirm that to optimize the programming operation, it is necessary to use the box programming pulse with the higher gate voltage and Vd=4.2V which, in this case, represents the point of higher programming efficiency. Using the Impact of tunnel oxide thickness After the study of programming scheme to improve the consumption of the silicon nanocrystal cell, in this section we show the impact of tunnel oxide thickness (tunox) on the programming operation (programming window and energy consumption) and data retention. First we performed the programming kinetic experiments using cumulative box pulses (duration of 0.5µs) for two tunnel oxide thicknesses (figure reported the values of vertical electric field calculated during the first 0.5µs, considering zero charges stored in silicon nanocrystals for two different tunnel oxide thicknesses. 
We can notice that the difference of 0.2 MV/cm is small compared to the maximum Evert=4.2MV/cm, thus the 1nm variation of tunox becomes negligible for the channel hot electron operation because the horizontal electric field is dominant. In figure 4.42 the energy consumption is plotted as a function of the programming window using the optimized programming conditions identified before (box pulse, Vg=10V and Vd=4.2V). The XY scale axes are the same as for figure 4.38. Moreover we highlighted the levels of minimum programming window, acceptable for good cell functionality, and the level of sub-nanojoule energy consumption. In figure 4.42, once again the tunnel oxide thickness has a limited influence on the consumed energy and the programming window during the CHE programming. In particular to produce the minimum programming window, a programming pulse of 1μs is sufficient for the two tunnel oxide thicknesses with a consumption energy lower than 1nJ. We show in figure 4.43 the programming efficiency calculated for two different tunnel oxides varying the box pulse duration, in order to conclude the study on tunnel oxide thickness and to demonstrate that this technological parameter has a limited impact on programming efficiency. The efficiency is higher using short pulses and is independent of the tunnel oxide thickness; which is consistent with the results previously presented. In this section we studied the impact of programming scheme on energy consumption for silicon nanocrystal memory cell. We propose an optimization concerning the programming pulse shape demonstrating the greatest efficiency of box pulse versus the ramp. The linear dependence on gate voltage is shown, while an optimum point of work is found for Vd=4.2V. The consumption has been reduced down to 1nJ generating a satisfactory programming window. Moreover the best trade-off to improve the cell efficiency is found by using very fast pulses regardless of tunnel oxide thickness. Optimized cell consumption In previous sections we showed the characterization results of floating gate and hybrid silicon nanocrystal cell consumption; we related it with the programming window in order to find an optimized programming scheme. The best compromise for floating gate cell was to use a programming ramp followed by a plateau in order to reduce the drain current peak and to maintain a satisfactory programming window. In the case of the Si-nc+SiN cell we found the best compromise using very short box pulse on gate terminal. Here we report the results found using the optimized silicon nanocrystal cell with 4.2nm tunnel oxide, 12nm silicon nanocrystal size and the thinner ONO (EOT=10.5nm). Using box pulses of different durations, we characterized the drain current absorption; in figure 4.44 we reported the results using Vg=10V and Vd=4.2V, the Y scale axe is the same as in figure 4.29 so as to compare the optimized cell with Si-nc+SiN cell. We notice that the drain current does not follow the gate potential, but in this case it decreases during the programming time, showing a similar behavior to that of the floating gate cell, where a current peak is present. We explained previously that the different behavior between Si-nc+SiN and floating gate cell is justified by the localization or not of trapped charges. In the case of Si-nc cell we can affirm that the localization effect due to the SiN layer is not present. 
Moreover the bigger size of nanocrystals produces a charge distribution toward the center of device channel, modifying the potential of substrate surface. In figure 4.45 we reported the cell schematics and the relative drain current measurements in order to compare the behavior of Si-nc+SiN, optimized Si-nc and floating gate cells. with different electric potentials [Matsuoka '92]. Moreover the nanocrystal coalescence during the growth process can reduce the distances up to the contact creating percolative paths from the drain side to the source side. Taking into account the dynamic behavior of the Si-nc cell and the fact that the box is more efficient with respect to the ramp pulse, in particular for short pulse duration, we repeated the measurements by varying drain and gate biases and we compared these results with the hybrid Si-nc cell. In figure 4.46 we reported the programming window and the energy consumption as a function of Vg (a-b) and Vd (c-d) using 1µs box programming pulses, of Si-nc and Si-nc+SiN cells. We notice that the optimized silicon nanocrystal cell presents a better programming window due to the higher covering area (coupling factor) and lower energy consumption due to the lower drain current. The dependence on gate voltage (vertical electric field) is linear as in the case of the Si-nc+SiN cell. With high drain voltage the programming window starts to saturate for Vd=3.8V, while the consumed energy exponentially increases. These results directly impact the cell efficiency. In figure 4.47a we can notice that for the optimized Si-nc cell the programming efficiency decreases in spite of the Si-nc+SiN cell trend. Benchmarking with Flash floating gate To conclude the report on cell energy consumption, in this paragraph we compare the main results based on the previous study of the silicon nanocrystal cell with Flash floating gate. We have shown before the cell behavior under different biasing conditions and the impact of some technological parameters (tunnel oxide thickness, channel doping dose, ONO thickness). In order to compare these two different devices, we plotted in figure 4.49 the programming window and the consumed energy using box pulses of different duration. The devices were tested using the proper drain voltage optimization shown previously: Vd=4.2V for the Si-nc+SiN; Vd=3.8V for F.G. and Si-nc cells; the gate voltage is kept unchanged (Vg=9V). The optimized Si-nc cell is able to reach a programmed level comparable with that of the floating gate cell, but the consumed energy remains higher because of the higher drain current; the worst case is represented by the Si-nc+SiN cell. Using the experimental data we extrapolated the power laws describing cell behavior and we show that by using very fast pulses the Si-nc cell consumption can be improved but it does not go down to the floating gate level. The results of ramped gate voltage (figure 4.50) show that the Si-nc optimized cell is completely comparable with the floating gate in terms of programming window and energy consumption. Even if the maximum drain current is grater, in the case of Si-nc, the different dynamic behavior enables to reach very similar performance. The dynamic measurements verify the presence of current peak in the case of the floating gate and we explained above that it can cause disturbs in analog and digital circuits around the memory array. 
In the case of the optimized Si-nc cell, the current decrease during the programming operation is not abrupt and it can be tolerated depending on the design constraints. To conclude the chapter, we show in figure 4.51 the performance of the Si-nc, hybrid Si-nc and floating gate cells obtained using the optimized drain voltages and keeping the gate voltage unchanged (Vg=9V). The programming time is fixed at 2µs in order to propose a benchmark for low energy and fast applications. We notice that when the box pulse is applied the floating gate consumption is 50% less than for the optimized Si-nc cell, whereas the programming window can be considered equivalent. However, when the programming ramp is used, the silicon nanocrystal cell shows the best consumption level. The optimized silicon nanocrystal cell can be considered a good alternative to the Flash floating gate in terms of programming speed and energy consumption while maintaining a satisfactory programming window, but further efforts are necessary to surpass the Flash floating gate memory cell.

General conclusion

In the first chapter the economic context, the evolution and the classification of semiconductor memories were presented. Then the Flash memory operations needed to understand this thesis were reviewed. We then presented the Flash memory scaling limits and the proposed solutions. We explained the advantages of using a discrete charge trapping layer instead of the continuous floating gate, and the importance of the control dielectric with respect to the classical silicon oxide. Finally, we introduced the silicon nanocrystal memory solution. In particular, we reported the state of the art of the charge trap silicon nanocrystal cell, which is the object of this thesis. This option reduces the number of masks in process fabrication and scales the memory stack thickness, hence decreasing the operating voltages. Moreover, the silicon nanocrystal cell can produce satisfactory data retention results using a thin tunnel oxide. The second chapter describes the experimental setup used to characterize the programming window of the silicon nanocrystal memory with automatic and manual tests. In chapter three we have shown the results of reliability experiments performed on the silicon nanocrystal memory. The variation of technological parameters has also been evaluated. In particular, the presence of a silicon nitride capping layer on the nanocrystals increases the charge trapping probability and the cell covered area. In our case the nanocrystals were not entirely surrounded by the Si3N4 layer, but were grown on the SiO2 tunnel oxide and afterwards capped by Si3N4. We demonstrated that the Si3N4 presence brings no benefit in this configuration concerning the data retention, because the physical tunnel barrier that separates the nanocrystals from the substrate corresponds to the tunnel oxide thickness only. Furthermore, the Si3N4 capping layer enables parasitic charge trapping at the tunox/Si3N4 interface, which facilitates the charge loss at high temperature. Concerning the cell endurance with the silicon nitride capping layer, the coupling factor is increased, and thus the programming window increases too, but the parasitic charge trapping in the Si3N4 does not allow satisfactory cell functionality after 100k program/erase cycles. Another important point is the dependence of the charge loss on the tunnel oxide thickness.
We extrapolated the activation energies related to the different samples. Obviously the charge loss increases when the tunnel oxide thickness decreases. We noticed that a 5.2nm tunnel oxide is needed to achieve the data retention specification for temperatures up to 150°C. On the other hand, the tunnel oxide thickness strongly impacts the erase Fowler-Nordheim operation, hence to obtain satisfactory cell functioning after 100k program/erase cycles, a tunnel oxide thickness of 3.7nm has to be used. Finally we evaluated the cell behavior using a 4.2nm tunnel oxide, embedded in a different architecture without the silicon nitride capping layer and with an optimized ONO stack (EOT=10.5nm) in order to increase the vertical electric field which improves the program/erase efficiency and the cell reliability. We demonstrated for the first time the silicon nanocrystal cell functioning up to 1M program/erase cycles by maintaining a 4V programming window in a wide range of temperatures from -40°C to 150°C. Moreover, by avoiding the SiN capping, the data retention is also improved for cycled samples. The silicon nanocrystal covered area and the channel doping dose increase improve the programming window, hence the endurance performance. Furthermore we have shown that these two technological parameters do not impact the data retention results. Finally the Si-nc cell was compared with the floating gate. The endurance experiments have shown better behavior for the Si-nc cell up to 1M program/erase cycles, while the charge loss is higher at 250°C due to the thinner tunnel oxide. We presented a new dynamic technique of drain current measurement in chapter four. This innovative method is presented for the first time in literature and can be used for different cell architectures (floating gate, silicon nanocrystal and split gate). We characterized the consumption of floating gate and silicon nanocrystal cell under various bias conditions and programming schemes. We compared in particular the ramp and box pulses on gate terminal during a channel hot electron programming operation. In this way we optimized the programming pulses for the two devices in order to minimize the energy consumption and the drain current peak. For the Flash floating gate we propose to use a ramped gate followed by a plateau, while a box pulse can be used in the case of the silicon nanocrystal memory cell. Using TCAD simulation we explained the transistor-like behavior of the silicon nanocrystal cell when the SiN capping layer is used. This is due to the discrete nature of the charge trapping layer and thus the localization of trapped charge. The optimized silicon nanocrystal cell was also characterized showing an intermediate behavior between the floating gate and the hybrid Si-nc cell. This decreases the cell consumption by increasing the programming window and thus the cell programming efficiency. Using the optimized gate and drain biasing we demonstrated that it is possible to obtain a 4V programming window with sub-nanojoule energy consumption. Finally we compared the optimized silicon nanocrystal cell with the Flash floating gate. The programming time has been fixed to 2µs in order to propose benchmarking in the case of low energy and fast application. We notice that when the box pulse is applied the floating gate consumption is 50% less than with the Si-nc optimized cell, except for short pulses when the performances of the two devices become more similar. 
In spite of this, using the ramp programming, the silicon nanocrystal cell has the best consumption level. The optimized silicon nanocrystal cell can be considered a good alternative to the Flash floating gate in terms of programming speed and energy consumption while keeping a satisfactory programming window.

Perspectives

In this thesis we focused in particular on the silicon nanocrystal cell current consumption during the channel hot electron programming operation. Hereafter we propose two interesting points to study in future work.

Dependence of current consumption on silicon nanocrystal size and density

We explained the cell behavior when the SiN capping layer is used. Using large nanocrystals it is possible to activate a mechanism of charge diffusion in the charge trapping layer. This explains a behavior similar to that of the floating gate device, where the charge is distributed over the channel area. Preliminary experimental results lead us to say that it is possible to control the drain current consumption by controlling the charge diffusion mechanism. We conclude that the charges in the silicon nanocrystals positioned on the drain side (HCI zone) control the programming window, while the charges stored in the Si-ncs close to the source control the consumed current. The charge diffusion changes the cell behavior, emulating a double-transistor operation where the transistor on the source side acts as an access transistor with a threshold voltage that depends on the quantity of trapped charges. To increase the ability to control the channel current, one way is to increase the charge diffusion toward the source side. In figure 5.1 we show a schematic of the silicon nanocrystal cell where the mechanism of charge diffusion is enabled. Using an asymmetrical tunnel oxide thickness it is possible to improve the hot carrier injection by increasing the vertical electric field when the channel is pinched off. Moreover, the consumed current is controlled by the tunnel oxide thickness in the source region. In this way the programming efficiency can be improved. This cell offers the current consumption advantages of a split gate cell in a small area. The real current consumption can be evaluated with our dynamic measurement method. As a drawback, the high electric field generated in the tunnel oxide zone where the thickness is varied can stress the device, thus limiting the endurance performance.

Summary

Finally, we will demonstrate that it is possible to reach an energy consumption below 1nJ while preserving a 4V programming window.

Presentation of the thesis

The manuscript will end with a general conclusion summarizing the different results obtained in this thesis work, before proposing some perspectives.

The semiconductor memory market

Over the last ten years, the semiconductor memory market has grown strongly, thanks to the huge quantity of products such as smartphones and other portable devices. The advantages of using this technology (the silicon nanocrystal memory) are:
- Robustness against SILC and RILC: this makes it possible to reduce the tunnel oxide thickness below 5nm while still meeting the ten-year data retention constraint. Moreover, the write and erase operation voltages can also be reduced.
- Compatibility with the standard CMOS fabrication process, encouraging industrial production by reducing the number of masks used compared with the fabrication of the floating gate device.
- Reduction of the memory cell disturb effects: thanks to the discrete nature of the nanocrystals and to their small size, the coupling factor between the gate and the drain is reduced, as is the interference between neighboring cells.
- Multi-level application: the threshold voltage of a silicon nanocrystal transistor depends on the position of the charge stored along the channel.

Despite these features, two important drawbacks characterize the Si-nc memory:
- The low coupling factor between the control gate and the nanocrystals.
- The dispersion of the surface covered by the nanocrystals, which limits this type of cell for high-density integration applications.

Important studies have been carried out by STMicroelectronics, Atmel and Freescale, which demonstrated the possibility of obtaining interesting results in terms of a satisfactory programming window, high reliability and integration within specific memory architectures (Split Gate).

Electrical characterization of the nanocrystal memory cell

In this chapter, we evaluate the impact of the main technological parameters on the programming window of the nanocrystal memory cell produced at STMicroelectronics. The goal of this analysis was to define the best way to improve the programming window while still using the standard program and erase pulses of the Flash floating gate memory cell. We then summarize the main conclusions obtained from the electrical characterization studies of the cell.
- We showed that it is possible to increase the programming window by increasing the channel doping dose, while always taking into account the shift of the threshold voltages. The channel doping dose was increased up to 10^14 at/cm².
- The data retention remains unchanged when the channel doping dose (CDD) is modified. It is therefore possible to obtain a gain in programming window by increasing the channel doping dose, but in this case a regulation of the programmed/erased levels must be performed. CDD=10^14 at/cm² was chosen for the optimized silicon nanocrystal memory cell.
- Finally, we showed the dependence of the charge loss on the tunnel oxide thickness.

Cell consumption during a channel hot electron programming operation

In this section we present the results concerning the current and energy consumption.

General conclusion

In this thesis work we characterized and modeled silicon nanocrystal memory cells. Following a detailed study of the recent use of nanocrystals in memory devices, we optimized the memory stack. We then characterized the programming window by varying the main technological parameters.
Characterization and Modeling of Advanced Charge Trapping Non Volatile Memories

Silicon nanocrystal memories are one of the most attractive solutions to replace the Flash floating gate for embedded non-volatile memory applications, especially for their high compatibility with the CMOS process and their lower manufacturing cost. Moreover, the nanocrystal size guarantees a weak device-to-device coupling in an array configuration and, in addition, the robustness of this technology against SILC has been shown. One of the main challenges for embedded memories in portable and contactless applications is to improve the energy consumption in order to relax the design constraints. Today the application requirement is to use Flash memories with both low voltage biases and fast programming operations. In this study, we present the state of the art of the Flash floating gate memory cell and of silicon nanocrystal memories. Concerning the latter device, we studied the effect of the main technological parameters in order to optimize the cell performance. The aim was to achieve a satisfactory programming window for low energy applications. Furthermore, the silicon nanocrystal cell reliability has been investigated. We present for the first time a silicon nanocrystal memory cell that remains functional after one million write/erase cycles over a wide temperature range [-40°C; 150°C]. Moreover, ten-year data retention at 150°C is extrapolated. Finally, the analysis of the current and energy consumption during the programming operation shows the suitability of the silicon nanocrystal cell for low power applications. All the experimental data have been compared with the results achieved on the Flash floating gate memory, to show the performance improvement.
Key words: Silicon nanocrystal memories; floating gate; energy consumption; programming window; reliability; temperature Chapter 1 - 1 Flash memories: an overview 1.1 Introduction ........................................................................................................................ 1.2 The industry of semiconductor memories .......................................................................... 1.2.1 The market of non-volatile memories ......................................................................... 1.2.2 Memory classification ................................................................................................. 1.2.3 Flash memory architectures ........................................................................................ 1.3 Floating gate cell ................................................................................................................ 1.3.1 Basic structure: capacitive model ................................................................................ 1.3.2 Programming mechanisms .......................................................................................... 1.3.3 Erase mechanisms ....................................................................................................... 1.3.4 Evolution and limits of Flash memories ..................................................................... 1.3.4.1 Device scaling....................................................................................................... 1.3.5 Alternative solutions ................................................................................................... 1.3.5.1 Tunnel dielectric ................................................................................................... 1.3.5.2 Interpoly material ................................................................................................. 1.3.5.3 Control Gate ......................................................................................................... 1.3.5.4 Trapping layer....................................................................................................... 1.4 Silicon nanocrystal memory: state of the art ...................................................................... 1.5 Flash technology for embedded applications ..................................................................... 1.6 Innovative solutions for non volatile memory ................................................................... 1.6.1 Ferroelectric Random Access Memory (FeRAM) ...................................................... 1.6.2 Magnetic Random Access Memory (MRAM) ............................................................ 1.6.3 Resistive Random Access Memory (RRAM) ............................................................. 1.6.4 Phase Change Random Access Memory (PCRAM) ................................................... 1.7 Conclusion ......................................................................................................................... Bibliography of chapter 1 ........................................................................................................ Figure 1 1 Figure 1. 1. Evolution and forecast of portable devices market (source: muniwireless.com and trak.in) Figure 1 1 Figure 1. 3. DRAM and Flash price outlook [Philip Wong'08]. Figure 1 . 4 . 14 Figure 1. 4. Mapping of typical applications into NVM space[Zajac '10]. "Bit count" is the amount of data that can be stored in a given block. 
Figure 1 1 Figure 1. 5. Left: Overview of the non volatile semiconductor memories; Right: Semiconductor memory classification by different performance criteria. Figure 1 1 Figure 1. 7. Architectures of NAND (left) and NOR (right) memory array (source: micron.com). Figure 1 1 Figure 1. 8. a) I-V trans-characteristics of a floating gate device for two different values of charge stored within the floating gate (Q=0 and Q≠0). b) Schematic cross section of a floating gate transistor. The model using thecapacitance between the floating gate and the other electrodes is described[Cappelletti '99]. Figure 1 1 Figure 1. 9. a) FN programming mechanism representation. b) Band diagram of a floating gate memory during FN programming operation. In this tunnel effect, the electrons flow from the conduction band of the silicon into the floating gate through the triangular energy barrier of the tunnel oxide (figure 1.9b). During the FN programming, the number of trapped electrons in the floating gate increases. As a Figure 1 . 10 . 110 Figure 1. 10. Channel Hot Electron (CHE) programming mechanism representation. Figure 1 . 11 . 111 Figure 1. 11. Flash floating gate schematics of erase mechanisms: a) Fowler-Nordheim, b) Hot Hole Injection (HHI), c) source erasing, d) mix source-gate erasing. Figure 1 . 1 Figure 1. 14. Subthreshold current of MOS transistor as a function of gate voltage with the channel length as parameter.The insert is the calculated boron profile below the silicon surface in the channel[Fichtner '80]. the programmed cells can lose part of their charge due to FN drain stress on tunnel oxide causing hot hole injection (see the cell A in figure 1.15). The second case, represented in figure 1.15, cell B concerns a gate stress that can be induced on programmed cells (charge lost due to the stress through the ONO) or on erased cells (charge trapping due to the stress through the tunnel oxide).  Read disturb. In this case the selected cell can suffer from parasitic programming at low gate voltage; furthermore the unselected cells are gate stressed too (figure 1.15 cell C). Figure 1 . 1 Figure 1. 15.Programming disturb (left) and read disturb (right) condition in NOR Flash memory array. Figure 1 . 1 Figure 1. 17. Locations of parasitic charge in a NAND cell (left). Number of electrons required in each locationto shift the cell VTH by 100mV (right)[Prall '10].Random Doping Fluctuation. The threshold voltage shift due to random variations in the quantity and position of doping atoms is an increasingly problem as device dimensions shrink. In figure1.18 the mean and 3σ for the number of doping atoms are shown as a function of feature size. As device size scales down, the total number of doping atoms in the channel decreases, resulting in a larger variation of doping numbers, and significantly impacting threshold voltage. It has been documented[Frank '99] that at 25nm node, the Vt can be expected to vary of about 30% purely due to the random doping fluctuation. Figure 1 . 1 Figure 1. 18. Number of Boron atoms per cell (mean: square, -3σ: diamond, +3σ: circle vs. feature size). The triangle shows the ±3σ percentage divided by the mean [Prall '10]. we must avoid the creation of defects during the programming operations that can induce the SILC and degrade the retention and cycling performance. This technological challenge can be solved by engineering the tunnel barrier. As shown in figure 1.19 crested barriers can provide both sufficient programming and retention. 
Several crested barriers have been tested: the most common one consists in an ONO layer [Hang-Ting '05], but other combinations have also been experimented (SiO2/Al2O3/SiO2 [Blomme '09], SiO2/AlN [Molas '10]). Figure 1 . 1 Figure 1. 19. Principle of operation of crested barrier[Buckley '06]. Figure 1 . 1 Figure1. 20. Relationship between the dielectric constant and band gap[Robertson '05]. Figure 1 . 1 Figure 1. 21. (a) Schematic explaining electron back tunneling phenomena. (b) Erase characteristics of SANOS device with n+ poly-Si gate and (c) TaN/n+ poly-Si gate.1.3.5.4 Trapping layerIn figure1.22a we show the schematics of continuous floating gate cell and discrete charge trapping layer. In the first case the trapped charge is free to move along the polysilicon floating gate. This makes the device very sensitive to SILC. A discrete charge trapping layer is the solution envisaged to avoid the charge loss if an electric path is generated in tunnel Figure 1 . 1 Figure 1. 22. Schematic diagrams representing (SILC) phenomena for (a) continuous floating gate cell (b) discrete charge trapping layer. STMicroelectronics, in collaboration with CEA-Leti, presented in 2003 a 1Mb nanocrystal CAST (Cell Array Structure Test, figure 1.24a) where the Si-nc were fabricated with a two step LPCVD process [De Salvo '03] [Gerardi '04]. This structure is programmed and erased by Fowler-Nordheim tunneling, and the write/erase characteristics are reported in figure 1.24b. Figure 1 . 1 Figure 1. 24. a) Schematic of a CAST structure. b) Program/erase characteristics in fully Fowler-Nordheim regime [De Salvo '03]. As a result STMicroelectronics presented in 2007 a 16Mb Flash NOR memory array divided into 32 sectors of 512kb [Gerardi '07a] [Gerardi '07b]. The silicon nanocrystals were grown on a tunnel oxide, 5nm thick, with a diameter between 3nm and 6nm and a density of 5•10 11 nc/cm 2 . To complete the stack an ONO layer was used as a control oxide (EOT=12nm). In figure 1.25a the program/erase threshold voltage distributions of 16Mb memory array are plotted. In this case the cells have been programmed by channel hot electron and erased by Fowler-Nordheim reaching a 3V programming window in case of the average of distributions and 800mV for the worst case. Moreover Gerardi highlighted the problem of parasitic charge trapping in ONO layer during the cycling (figure 1.25b). Figure 1 . 1 Figure 1. 25. a) Program and erase threshold voltage distributions for one sector of 512 kb of nanocrystal memory cells. b) Evolution of the program/erase threshold voltages of a Si-nc memory cell showing that the program/erase levels are shifted due to electron trapping in the ONO [Gerardi '07b]. Figure 1 . 1 Figure 1. 26. a) Cross-section of the cylindrical-shaped structure and corresponding TEM image on the right. b) Endurance characteristic of a Si-nc cell by using CHE/FN and FN/FN program/erase operations. Figure 1 . 1 Figure 1. 27. a) SEM images of Si-NCs with same dot nucleation step and different dot growing times. b) written and erased Vth of bitcells with different Si-NCs [Jacob '08]. Figure 1 . 1 Figure 1. 28. a) Endurance data for a memory bitcell b) Threshold voltage distributions of erased and written states of two different sectors, measured before and after 10k write/erase cycles. c) Data retention at 150°C on two different uncycled 512Kb sectors [Jacob '08]. .29 we report the endurance characteristics of HTO and ONO samples fabricated by Freescale. 
For the HTO sample the threshold voltages remain stable up to 1kcycles, and afterwards their increase is explained by the parasitic charge trapping in the oxide. In the case of ONO sample, the electrons trapping in the silicon nitride layer starts immediately with the first program/erase cycles figure 1.29. Figure 1 . 1 Figure 1. 29. Endurance characteristics of silicon nanocrystal cells integrating a) HTO [Steimle '04] and b)ONO[Muralidhar '04] control dielectric. Figure 1 . 1 Figure 1. 31. a) HCI program and FN erase speed (with positive gate voltage) for devices with 4.5nm bottom oxide and 12nm top oxide, with different nanocrystal depositions [Rao '05]. b) 200ºC bake Vt shift for sampleswith different nanocrystals size[Gasquet '06]. Figure 1 . 1 Figure 1. 32. Schematic of Split Gate with memory first Left) or Access first (Right) configuration[Masoero '12a]. Figure 1 . 1 Figure 1. 33. a) Erase and program Vt distributions of cycles up to 300K at 25°C. b) Bake retention characteristics at 150C with fresh, 10K and 100K cycled parts of 125°C cycling temperature [Sung-Taeg '12]. Figure 1 . 1 Figure 1. 34. Automotive microcontroller and smartcard market (source: iSupply Q1 2012 update). Figure 1 . 1 Figure 1. 35. Mainstream Flash integration concept[Strenz '11]. Figure 1 . 1 Figure 1. 36. An overview on 3D array integration of charge trapped Flash NAND: a) [Jiyoung '09], b)[Tae-Su '09], c) [SungJin '10], d) [Eun-Seok '11]. Figure 2 . 2 Figure 2. 1. Manual test bench. Figure 2 . 2 2 shows the automatic probe bench system. The test program is developed in HP BASIC language and controlled by SIAM (Automatic Identification System of Model Parameters). It is a dedicated software for parametric tests. It was possible to load in the automatic prober up to 25 wafers to obtain statistical results. With an accurate prober calibration the station is able to probe the wafers by applying the same test program. The bench was equipped with a HP4142 electrical parameter analyzer, a HP4284 LCR precision meter and a HP8110 pulse generator. Automatic Prober • Wafer handling • Wafer alignment  Matrix test head, probe card • Connect the instruments to the samples  Tester • LCR meter (HP4284) • Pulse Generator (HP8110) • Parameter analyzer (HP4142)  Computer (SIAM system) To drive the prober and the tester Figure 2 . 2 . 22 Figure 2. 2. Automatic test bench. Figures 2.3(a) and 2.3(b) showthe signals used during the channel hot electron programming kinetic, and the Fowler-Nordheim erase kinetic, in that order. Figure 2 . 2 Figure 2. 3. Signals applied during a) the channel hot electron programming operation and b) Fowler-Nordheim erase operation. Figure 2 . 4 . 24 Figure 2. 4. Silicon nanocrystal cells used to evaluate the impact of nanocrystal size. (a) and (c) are the schematics of cells that integrated average 6nm and 9nm nanocrystal diameter. (b) and (d) are the corresponding 40° tilted CDSEM pictures used to measure the nanocrystal size, density. Figure 2 . 2 Figure 2. 5. a) Statistical results of program/erase threshold voltage measured for samples with different silicon nanocrystal sizes (Φ=6nm and Φ=9nm). b) Programming window as a function of covered area. Figure 2 . 2 Figure 2. 6. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with Φ=6nm and Φ=9nm. The program/erase pulses are described in section 2.2. instead of the ONO layer. One alternative solution is represented by the hybrid Si-nc cell. 
It is demonstrated that the capping layer on Si-ncs increases the number of trapping sites. Moreover, it protects the nanocrystals from oxidation during the interpoly dielectric deposition [Steimle '03] [Colonna '08] [Chen '09]. In this paragraph we analyze the impact of the silicon nitride (Si3N4) capping layer on the memory cell programming window. The studied samples are shown in figure 2.7.
Figure 2.7. Silicon nanocrystal cells used to evaluate the impact of the Si3N4 capping layer. a) and c) are the schematics of cells integrating nanocrystals of 9nm average diameter with and without the Si3N4 capping layer. b) and d) are the corresponding 40°-tilted CDSEM pictures used to measure the nanocrystal size and density.
Figure 2.8. Statistical results of program/erase threshold voltage measured for samples with and without the Si3N4 capping layer on the silicon nanocrystals.
The results plotted in figure 2.8 show that the programming window is increased by 1V when the nanocrystals are capped by the Si3N4 layer, while the threshold voltage dispersion remains unchanged, thanks to the increased number of trapping sites [Chen '09]. Nevertheless, the expected program/erase levels are still not reached. The program/erase kinetics, shown in figure 2.9, are obtained using the ramps previously described.
Figure 2.9. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with and without the Si3N4 capping layer. The program/erase pulses are described in section 2.2.
Figure 2.10. Schematics of silicon nanocrystal cells with different channel doping doses (CDD).
Figure 2.11. a) Statistical results of program/erase threshold voltage measured for samples with different channel doping doses (CDD). b) Linear dependence of the programming window as a function of CDD.
The corresponding kinetic characteristics are reported in figure 2.12.
Figure 2.12. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with different channel doping doses (CDD). The program/erase ramps are described in section 2.2.
Figure 2.13. a) Cell schematics of Si-nc+SiN devices with tunox=3.7nm, tunox=4.2nm and tunox=5.2nm. b) Measured and calculated EOT of the memory stack. c) TEM analysis used to measure the tunnel oxide thicknesses.
Figure 2.14. Statistical results of program/erase threshold voltage measured for samples with different tunnel oxide thicknesses: 3.7nm, 4.2nm and 5.2nm.
Figure 2.15 shows the results of the CHE program and FN erase kinetic characteristics, using the pulses described in section 2.2.
Figure 2.15. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with different tunnel oxide thicknesses (tunox). The program/erase pulses are described in section 2.2.
We also performed Fowler-Nordheim characterizations using box pulses of variable duration and Vg=±18V (figure 2.16), in order to complete the evaluation of the programming window dependence on tunnel oxide thickness. It is worth noting that 100ms are necessary to obtain a 4V programming window by writing with a gate voltage of 18V for the sample with 3.7nm tunnel oxide thickness. The impact of the tunnel oxide thickness on the erase operation is stronger, because a ∆Vt=6V is reached in 100ms for the same sample.
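This strong sensitivity of the Fowler-Nordheim operation to the tunnel oxide thickness is consistent with the exponential field dependence of the tunneling current. The expression below is the standard textbook Fowler-Nordheim form, recalled here only for reference (it is not reproduced from this document); A and B are constants set by the barrier height and by the electron effective mass in the oxide.

```latex
% J_FN : tunnel current density, E_ox : oxide electric field,
% V_ox : voltage dropped across the tunnel oxide, t_ox : tunnel oxide thickness.
J_{FN} = A\,E_{ox}^{2}\,\exp\!\left(-\frac{B}{E_{ox}}\right),
\qquad E_{ox}\simeq \frac{V_{ox}}{t_{ox}}
```

A thinner oxide raises E_ox for the same applied bias, which is why the erase time drops so quickly between the 5.2nm and 3.7nm samples.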
Figure 2.16. a) Program and b) erase Fowler-Nordheim characteristics of the Si-nc cell using three different tunnel oxide thicknesses (tunox): 3.7nm, 4.2nm and 5.2nm.
Figure 2.18. Programming window as a function of covered area; the silicon nanocrystal (Si-nc) and hybrid silicon nanocrystal (Si-nc+SiN) cells are compared.
Figure 2.19. Schematic of the optimized silicon nanocrystal cell; nanocrystals with two different sizes are implemented: a) Φ=9nm, b) Φ=12nm.
Figure 2.20. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with the optimized memory stack and two different nanocrystal sizes: Φ=9nm and Φ=12nm. The program/erase pulses are described in section 2.2.
Figure 2.21. Channel hot electron programming kinetic characteristics measured for the optimized silicon nanocrystal memory cell and the Flash floating gate.
Moreover, in figure 2.22 the erase kinetic characteristics are plotted, using the ramped gate voltage. The erase performance is improved with respect to the floating gate memory cell thanks to the thinner tunnel oxide and the increased coupling factor.
Figure 2.22. Fowler-Nordheim erase kinetic characteristics measured for the optimized silicon nanocrystal memory cell and the Flash floating gate.
[Chung '07] [Padovani '10] [Molas]. The hybrid silicon nanocrystal cell has thus demonstrated higher operation speed than a plain SONOS memory, while maintaining better retention characteristics than a pure Si nanocrystal memory [Steimle '03] [Chen '09] [Hung-Bin '12]. We report in figure 3.1 our results of data retention at 150°C and 250°C for the silicon nanocrystal cell with and without the 2nm Si3N4 capping layer; the tunnel oxide thickness is 5.2nm (see the device description in figure 2.7).
Figure 3.1. Data retention of the silicon nanocrystal cell with (circles) and without (diamonds) the Si3N4 capping layer at 150°C and 250°C.
Figure 3.2. Band diagrams of the hybrid silicon nanocrystal cell: a) the silicon nanocrystals are embedded in the Si3N4 trapping layer, b) the silicon nanocrystals are grown on the SiO2 tunnel oxide and capped by Si3N4.
Figure 3.3. Data retention of the silicon nanocrystal cell with different channel doping doses at 150°C. The tunnel oxide thickness is 5.2nm and the Si-nc are capped by the Si3N4 layer (Φ=9nm+SiN=2nm).
Figure 3.4. Data retention at 27°C, 150°C and 250°C for the hybrid silicon nanocrystal cell (Si-nc+SiN), varying the tunnel oxide thickness: 3.7nm, 4.2nm, 5.2nm.
Figure 3.5. Arrhenius plot of the retention time (defined as the time needed to reach Vt=6V) for temperatures of 27°C, 150°C and 250°C.
Figure 3.6. Schematic of the signals used for endurance experiments.
Figure 3.7. Endurance characteristics of silicon nanocrystal cells comparing the samples with different nanocrystal sizes: Φ=6nm and Φ=9nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=90ms).
We then evaluated the effect of capping the nanocrystals by the silicon nitride (Si3N4) layer, keeping the program/erase conditions unchanged (figure 3.6). In figure 3.8 we report the results of the cycling experiments, comparing the Si-nc cell (Φ=9nm) with and without the silicon nitride capping layer.
Figure 3.8. Endurance characteristics of the silicon nanocrystal cell comparing the samples with and without the Si3N4 capping layer: Φ=9nm and Φ=9nm+SiN=2nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=90ms).
The programming windows at the beginning of the cycling experiments are consistent with the data reported in chapter 2. The Si3N4 capping layer improves the programming window, but the shift of the program/erase threshold voltages is increased too. This effect is due to parasitic charge trapping in the silicon nitride and at the tunox/SiN interface, which generates a 3.7V shift of Vte. The shift of the threshold voltages and the low erase efficiency cause the programming window to close after 30k cycles. In order to summarize the results, we report in table 3.2 the values of the programming window before and after cell cycling.
The endurance results obtained with three different channel doping doses (2.4•10^13 at/cm^2, 8.5•10^13 at/cm^2 and 11•10^13 at/cm^2) are shown and discussed below, using the same samples described in figure 2.7c (and figure 2.10). The cells were programmed by channel hot electron and erased by Fowler-Nordheim as shown in figure 3.6, but using te=10ms. The results of the experiments are plotted in figure 3.9.
Figure 3.9. Endurance characteristics of the silicon nanocrystal cell comparing the samples with different channel doping doses (CDD): 2.4•10^13 at/cm^2, 8.5•10^13 at/cm^2 and 11•10^13 at/cm^2. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=10ms).
The impact of the tunnel oxide thickness was then studied on the samples with the Si3N4 capping layer (Φ=9nm+SiN=2nm). The program/erase conditions are the same as in the previous experiments where the CDD was varied (figure 3.6 with te=10ms). In figure 3.10 we report the results of the endurance experiments.
Figure 3.10. Endurance characteristics of the silicon nanocrystal cell comparing the samples with different tunnel oxide thicknesses (tunox): 3.7nm, 4.2nm and 5.2nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=10ms).
The optimized cell is the one described in chapter 2 (figure 2.19): tunox=4.2nm, Φ=9nm or 12nm, and the EOT of the ONO layer is 10.5nm. In figure 3.11 we plot the data retention results at 27°C, 150°C and 250°C of the Si-nc cell with smaller nanocrystals (Φ=9nm). The charge loss is accelerated by the temperature, but the specification at 10 years is reached up to 150°C, despite the thinner tunnel oxide with respect to the samples with the Si3N4 capping layer of figure 3.4.
Figure 3.11. Data retention at 27°C, 150°C and 250°C for the optimized silicon nanocrystal cell (tunox=4.2nm).
Figure 3.12. Data retention characteristics of the hybrid silicon nanocrystal cell and the optimized Si-nc cell, integrating the same tunnel oxide thickness (4.2nm).
In the optimized memory stack, silicon nanocrystals have been embedded with different sizes and densities: Φ=9nm with a density of 7.3•10^11 nc/cm^2 and Φ=12nm with a density of 6.7•10^11 nc/cm^2 (figure 2.18). In figure 3.13 we compare the shift of the programmed and erased threshold voltages versus time at 150°C of the optimized cell integrating the different silicon nanocrystal types. The impact of the silicon nanocrystal size on data retention is limited, and the programming window closure due to the charge loss is unchanged.
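The covered-area values used in this work follow directly from the nanocrystal density and mean diameter quoted above; the short sketch below recomputes them under the simple assumption that the nanocrystals can be treated as non-overlapping disks (the function name and units handling are ours, for illustration only).

```python
import math

def covered_area_fraction(density_per_cm2, diameter_nm):
    """Fraction of the channel area covered by nanocrystals modeled as non-overlapping disks."""
    radius_cm = (diameter_nm * 1e-7) / 2.0   # 1 nm = 1e-7 cm
    return density_per_cm2 * math.pi * radius_cm ** 2

# Densities and diameters quoted above for the optimized stack
for diameter_nm, density in [(9, 7.3e11), (12, 6.7e11)]:
    frac = covered_area_fraction(density, diameter_nm)
    print(f"Phi = {diameter_nm} nm, {density:.1e} nc/cm^2 -> coverage ~ {frac:.0%}")
# Gives roughly 46% and 76%, the covered-area values reported for these two recipes.
```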
To complete the study on data retention of the optimized Si-nc memory cell, we plot in figure 3.14 the results before and after cycling, for the first time up to 1M cycles, concerning the cell with bigger silicon nanocrystals (Φ=12nm). As published in [Monzio Compagnoni '04], quite unexpectedly the stressed cell displays the same Vt drift (i.e. stronger data retention) with respect to the virgin sample. The smaller leakage current is due to negative charge trapping in the tunnel oxide or electron trapping at deep-trap states at the nanocrystal surface [Monzio Compagnoni '03].
Figure 3.13. Data retention characteristics of the programmed and erased states at 150°C. Silicon nanocrystals with different sizes (Φ=9nm and Φ=12nm) are integrated in the optimized memory cell stack.
Figure 3.14. Data retention characteristics of programmed Si-nc cells at 150°C. Stressed and virgin samples are compared.
Figure 3.15. Endurance characteristics of the silicon nanocrystal memory comparing the hybrid Si-nc cell and the optimized Si-nc cell (Φ=9nm). The cell schematics are also shown.
Figure 3.16. Endurance characteristics of the optimized silicon nanocrystal memory comparing two different nanocrystal sizes: Φ=9nm and Φ=12nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms). The cell schematics are also shown.
Figure 3.17. Endurance characteristics of the optimized silicon nanocrystal memory (Φ=12nm) at different temperatures: T=-40°C, T=27°C and T=150°C. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms).
Figure 3.18. Data retention of the optimized silicon nanocrystal (Si-nc opt) and floating gate (F.G.) cells at 250°C.
Figure 3.19. Endurance characteristics of the optimized silicon nanocrystal memory (Si-nc, Φ=9nm) compared with the Flash floating gate (F.G.). The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms).
To measure the drain current during programming, an I/V converter (shunt resistance + amplifier) is used; two different configurations are shown in figure 4.1.
Figure 4.1. a) Basic experimental setup to measure the programming drain current [Esseni '99]. b) Complex experimental setup used for ramped-gate programming experiments to measure the drain current absorption [Esseni '00b].
The setup shown in figure 4.1a introduces some errors in the measured results, due to the variable shunt resistance, the I/V conversion and the coupling between the I/V converter and the scope. In order to measure the current absorption using ramped-gate programming the setup was improved (figure 4.1b), but the complexity of the system limits the current measurement sensitivity and the writing pulse duration. Today the memories for embedded NOR architectures consume currents of the order of 50 microamperes for short time periods (several microseconds). Moreover, A. Maure in his PhD thesis evaluated the error due to the I/V conversion (figure 4.2), showing its relevance when the programming time decreases [Maure '09].
Figure 4.2. Error evaluation during a fast current measurement performed by applying a ramped voltage to a resistance. a) A very fast ramp induces a high current; the parasitic capacitive effect is important. b) A slow ramp induces a smaller current, decreasing the measurement error.
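The origin of this error can be illustrated with a back-of-the-envelope estimate: when a ramped voltage is applied, any parasitic capacitance at the converter input adds a spurious term C·dV/dt to the resistive current being measured. The numbers below are purely illustrative assumptions (they are not taken from [Maure '09] nor from this document) and only show how the relative error grows with the ramp speed.

```python
# Illustrative values only: 100 kOhm resistive path, 5 pF of parasitic capacitance.
R = 1e5        # ohm
C_PAR = 5e-12  # F

def ramp_measurement_error(ramp_V_per_us, v_level=1.0):
    """Resistive current vs. the spurious capacitive current I = C * dV/dt."""
    i_res = v_level / R
    i_cap = C_PAR * ramp_V_per_us * 1e6   # convert V/us to V/s
    return i_res, i_cap

for ramp in (0.1, 1.0, 10.0):  # V/us
    i_res, i_cap = ramp_measurement_error(ramp)
    err = i_cap / (i_res + i_cap)
    print(f"ramp {ramp:4.1f} V/us: I_res = {i_res*1e6:.1f} uA, I_cap = {i_cap*1e6:.2f} uA, error ~ {err:.0%}")
```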
In the notation used for the floating gate model:
- αG, αD: the coupling factors relative to the gate and drain terminals;
- VG and VD: control gate and drain voltages during the programming operation;
- Vttreq: threshold voltage measured for the equivalent transistor (dummy cell);
- capacitances due to the coupling between the floating gate and the drain/source contacts;
- CX: parasitic capacitance due to the coupling between the selected floating gate and the neighbor cells in the same word line;
- CY: parasitic capacitance due to the coupling between the selected floating gate and the neighbor cells in the same bit line.
Figure 4.3. Capacitive model used to calculate the Flash floating gate cell coupling factor.
Figure 4.4. Floating gate coupling factor (αG) measured with the subthreshold slope method, with the error bar (36 samples), and simulated with the capacitive model.
Another important parameter to measure, in order to calculate the floating gate potential evolution during the programming operation, is the dummy cell threshold voltage. The results are shown in figure 4.5. The average value of Vttreq is 2.96V with a dispersion of 0.6V across the wafer.
Figure 4.5. Id-Vg characteristics of 36 dummy cells across the wafer. The average value of Vttreq is 2.96V using a read current of 10µA.
In formula (4.4), Vt(n) and Vt(n+1) are the threshold voltages of the floating gate cell after the n-th and the (n+1)-th pulse respectively. The calculated overdrive voltage is reported in figure 4.7.
Figure 4.6. Channel hot electron programming kinetic of the floating gate cell. The control gate voltage (Vg) is a 1.5V/µs ramp, Vd=4.2V. Threshold voltage (Vt) read conditions: Id=10µA, Vd=0.7V.
Figure 4.7. Overdrive voltage during the channel hot electron programming operation, calculated using formula (4.4).
Figure 4.8. Floating gate current consumption extrapolation, obtained by measuring the dummy cell drain current.
Figure 4.9. Experimental setup used to perform dynamic drain current (Id) measurements during a channel hot electron programming operation. The device under test (DUT) is a floating gate cell addressed in a 256kbit memory array.
In figure 4.10 an example of the dynamic drain current of the floating gate cell is reported, measured using a 1.5V/µs ramp on the gate terminal and a drain voltage of 4.2V. The static measured value corresponds to the value extrapolated using the indirect technique. With the developed setup we are able to generate arbitrary pulses on the gate and drain terminals, in order to obtain specific ramps or boxes. The applied signals are measured together with the dynamic current.
Figure 4.10. a) Gate and drain voltages (Vg, Vd) generated with the setup of figure 4.9. b) Dynamic drain current measured during the channel hot electron programming operation. The static measurement is the same as reported in figure 4.8.
Figure 4.11. (a) Gate box pulses applied during the channel hot electron operation; (b) drain current measured with the dynamic method. The drain voltage is constant (Vd=4.2V).
In figure 4.12 we report the trends of the cell performances for different box pulse durations. The PW and the Ec increase with the box duration; in region I a low energy consumption level can be reached while maintaining a satisfactory programming window. The presence of the Id peak can be a problem for the logic circuits around the memory array. Furthermore, the designed charge pump layout area, used to supply the drain terminal, depends on the value of the Id current [Munteanu '02].
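For readers who want to reproduce the overdrive extrapolation of figures 4.6-4.8, a minimal sketch of the usual floating-gate capacitive-divider relations is given below. The exact expression (4.4) used in this work is not reproduced here, and apart from Vttreq (about 2.96V, quoted above) all numerical values are illustrative assumptions rather than measured parameters.

```python
# Minimal sketch of the standard floating-gate relations (illustrative values, except Vt_treq).
ALPHA_G = 0.65    # gate coupling factor (assumed for illustration)
ALPHA_D = 0.05    # drain coupling factor (assumed for illustration)
VT_TREQ = 2.96    # dummy-cell (equivalent transistor) threshold voltage [V], from the text

def floating_gate_overdrive(vg, vd, delta_vt):
    """Overdrive seen by the equivalent transistor: the floating-gate potential
    V_FG = alpha_G*(Vg - delta_Vt) + alpha_D*Vd minus the dummy-cell threshold,
    where delta_Vt is the threshold shift produced by the charge already stored."""
    v_fg = ALPHA_G * (vg - delta_vt) + ALPHA_D * vd
    return v_fg - VT_TREQ

print(floating_gate_overdrive(vg=9.0, vd=4.2, delta_vt=0.0))  # start of the pulse
print(floating_gate_overdrive(vg=9.0, vd=4.2, delta_vt=3.0))  # after ~3 V of programming
```

The overdrive obtained this way can then be mapped onto the dummy-cell Id-Vg curve to extrapolate the cell consumption, which is the principle behind figure 4.8.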
Figure 4.12. Programming window and energy consumption versus the box duration. In region I a low energy level is reached while maintaining the programming window. The Y scale is normalized with the same factor as figure 4.14.
Figure 4.13. (a) Gate ramp pulses applied during the channel hot electron programming operation; (b) drain current measured with the dynamic method. The drain voltage is constant (Vd=4.2V).
Figure 4.14. Programming window and energy consumption trends versus the ramp speed. Three regions are highlighted: region II, higher energy; region III, best tradeoff; region IV, higher current peak. The Y scale is normalized with the same factor as figure 4.12.
Three particular regions are highlighted in figure 4.14. Region II is where the Id is lowest but the energy consumption is highest. On the contrary, in region IV the energy reaches the lowest value, but a higher drain current peak is present. The best performance in terms of consumption is obtained in region III, but the programming window does not reach the minimum level. In order to resolve this conundrum, we decided to merge the two pulse types.
Figure 4.15. (a) Gate pulses with different plateau durations, applied during the channel hot electron operation; (b) drain current measured with the dynamic method; the current peak is smoothed by decreasing the plateau duration, and thus the ramp speed.
Figure 4.16. (a) Programming window, (b) energy consumption and (c) drain current peak of the floating gate cell, measured using the pulses shown in figure 4.15a.
Figure 4.17. Id currents measured with the dynamic method: (a) no-injection zone, (b) low-injection zone, (c) high-injection zone. (d) Maximum of the drain current versus Vd. The Y scales are normalized with the same factor. (e) Optimized gate pulse: ramp (1.5V/µs) + plateau (1µs).
After each pulse we measured the threshold voltage in order to evaluate the cell performance, as shown in figure 4.18, where the PW and the Ec are plotted for different Vd. Clearly, by lowering the drain voltage to 3.8V with respect to Vd=5V, the energy consumption is minimized, with a gain of around 55% against only a 15% loss on the programming window.
Figure 4.18. Programming window and energy consumption versus drain voltage; the point of minimum energy is highlighted.
Figure 4.19. Programming window and energy consumption versus drain voltage; the point of minimum energy is highlighted. The limit for the maximum value of Vb depends on the breakdown voltage of the drain/bulk junction.
Finally, several drain and bulk biases have been analyzed, optimizing the programming signals in order to reduce the current consumption while keeping satisfactory performance. By decreasing the drain voltage from 4.2V to 3.8V, a reduction of 15% in terms of energy consumption can be achieved. Moreover, using the CHISEL effect with a reverse body bias, a further drain current reduction is possible.
The cells were programmed with the pulse of figure 4.17e, with Vd=3.8V and Vb=-0.5V. The drain-bulk bias was chosen in order to minimize the stress of the junction and to decrease the bitline leakage of the memory array (see next section). For each measurement we started from the same erased threshold voltage (Vte) in order to evaluate and compare the effect of the CDD. Three different values of doping dose are used; the results are shown in figure 4.20. We can notice that an increase of the CDD leads to an improvement of both the programming window and the energy consumption.
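Throughout these comparisons the consumed energy Ec is obtained by integrating the product of the drain voltage and the dynamically measured drain current over the programming pulse. The snippet below is a minimal sketch of that integration; the waveform is synthetic and only indicates the order of magnitude, it is not measured data from this work.

```python
import numpy as np

def programming_energy(t, i_drain, v_drain):
    """Ec = integral of Vd(t)*Id(t) dt over the pulse (trapezoidal rule)."""
    p = v_drain * i_drain
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

# Synthetic example: 5 us pulse at Vd = 4.2 V, drain current decaying from ~80 uA to ~40 uA.
t = np.linspace(0.0, 5e-6, 501)
i_d = 40e-6 + 40e-6 * np.exp(-t / 1.5e-6)
v_d = np.full_like(t, 4.2)
print(f"Ec ~ {programming_energy(t, i_d, v_d) * 1e9:.2f} nJ")  # on the order of 1 nJ
```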
When the channel doping dose is increased, the drain/bulk junction doping profile becomes more abrupt and the cell threshold voltage (Vth) increases. The electrons thus reach a more energetic state and the probability of injecting electrons into the floating gate increases, leading to better programming efficiency. Moreover, the drain current absorption decreases. Since Vte increases along with the CDD, it is necessary to adjust the erase pulse settings by increasing the erase time.
After this study on the single cell, we analyze in the next section the current consumption due to the unselected memory cells connected to the same bit line as the programmed cell. Finally, the global consumption of an entire bitline of 512 cells is evaluated.
Figure 4.20. Trends of programming window and energy consumption versus channel doping dose variation.
In our case, we decided to use the LDD implants for the floating gate memory cell, in order to decrease the BLL caused by the drain bias of the unselected cells and to find a tradeoff between programming efficiency degradation and BLL improvement. As illustrated in figure 4.21, the BLL measurement corresponds to the sum of the leakage currents of all unselected cells on the same bitline (511 cells in this case) while a cell is being programmed. In order to measure it, we varied the gate potential of the dummy cell between -3V (to emulate the floating gate potential of the programmed cell) and 1V (erased cell). The biasing conditions are chosen with regard to the results of section 3.1.2: Vd=3.8V, Vb=-0.5V and Vs=0V.
Figure 4.21. Bit-line leakage measurement principle.
Figure 4.22. (a) Effect of the LDD implantation energy on the main leakage contributions of an unselected cell: junction current (Ijc) and punchthrough current (Ipt), for different gate voltages. (b) Sum of Ijc and Ipt versus Vg for different LDD implantation energies. (c) Percentages of the global bitline current consumption (selected cell + 511 unselected cells).
Figure 4.23. TCAD simulations: (a) LDD cartography and (b) doping profiles at the Si/SiO2 interface, for different implantation energies.
Figure 4.24a shows the distribution of the absolute value of the electric field (E) obtained by TCAD simulations. The electric field E is highest at the gate edge, but its value at this point becomes smaller as the LDD implantation energy increases. The lateral (Ex) and vertical (Ey) components of the electric field at the Si/SiO2 interface are shown in figures 4.24b and c.
Figure 4.24. TCAD simulations: (a) global electric field distribution; (b) lateral field Ex and (c) vertical field Ey along the channel for different LDD implantation energies.
Figure 4.25. Programming window and bitline leakage versus (a) LDD implantation energy (constant channel doping dose), and (b) channel doping dose (standard LDD implantation energy).
Figure 4.26. (a) Gate pulses with different plateau durations, applied during the CHE operation; (b) drain current of the Si-nc+SiN cell, measured with the dynamic method; the current follows the gate potential.
In the Si-nc+SiN cell the injected charge stays localized near the drain, effectively reducing the channel length. Thus the stored charges remain trapped in the Si-ncs and the SiN over the drain and SCR zone. While the gate voltage changes, the hybrid Si-ncs over the channel zone are not charged, so their potential remains constant.
Consequently, the substrate surface potential depends on the gate voltage only, which dynamically drives the drain current during the channel hot electron operation [Della Marca '11a].
Figure 4.27. Scheme of the silicon nanocrystal cell behavior during the channel hot electron programming operation.
Figure 4.28. (a) Programming window and (b) energy consumption comparison of the Si-nc+SiN and floating gate cells, measured using the pulses shown in figure 4.26a. The F.G. data are also shown in figure 4.16.
We then investigated whether the programming time can be shortened in order to reduce the energy consumption. To do this we repeated the experiment on another Si-nc+SiN cell using box pulses, to evaluate the dependence of the programming window and energy on the programming time. The results are shown in figure 4.29; the maximum value of Id is slightly different with respect to figure 4.26, which is due to wafer dispersion; in the two figures the same y scale is used. To program the cell we used Vg=9V and Vd=4.2V. The drain current follows the gate voltage during programming; thus the calculated energy is a linear function of the pulse duration. After each pulse the programming window was also measured, and the results are plotted in figure 4.30.
Figure 4.29. (a) Gate box pulses with different durations, applied during the channel hot electron operation; (b) drain current of the Si-nc+SiN cell, measured with the dynamic method; the current follows the gate potential.
Figure 4.30. (a) Programming window and (b) energy consumption comparison of the Si-nc+SiN and floating gate cells, measured using the box pulses shown in figure 4.29a. The Y scale is the same as in figure 4.28.
Figure 4.31. (a) Gate ramp pulses with different durations, applied during the channel hot electron operation; (b) drain current of the Si-nc+SiN cell, measured with the dynamic method; the current follows the gate potential.
Figure 4.32. (a) Programming window and (b) energy consumption of the Si-nc+SiN cell, measured using the box and ramped pulses shown in figures 4.29a and 4.31a respectively. The Y scale is the same as in figure 4.30.
The TCAD process simulations use the parameter set provided by Advanced Calibration, and the electrical simulations are performed with Sentaurus Device 2010.3. In figure 4.33 the simulated structures are shown. In the case of the floating gate cell, the 2D approach implies considering the device width as a scaling parameter for the currents and the wings area as an additional coupling capacitance between the control gate and the floating gate. The programming operation increases the electrostatic potential over the whole floating gate area (figure 4.33a). The Si-nc+SiN cell simulations only need to scale the currents and the nanocrystal size and density; these two parameters are based on process results. After the programming operation the charged nanocrystals are very close to the drain and the SCR (figure 4.33b).
Figure 4.33. Designed structures used in the TCAD simulations after the programming operation: a) floating gate cell, and b) Si-nc+SiN cell.
Figure 4.34. Drain current in the floating gate (F.G.) and hybrid silicon nanocrystal (Si-nc+SiN) cells using a) and b) a gate voltage box pulse or c) and d) a gate voltage ramp pulse; Vg=9V, Vd=4.2V, Vb=Vs=0V. Experimental data and simulations are compared; all the graphs have the same Y axis scale.
Figure 4.35. Dependence of the programming window and energy consumption on the box pulse duration for the hybrid silicon nanocrystal cell.
Experimental data and simulations are compared.
Figure 4.36. Results of the programming pulse optimization in terms of energy consumption.
Figure 4.37. Programming window and energy consumption as a function of gate (a-b) and drain (c-d) voltages. Ramp and box gate pulses are used.
Figure 4.38. Energy consumption as a function of the programming window for different gate (a) and drain (b) voltages. The Y scale is comparable to that in figure 4.37.
The points of figure 4.38, obtained with a programming time of 1µs, are the same as those shown in figure 4.37. In this way it is possible to compare the two figures despite the arbitrary units. In figure 4.39 we report the programming efficiency in the case of the box pulse, which has shown the best programming window results, for different biasing conditions.
Figure 4.39. Efficiency (PW/Ec) vs gate (left) and drain (right) voltage, using tp=1µs.
Using the optimized programming scheme (box pulse, Vg=10V and Vd=4.2V), we plot the programming efficiency in figure 4.40 as a function of the programming time. The greatest efficiency is reached using the shorter programming times; as previously shown, the shorter box pulse represents the best compromise between a satisfactory programming window and the energy consumption. The graph shows that silicon nanocrystal memories are suitable for fast programming operations, representing a satisfactory trade-off between programming window and energy consumption.
Figure 4.40. Efficiency as a function of programming time for the Si-nc cell, using box gate pulses (Vg=10V, Vd=4.2V).
Figure 4.41. Channel hot electron threshold voltage (Vt) kinetic characteristics using box and ramp pulses, for tunnel oxide thicknesses of 4.2nm and 5.2nm.
Figure 4.42. Energy consumption as a function of the programming window of cells with 4.2nm and 5.2nm tunnel oxide thicknesses using box pulses (Vg=10V, Vd=4.2V).
Figure 4.43. Efficiency as a function of programming time for the Si-nc cell, using box gate pulses, calculated for tunox=4.2nm and tunox=5.2nm (Vg=10V, Vd=4.2V).
Figure 4.44. (a) Gate box pulses with different durations, applied during the channel hot electron operation; (b) drain current of the optimized Si-nc cell, measured with the dynamic method. The Y scale is the same as in figure 4.29.
Figure 4.45. Dynamic drain current measured for the hybrid silicon nanocrystal cell (Si-nc+SiN), the optimized Si-nc cell and the floating gate (F.G.) cell, using a box programming pulse on the control gate.
Figure 4.45 shows that the Si-nc cell, where large nanocrystals (Φ=12nm) are embedded with a high density, has a response half-way between the Si-nc+SiN and floating gate cells. This means it is possible to control the dynamic current by modifying the nanocrystal size and density, and thus the covered area. Previously we explained the transistor-like behavior of the hybrid Si-nc+SiN cell, where the drain current follows the gate potential thanks to the charge localization close to the drain. Instead, in the case of the floating gate device, a current peak is detected when a box pulse is applied to the gate terminal. During hot carrier injection the charges flow through the silicon nanocrystals toward the source side, modifying the substrate surface potential and thus the vertical and horizontal electric fields. The charge diffusion can be due to single-electron interactions between neighboring silicon nanocrystals.
Figure 4.46.
Programming window and energy consumption as a function of gate (a-b) and drain (c-d) voltages of the optimized Si-nc and Si-nc+SiN cells. The Y axis scale is the same as in figure 4.37.
Figure 4.47. Programming efficiency (PW/Ec) vs gate (a) and drain (b) voltage, using tp=1µs, of the optimized Si-nc and Si-nc+SiN cells.
This is because a low Vg is sufficient to produce a high programming window close to saturation, while the drain current dependence on Vg increases the consumed energy. On the other hand, by varying the drain voltage (figure 4.47b) we note that the programming…
Figure 4.48. Energy consumption as a function of the programming window of the Si-nc+SiN and optimized Si-nc cells with 4.2nm tunnel oxide thickness, using box pulses (Vg=10V, Vd=4.2V). The XY axis scales are the same as in figure 4.42.
Figure 4.49. (a) Programming window and (b) energy consumption comparison of the optimized silicon nanocrystal (Si-nc opt), hybrid silicon nanocrystal (Si-nc+SiN) and floating gate (F.G.) cells, measured using box pulses.
Figure 4.50. (a) Programming window and (b) energy consumption comparison of the optimized silicon nanocrystal (Si-nc opt), hybrid silicon nanocrystal (Si-nc+SiN) and floating gate (F.G.) cells, measured using ramp pulses.
Figure 4.51. (a) Programming window and (b) energy consumption comparison of the optimized silicon nanocrystal (Si-nc opt), hybrid silicon nanocrystal (Si-nc+SiN) and floating gate (F.G.) cells using box and ramp pulses, for a programming time fixed to 2µs.
Figure 5.1. Schematic of the silicon nanocrystal cell when the charge diffusion mechanism is enabled.
New cell architecture (ATW-Flash)
Figure 5.2. Schematic of the Asymmetrical Tunnel Window Flash (ATW-Flash).
…tablets sold throughout the world (figure 6.1). All these applications demand ever higher performance: low energy consumption, short access times, low cost, etc. This is why the Flash memory business is gaining market share with respect to the other memory types. Even though the market is growing continuously, the price of memory devices is decreasing.
Figure 6.1. Evolution and forecast of the portable device market (source: muniwireless.com and trak.in).
Figure 6.2. Programming window as a function of covered area. The silicon nanocrystal (Si-nc) and hybrid (Si-nc+SiN) cells are compared.
The cell stack was completed with an ONO layer of 10.5nm equivalent thickness.
Figure 6.3. Schematic of the optimized silicon nanocrystal cell; nanocrystals with two different sizes are implemented: a) Φ=9nm, b) Φ=12nm.
To conclude this part, we compared the results obtained for the optimized Si-nc cell with the standard floating gate Flash. In figure 6.4 we show the programming kinetic characteristics of the two devices. For the optimized Si-nc cell the performance is the same as for the floating gate cell; the minimum 4V programming window is obtained using a 3.5µs channel hot electron programming operation.
Figure 6.4.
Channel hot electron programming kinetic characteristics (Vg_ramp=1.5V/µs, Vg=[3V; 9V], Vd=4.2V).
The erase kinetic characteristics are shown in figure 6.5. The erase performance is improved with respect to the floating gate memory cell thanks to a thinner tunnel oxide and a higher coupling factor.
Figure 6.5. Fowler-Nordheim erase kinetic characteristics (Vg_ramp=5kV/s; Vg=[-14V; -18V]).
The erase time needed to obtain the minimum 4V programming window is 0.2ms for the optimized Si-nc cell, a 60% gain with respect to the floating gate Flash cell. To conclude, all the trials performed by varying the different technological parameters allowed the programming window of the Si-nc cell to be optimized, with the objective of substituting the floating gate cell and thus reducing production costs. In the next paragraph the reliability results of the silicon nanocrystal and floating gate memories are compared.
Data retention was then studied as a function of the tunnel oxide thickness, and we extrapolated the activation energies for each sample. As in the case of the erase operation, the charge loss increases when the tunnel oxide thickness is reduced. We found that a 5.2nm tunnel oxide thickness is necessary to reach the data retention specification for temperatures up to 150°C. On the other hand, the tunnel oxide thickness has a considerable impact on the Fowler-Nordheim erase operation, so that a 3.5nm tunnel oxide must be used to obtain an adequate programming window after 100k cycles. For our study it was important to evaluate the behavior of the cell using a 4.2nm tunnel oxide in a memory architecture where the ONO layer is optimized and the Si3N4 layer is not deposited. To conclude this paragraph we compared the results of the optimized nanocrystal cell with the floating gate Flash cell. In figure 6.6 the data retention at 250°C is shown for both devices. To satisfy the data retention constraint the cell must maintain a threshold voltage level above 5.75V at 250°C for 168 hours. We can observe that the Si-nc cell is at the limit of this target; further efforts will be necessary to improve the performance and reach the results obtained with the floating gate cell. The main constraint is the fast initial charge loss due to electron trapping in the tunnel oxide, in the ONO layer and at the related interfaces.
Figure 6.6. Data retention of the optimized silicon nanocrystal (Si-nc opt) and floating gate (F.G.) cells at 250°C.
The endurance results were also compared, with unchanged program/erase conditions (programming: Vg=9V, Vd=4.2V, tp=1µs; erase: Vg=-18V, ramp=5kV/s+te=1ms). The experimental results for the optimized silicon nanocrystal cell are shown in figure 6.7.
As expected, the floating gate Flash cell presents a larger programming window at the beginning of cycling, thanks to its better coupling factor and better programming efficiency. Its larger threshold voltage degradation, however, causes a stronger programming window closure after 1 million cycles, while the endurance characteristic is more stable for the Si-nc cell.
Figure 6.7. Endurance characteristics of the optimized silicon nanocrystal (Si-nc opt) and floating gate (F.G.) cells.
In conclusion, we demonstrated for the first time, to our knowledge, the functioning of a silicon nanocrystal cell up to 1 million program/erase cycles. A 4V programming window is preserved over an extended temperature range [-40°C; 150°C] (figure 6.8). As a drawback, the main disadvantage of the nanocrystal cell is the significant charge loss at 250°C.
Figure 6.8. Endurance characteristics of the optimized silicon nanocrystal cell over an extended temperature range [-40°C; 150°C].
The last part of this work concerns the energy consumption of the floating gate and silicon nanocrystal Flash cells during a programming operation performed by channel hot electron injection. The evaluation of the current consumption of a floating gate Flash cell can be carried out with setups using a current/voltage converter or with an indirect technique. In this way it is not possible to understand the dynamic behavior and to measure the performance of a cell implemented in a NOR architecture for programming pulses of a few microseconds. Moreover, the indirect method of computing the consumed current is not applicable to silicon nanocrystal memories. In this context we developed a new experimental method that allows the current consumption to be measured dynamically during a programming operation performed by channel hot electron injection. This method made it possible to understand the dynamic behavior of both devices. The consumed energy was also evaluated using different biasing conditions. The objective was to characterize the impact of the different parameters on the consumption and to find the best compromise in order to improve the performance of the two memory cells under study. In addition, the consumption due to the leakage of the unselected cells in the memory array was measured to complete the study. We describe below the main results obtained for the silicon nanocrystal cell and the floating gate Flash cell.
6.7 Energy consumption optimization
The electrical characterizations performed on the floating gate cell and on the nanocrystal cell capped by the Si3N4 layer allowed the biases and the shape of the signals used during the programming operation to be optimized. For the floating gate Flash cell it was demonstrated that the best compromise between consumed energy and programming window is the use of a programming ramp followed by a plateau. This way of programming the cell reduces the drain current peak while maintaining an adequate programming window.
In the case of the silicon nanocrystal cell capped by the Si3N4 layer (Si-nc+SiN), we found that the best results are obtained when very short pulses are applied to the gate terminal. We report below the experimental results measured for a nanocrystal cell with a 4.2nm-thick tunnel oxide, nanocrystals with a 12nm average size and a thin ONO layer (10.5nm). Figure 6.9 shows the drain consumption results when box pulses of different durations are applied to the control gate. We note that the drain current does not follow the control gate potential; in this case it decreases during the programming time. This type of behavior is similar to that of a floating gate Flash cell, where a current peak is measured. We demonstrated that the difference in behavior between the floating gate cell and the Si-nc+SiN cell is explained by the localization of the trapped charge. For the optimized cell we can state that the localization effect due to the presence of the SiN layer is absent. Moreover, the large size of the nanocrystals produces a charge distribution toward the center of the channel and modifies the channel surface potential. In figure 6.10 we show the schematics of the various memory stacks and the corresponding drain current measurements in order to compare the behavior of the three cells: Si-nc, Si-nc+SiN and F.G.
Figure 6.9. a) Pulses of different durations applied to the control gate. b) Drain current consumption results.
Figure 6.10. Dynamic drain current measured for the silicon nanocrystal cells with and without the SiN layer, and for the floating gate cell.
Figure 6.10 shows that the optimized Si-nc cell has a response half-way between the Si-nc+SiN and floating gate cells. This means it is possible to control the dynamic current by varying the nanocrystal size and density, and thus the covered area. We explained that the Si-nc+SiN cell has a transistor-like behavior in which the drain current follows the control gate potential because of the charge localization next to the drain zone. On the contrary, for the floating gate device a current peak is detected when a box pulse is applied to the control gate. During hot carrier injection the charges diffuse through the nanocrystals toward the source, modifying the substrate surface potential and therefore the electric fields. Taking into account the dynamic behavior of the optimized nanocrystal cell and the fact that box pulses are more efficient than programming ramps,…
Figure 6.11. a) Programming window and b) consumed energy using box programming pulses of different durations.
We studied the impact of different technological parameters: nanocrystal size and density, presence of the Si3N4 layer, channel doping dose and tunnel oxide thickness. The objective of the experiments was to understand the behavior of the cell in order to improve the coupling factor and to minimize parasitic charge trapping.
The reliability results of the cell showed a satisfactory data retention at 150°C and, for the first time, an endurance of up to 1 million cycles in the temperature range between -40°C and 150°C with a 4V programming window. Finally, we developed an innovative technique to measure the dynamic current during a programming operation performed by channel hot electron injection. This technique was applied for the first time to study floating gate and silicon nanocrystal Flash cells. We described the dynamic behavior and how to improve the energy consumption, reaching a consumption below 1nJ while keeping a 4V programming window. As perspectives, we propose to keep investigating the charge diffusion phenomenon in the nanocrystal trapping layer during the programming operation, with the objective of reducing the energy consumption. Alternatively, it is possible to integrate the nanocrystals in memory architectures different from the standard memory cell.
In table 1.1 we report the International Technology Roadmap for Semiconductors, which forecasts the future trends of semiconductor technology.
ITRS 2012 - Process Integration, Devices, and Structures | 2013 | 2016 | 2019 | 2022 | 2026
NOR flash technology node - F (nm) | 45 | 38 | 32 | 28 | 22?
Cell size - area factor in multiples of F^2 | 12 | 12 | 12-14 | 14-16 | 14-16
Physical gate length (nm) | 110 | 100 | 90 | 85 | 85
Interpoly dielectric thickness (nm) | 13-15 | 13-15 | 11-13 | 11-13 | 11-13
Table 1.1: Summary of the technological requirements for Flash NOR memories as stated in the ITRS 2012 roadmap [ITRS].
Table 2.1. Vertical electric fields calculated for samples with different tunnel oxide thicknesses (3.7nm, 4.2nm and 5.2nm) during the programming operation.
3.1 Introduction
3.2 Data retention: impact of technological parameters
3.2.1 Effect of silicon nitride capping layer
3.2.2 Effect of channel doping dose
3.2.3 Effect of tunnel oxide thickness
3.3 Endurance: impact of technological parameters
3.3.1 Impact of silicon nanocrystal size
3.3.2 Impact of silicon nitride capping layer
3.3.3 Impact of channel doping dose
3.3.4 Impact of tunnel oxide thickness
3.4 Silicon nanocrystal cell optimization
3.4.1 Data retention optimization
3.4.2 Endurance optimization
3.5 Benchmarking with Flash floating gate
Bibliography of chapter 3
tunox (nm) \ Temperature (°C) | 27 | 150 | 250
5.2 | 8% | 15% | 27%
4.2 | 21% | 30% | 43%
3.7 | 31% | 40% | 47%
Table 3.1. Percentage of charge loss after 186h.
In table 3.2 we report the values of the programming window before and after cell cycling and the program/erase threshold voltage (Vtp and Vte) shifts. To conclude this section we can affirm that the Si3N4 capping layer enables, in this case, a 30% programming window gain to be achieved, which is maintained after 100k cycles, but the large quantity of parasitic charge accumulated during the experiments causes a large shift of the program/erase threshold voltages, limiting the cell functioning to 30k cycles.
Si-nc | Programming window @1cycle | Programming window @100kcycles | Vtp shift | Vte shift | Endurance limit
Φ=6nm | 1.1V | 0.4V | 1.6V | 2.4V | 10cycles
Φ=9nm | 1.4V | 0.7V | 1.1V | 2.4V | 10kcycles
Φ=9nm+SiN=2nm | 2.7V | 0.8V | 1.8V | 3.7V | 30kcycles
Table 3.2. Programming window before and after 100k program/erase cycles, and program/erase threshold voltage shifts; the studied samples have different silicon nanocrystal diameters: Φ=6nm, Φ=9nm, and the Si3N4 capping layer (Φ=9nm+SiN=2nm).
Table 3.3. Programming window before and after 100k program/erase cycles, and program/erase threshold voltage shifts; the studied samples have different channel doping doses: 2.4•10^13 at/cm^2, 8.5•10^13 at/cm^2 and 11•10^13 at/cm^2.
The studied samples are those described in chapter 2. The results of the programming windows before and after cell cycling, and the program/erase threshold voltage shifts, are summarized in table 3.4. Once again the program/erase threshold voltage shift demonstrates the parasitic charge trapping in the silicon nitride layer. This effect depends only slightly on the tunnel oxide thickness, because the charges are trapped during the CHE injection (independent of tunox). Concerning the erase operation, we notice the improvement gained using a thinner tunnel oxide (3.7nm), which enables the programmed and the erased states to be kept separate, while the working conditions are very close to the limit of good functioning for the sample with tunox=4.2nm. In conclusion, to improve the programming window and the cell endurance, the experiments suggest minimizing the tunnel oxide thickness, but this contradicts the considerations on data retention, where the cell performance is improved by using a thicker tunnel oxide. It is therefore important to find the best tradeoff between all the technological parameters to satisfy the data retention and endurance specifications.
tunox (nm) | Programming window @1cycle | Programming window @100k cycles | Vtp shift | Vte shift | Endurance limit
3.7 | 5.5V | 2.7V | 1.5V | 4.2V | >100kcycles
4.2 | 4.5V | 1.9V | 1.4V | 4.0V | >100kcycles
5.2 | 3.5V | 1.2V | 1.7V | 4.0V | 60kcycles
Table 3.4. Programming window before and after 100k program/erase cycles, and program/erase threshold voltage shifts; the studied samples have different tunnel oxide thicknesses: 3.7nm, 4.2nm and 5.2nm.
Cell | Programming window @1cycle | Programming window @1M cycles | Vtp shift | Vte shift | Endurance limit
Si-nc (Φ=12nm) | 5.6V | 4.0V | 0.6V | 2.2V | >1Mcycles
F.G. | 7.0V | 2.8V | -2.4V | 1.8V | >1Mcycles
Table 3.7. Programming window before and after 1M program/erase cycles, and program/erase threshold voltage shifts; the studied samples are the optimized silicon nanocrystal memory cell (Si-nc Φ=12nm) and the Flash floating gate (F.G.).
Cycling conditions: CHE programming (Vg=9V, Vd=4.2V, tp=1µs) and FN erase (Vg=-18V, ramp=5kV/s+te=1ms).
We repeated the endurance experiments on the Flash floating gate, varying the program/erase conditions in order to achieve the same initial threshold voltage levels (CHE programming: Vg=8V, Vd=3.7V, tp=1µs and FN erase: Vg=-19.5V, ramp=5kV/s+te=1ms). The comparison with the optimized Si-nc cell is shown in figure 3.20. Quite unexpectedly, the programming window of the Flash floating gate starts to degrade after 100 cycles, even though lower drain and gate voltages are used. This is not due to tunnel oxide degradation, but it can be due to the lower programming efficiency caused by the lower vertical and horizontal fields during the channel hot electron operation.
Figure 3.20. Endurance characteristics of the optimized silicon nanocrystal memory (Si-nc, Φ=12nm) programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms), compared with the Flash floating gate (F.G.) programmed by CHE (Vg=8V, Vd=3.7V, tp=1µs) and erased by FN (Vg=-19.5V, ramp=5kV/s+te=1ms).
In fact, after the cycling experiments, using Vg=9V and Vd=4.2V we verified that it is possible to reach the program threshold voltage previously achieved (figure 3.19). Also in this case the Si-nc cell presents a more stable endurance characteristic; the data concerning the programming window before and after cycling and the threshold voltage shifts are reported in table 3.8. Finally, we demonstrated for the first time to our knowledge the functioning of a silicon nanocrystal cell up to 1M program/erase cycles. A 4V programming window is preserved in a wide temperature range [-40°C; 150°C]. As a drawback, the silicon nanocrystal cell presents a higher charge loss than the floating gate at 250°C.
5µs box pulse reference case | PW (%) | Ec (%) | Id peak (%)
Ramp | -22 | +9 | -36
Optimized | -11 | +10 | -35
Table 4.1. Summary of the results obtained using a single ramp or a ramp + plateau (optimized) programming pulse, with respect to the box pulse (reference case).
The corresponding threshold voltage kinetics are shown in figure 4.41. In one case the amplitude of the gate voltage remains constant (Vg=9V) during the kinetic, while in the other case Vg changes between 3V and 9V in order to emulate the 1.5V/µs programming ramp; the drain voltage is kept constant at 4.2V. A very small impact of the tunnel oxide thickness on the threshold voltage shift is observed when a box pulse is applied, in accordance with hot carrier injection theory. The Vt kinetics differ significantly when box and ramp are compared. This is due to the different programming speed. The hot electron injection starts when Vg≈Vd; thus in the case of the box it starts after the first 0.5µs pulse, and after 1µs 80% of the global charge is stored. Using the ramp programming, the Vt characteristic is smoother than in the case of the box pulse, because of the gradual hot carrier injection. In table 4.2 we…
The last technological parameter studied in this chapter was the tunnel oxide thickness. We demonstrated that its variation strongly impacts the Fowler-Nordheim operation, while the channel hot electron programming depends on it only slightly. After these considerations we optimized the silicon nanocrystal memory stack, comparing this device with the Flash floating gate. Concerning the channel hot electron programming operation, the Si-nc cell reached the same performance as the floating gate, which means a 4V programming window in 3.5µs with a ramped gate voltage (1.5V/µs). For the Fowler-Nordheim erase operation an improvement is seen with respect to the floating gate device, owing to the thinner tunnel oxide. The optimized Si-nc cell is erased with a ramped gate (5kV/s) in 200µs, whereas the floating gate reaches a 4V programming window in 500µs.
Table 5.1. Performance comparison achieved for the optimized silicon nanocrystal and floating gate memory cells.
This thesis work concerns an experimental and modeling study of silicon nanocrystal memories, which represent one of the most interesting solutions to replace the floating gate Flash device. The objective of this work is to understand the physical mechanisms that govern the behavior of the silicon nanocrystal cell, in order to optimize the device architecture and compare the results with the standard Flash cell.
In the first chapter we present the economic context, the evolution and the operation of Flash-EEPROM memories. A detailed description of the technology, of the behavior and of the scaling limitations is then provided. To conclude, we present the possible solutions to overcome these problems.
The second chapter presents the experimental setup and the characterization methods used to measure the performance of the silicon nanocrystal memory cell. Moreover, the impact of the main technological parameters, such as the nature of the nanocrystals, the presence of a silicon nitride layer, the channel doping dose and the tunnel oxide thickness, is analyzed. An optimization of the memory cell stack is also proposed in order to compare its performance with that of the floating gate Flash cell.
In the third chapter, the impact of the main technological parameters on reliability (endurance and data retention) is studied. The performance of the silicon nanocrystal memory for applications over an extended temperature range [-40°C; 150°C] is also evaluated, showing for the first time a cell endurance of up to 1 million cycles with a final programming window of 4V. To conclude, the proposed optimized cell is compared with the floating gate Flash cell.
The fourth chapter describes a new dynamic measurement technique for the drain current consumed during hot electron injection. This procedure allows the energy consumption to be evaluated once a programming operation is completed. This method is applied for the first time to floating gate and silicon nanocrystal memory cells. A study of the pulse shapes used during programming and of the impact of the technological parameters is presented in this chapter.
When the nanocrystal size, and consequently the covered area of the cell, is increased, the programming window increases and in particular the Fowler-Nordheim erase operation is improved. We found that, using the standard memory stack, a coverage of 95% would be required to obtain a 4V programming window, a percentage that is not consistent with the operating principle of the silicon nanocrystal cell. In order to improve the programming window and optimize the Si-nc cell stack, we considered the increase of the coupling factor as the key point, as explained in the literature for Flash floating gate memories. Two different recipes were developed in order to obtain silicon nanocrystals with an average size of 9nm and 12nm, which lead to covered areas of 46% and 76% respectively. Moreover, with the optimization of the coupling factor it was possible to decrease the thickness of the ONO layer down to 10.5nm equivalent thickness, which made it possible to increase the vertical electric field during the erase operation. This thickness value was chosen in accordance with the recipes available in the STMicroelectronics production line. - The presence of the Si3N4 layer capping the silicon nanocrystals increases the charge trapping probability and the covered channel area. The coupling factor is increased and therefore the programming window increases as well. In the CDSEM observations we noted that the Si3N4 layer grows around the silicon nanocrystals. In this case it is not possible to confirm whether the improvements obtained in the programming window are due to the presence of the Si3N4 layer or to the increase of the covered area. In figure 6.2 we show the results concerning the programming window obtained using samples with different silicon nanocrystal sizes and with the Si3N4 layer. In this case we can consider that the improvement obtained depends mainly on the covered area and only weakly on the increase of the charge trapping probability. Even if the presence of the Si3N4 layer is useful to improve the programming window, we decided to avoid this process step in order to minimize parasitic charge trapping effects.
230,124
[ "18756" ]
[ "199957" ]
01759914
en
[ "math" ]
2024/03/05 22:32:13
2018
https://inria.hal.science/hal-01759914/file/AnalysisAndHomOfBidModel.pdf
Annabelle Collin Sébastien Imperiale Mathematical analysis and 2-scale convergence of a heterogeneous microscopic bidomain model The aim of this paper is to provide a complete mathematical analysis of the periodic homogenization procedure that leads to the macroscopic bidomain model in cardiac electrophysiology. We consider space-dependent and tensorial electric conductivities as well as space-dependent physiological and phenomenological non-linear ionic models. We provide the nondimensionalization of the bidomain equations and derive uniform estimates of the solutions. The homogenization procedure is done using 2-scale convergence theory, which enables us to study the behavior of the non-linear ionic models in the homogenization process. 1 Introduction Cardiac electrophysiology describes and models chemical and electrical phenomena taking place in the cardiac tissue. Given the large number of related pathologies, there is an important need for understanding these phenomena. As illustrated in Figure 1, there are two modeling scales in cardiac electrophysiology. The modeling at the microscopic scale aims at producing a detailed description of the origin of the electric wave in the cells responsible for the heart contraction. The modeling at the macroscopic scale -deduced from the microscopic one using asymptotic techniques -describes the propagation of this electrical wave in the heart. One of the most popular mathematical models in cardiac electrophysiology is the bidomain model, introduced by [START_REF] Tung | A bi-domain model for describing ischemic myocardial d-c potentials[END_REF] and described in detail in the reference textbooks [START_REF] Sachse | Computational Cardiology: Modeling of Anatomy, Electrophysiology and Mechanics[END_REF], [START_REF] Sundnes | Computing the Electrical Activity in the Heart[END_REF] and [START_REF] Pullan | Mathematically Modeling the Electrical Activity of the Heart[END_REF]. At the microscopic scale, this model is based upon the description of electrical and chemical quantities in the cardiac muscle. The latter is segmented into the intra- and the extra-cellular domains -hence the name of the model. These two domains are separated by a membrane where electric exchanges occur. A simple variant found in the literature comes from an electroneutrality assumption -justified by an asymptotic analysis -applied to the Nernst-Planck equations, see for example [START_REF] Mori | A three-dimensional model of cellular electrical activity[END_REF] and [START_REF] Mori | From three-dimensional electrophysiology to the cable model: an asymptotic study[END_REF].
This variant leads to partial differential equations whose unknowns are intra-and extra-cellular electric potentials coupled with non linear ordinary differential equations called ionic models at the membrane. They represent the transmembrane currents and other cellular ionic processes. Many non-linear ionic models exist in the literature and can be classified into two categories: physiological models, see for instance [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF][START_REF] Noble | A modification of the Hodgkin-Huxley equation applicable to Purkinje fiber action and pacemaker potentials[END_REF][START_REF] Luo | A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes[END_REF][START_REF] Courtemanche | Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model[END_REF] and phenomenological models, see for example [START_REF] Mitchell | A two-current model for the dynamics of cardiac membrane[END_REF][START_REF] Nagumo | An active pulse transmission line stimulating nerve axon[END_REF][START_REF] Fitzhugh | Impulses and physiological states in theoretical models of nerve membrane[END_REF]. See also [START_REF] Keener | Mathematical Physiology[END_REF] and [START_REF] Sachse | Computational Cardiology: Modeling of Anatomy, Electrophysiology and Mechanics[END_REF] as reference textbooks on the matter. The choice of an adapted model is based on the type of considered cardiac cells (ventricles, atria, Purkinje fibers, . . . ) but also on the desired algorithm complexity (in general phenomenological models are described with less parameters). From the mathematical standpoint, existence and uniqueness analysis for different ionic models is given in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF]. A homogenization procedure allows for the deduction of the macroscopic behaviors from the microscopic ones and leads to the equations of the macroscopic bidomain model. Concerning the mathematical point of view, this homogenization procedure is given formally in [START_REF] Neu | Homogenization of syncytial tissues[END_REF] or more recently in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Richardson | Derivation of the bidomain equations for a beating heart with a general microstructure[END_REF]. In [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] and [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], it is proven using Γ-convergence. 
The existence and the uniqueness of a solution for the bidomain model at the macroscopic scale have been studied for different ionic models in the literature [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Sanfelici | Convergence of the Galerkin approximation of a degenerate evolution problem in electrocardiology[END_REF][START_REF] Veneroni | Reaction-diffusion systems for the macroscopic bidomain model of the cardiac electric field[END_REF][START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF]. The aim of this paper is to fill a gap in the literature by providing a complete mathematical analysis based on 2-scale convergence theory of the homogenization procedure that leads to the macroscopic bidomain model. Our analysis is exhaustive in the sense that we provide existence and uniqueness results, nondimensionalization of the equations and 2-scale convergence results -in particular, for the non-linear terms supported on the membrane surfacein the same mathematical framework. To anticipate meaningful modeling assumptions, we consider that the electric conductivities are tensorial and space varying at the microscopic scale. We also consider ionic models of both types (physiological and phenomenological) that may vary smoothly in space (in order to consider ventricular or atrial cells for instance). We carefully introduce the various standard assumptions satisfied by the ionic terms and discriminate the models compatible with our analysis. We are convinced that this work will further allow the analysis of more complex models by laying the ground of the bidomain equations 2-scale analysis. More precisely, among the modeling ingredients that could fit in our context, one could consider: heterogeneous concentrations of ionic species inside the cells, influences of heart mechanical deformations [START_REF] Richardson | Derivation of the bidomain equations for a beating heart with a general microstructure[END_REF][START_REF] Göktepe | Electromechanics of the heart: a unified approach to the strongly coupled excitation-contraction problem[END_REF][START_REF] Corrado | Identification of weakly coupled multiphysics problems. application to the inverse problem of electrocardiology[END_REF], gap junctions [START_REF] Hand | Homogenization of an electrophysiological model for a strand of cardiac myocytes with gap-junctional and electric-field coupling[END_REF] and the cardiac microscopic fiber structure in the context of local 2-scale convergence [START_REF] Briane | Three models of non periodic fibrous materials obtained by homogenization[END_REF][START_REF] Ptashnyk | Multiscale modelling and analysis of signalling processes in tissues with non-periodic distribution of cells[END_REF]. The paper is organized as follows. • In Section 2, we describe the considered heterogeneous microscopic bidomain model and review the main ionic models. Depending on how they were derived, we organize them into categories. This categorization is useful for the existence (and uniqueness) analysis. 
• Although it is not the main focus of the article, we present -in Section 3 -existence and uniqueness results of the heterogeneous microscopic bidomain model. The proof -given in Appendix 5 -uses the Faedo-Galerkin approach. The originality of the proposed strategy is the reformulation of the microscopic equations as a scalar reaction-diffusion problem, see Section 3.1. Such an approach is inspired by the macroscopic analysis done in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and from the analysis of an electroporation model given in [START_REF] Kavian | Classical" electropermeabilization modeling at the cell scale[END_REF]. Then, before stating the existence and uniqueness theorems of Section 3.3, we present and discuss -see Section 3.2 -in detail all the mathematical assumptions required. Finally, in Section 3.4, we explain how the solutions of our original problem can be recovered by a post-processing of the scalar reaction-diffusion problem solutions. • In Section 4, the homogenization process of the heterogeneous microscopic bidomain model is given. It relies on the underlying assumption that the medium is periodic and uses the 2-scale convergence theory (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]). In preliminary steps, we provide a nondimensionalization of the bidomain equations, see Section 4.1. In order to mathematically analyze the micro- and macroscopic scales of the model and to consider the time-dependence of the system, we develop -in Section 4.2 -adapted uniform estimates. Our strategy can be extended to other problems as for instance the study of cell electroporation. Then the 2-scale convergence theory is applied in order to obtain the macroscopic bidomain equations. The analysis is done in three steps: -In Section 4.3, we give the mathematical framework required for the 2-scale convergence. Among the standard results of the 2-scale convergence theory that we recall, we provide less standard properties by giving 2-scale convergence results on surfaces. -In Section 4.4, the 2-scale convergence is used and the limit homogenized problem is given. It corresponds to a 2-scale homogenized model. A specific care is taken for the convergence analysis of the non-linear terms and it represents one of the most technical points of the presented approach. -Finally, in Section 4.5, the two-scale homogenized model is decoupled and a macroscopic bidomain equation is recovered. This differs from the Γ-convergence works mentioned above, in which a proof of the homogenization process is proposed using Γ-convergence for a specific ionic model. This technique is well suited when the model is naturally described as the minimization of a convex functional, which is not the case for all the physiological ionic models. 2 Microscopic bidomain model In this section, we give a short description of the heterogeneous microscopic bidomain model and the considered ionic models. The cardiac muscle is decomposed into two parts. We denote by Ω ⊂ R 3 the volume of the heart, Ω i the intracellular region and Ω e the extracellular region. Physiologically, the cells are connected by many gap junctions therefore, geometrically, we assume that Ω i and Ω e are two connected domains with Lipschitz boundary verifying Ω = Ω e ∪ Ω i and Ω e ∩ Ω i = ∅. (1) The subscripts i and e are used to distinguish the intra- and extracellular quantities, respectively, and α to refer to either of them indifferently. We suppose that the membrane Γ m = ∂Ω e ∩ ∂Ω i is regular and non-empty. We define n i and n e as the unit normal vectors pointing from Ω i and Ω e , respectively, to the exterior. The following microscopic bidomain model is studied for time t ∈ [0, T ] ∇ x • ( σ α ∇ x u α ) = 0 in Ω α , σ i ∇ x u i • n i = σ e ∇ x u e • n i on Γ m , -σ i ∇ x u i • n i = C m ∂V m /∂t + I tot ion on Γ m , V m = u i -u e on Γ m , (2) where u i and u e are electric potentials, C m the membrane capacitance and I tot ion an electrical current depending on ionic activities at the membrane. The conductivities σ α are assumed to be tensorial and depend on x in order to take various modeling assumptions into account.
For example, this general form of the conductivities allows us to consider: the dependence of ionic concentrations (remark that a first approximation is to consider space-wise constant ionic concentrations); the heart mechanical deformations [START_REF] Göktepe | Electromechanics of the heart: a unified approach to the strongly coupled excitation-contraction problem[END_REF][START_REF] Corrado | Identification of weakly coupled multiphysics problems. application to the inverse problem of electrocardiology[END_REF] or a complex model of gap junctions (see [START_REF] Hand | Homogenization of an electrophysiological model for a strand of cardiac myocytes with gap-junctional and electric-field coupling[END_REF]). In order to close the problem, we need to prescribe adequate boundary conditions on ∂Ω, the external boundary of the domain. We assume that no electric current flows out of the heart σ α ∇ x u α • n α = 0, ∂Ω α ∩ ∂Ω. (3) Finally, one can observe that Equations ( 2) and (3) define u i or u e up to the same constant. Therefore, we choose to impose Γm u e dγ = 0. We now describe the term I tot ion which appears in [START_REF] Aliev | A simple two-variable model of cardiac excitation[END_REF]. In terms of modeling, action potentials are produced as a result of ionic currents that pass across the cell membrane, triggering a depolarization or repolarization of the membrane over time. The currents are produced by the displacement of ionic species across the membrane through ionic channels. The channels open and close in response to various stimuli that regulate the transport of ions across the membrane. The cell membrane can be modeled as a combined resistor and capacitor, C m ∂V m ∂t + I tot ion . The ionic current I tot ion is decomposed into two parts, I tot ion = I ion -I app , where the term I app corresponds to the external stimulus current. Historically, the first action potential model is the Hodgkin-Huxley model [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF]. In order to understand the complexity of physiological models, we give a brief description of this model -the most important model in all of the physiological theory see [START_REF] Keener | Mathematical Physiology[END_REF] -originally formulated for neurons. The transmembrane current I ion proposed by the Hodgkin-Huxley model is I ion = I N a +I K +I l , where I N a is the sodium current, I K the potassium current and I l the leakage current which concerns various and primarily chloride ions. The currents are determined for k = N a, K, l by I k = g k (V m -E k ), where g k is the conductance and E k , the equilibrium voltage. The conductance g l is supposed to be constant and the other conductances are defined by g N a = m 3 hḡ N a , g K = n 4 ḡK , (5) where ḡNa and ḡK are the maximal conductances of the sodium and potassium currents, respectively. The dimensionless state variables m, n and the inactivation variable h satisfy the following ordinary differential equations ∂ t w = α w (V m )(1 -w) -β w (V m )w, w = m, n, h, (6) where α w and β w are the voltage-dependent rate constants which control the activation and the inactivation of the variable w. In Chapter 4 of [START_REF] Keener | Mathematical Physiology[END_REF], α w and β w both have the following form C 1 e (Vm-V0)/C2 + C 3 (V m -V 0 ) 1 + C 4 e (Vm-V0)/C5 , (7) where C i , i = 1, • • • , 5 and V 0 are the model parameters. 
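To make the structure of (5)-(7) concrete, the following Python sketch advances a single gating variable according to ∂ t w = α w (V m )(1 -w) -β w (V m )w with rate functions of the generic form (7); the numerical constants C 1 -C 5 , V 0 and the clamped membrane potential are illustrative placeholders, not the Hodgkin-Huxley calibration.

```python
import numpy as np

def rate(vm, c1, c2, c3, c4, c5, v0):
    """Voltage-dependent rate of the generic form (7):
    (C1*exp((Vm - V0)/C2) + C3*(Vm - V0)) / (1 + C4*exp((Vm - V0)/C5))."""
    return (c1 * np.exp((vm - v0) / c2) + c3 * (vm - v0)) / \
           (1.0 + c4 * np.exp((vm - v0) / c5))

def gate_step(w, vm, dt, alpha_pars, beta_pars):
    """One explicit Euler step of dw/dt = alpha(Vm)*(1 - w) - beta(Vm)*w."""
    a = rate(vm, *alpha_pars)
    b = rate(vm, *beta_pars)
    return w + dt * (a * (1.0 - w) - b * w)

# Illustrative (non-physiological) parameter sets: (C1, C2, C3, C4, C5, V0).
alpha_pars = (0.1, 10.0, 0.0, 1.0, -10.0, -40.0)
beta_pars = (4.0, -18.0, 0.0, 0.0, 1.0, -65.0)

w, dt = 0.05, 1e-2                     # initial gate value, time step (ms)
for _ in range(1000):                  # 10 ms with the membrane clamped at -20 mV
    w = gate_step(w, -20.0, dt, alpha_pars, beta_pars)
print(f"gate value after 10 ms at Vm = -20 mV: {w:.3f}")
```

Whatever the rate constants, the structure of (6) keeps w between 0 and 1 once it starts there, which is the property exploited later for the physiological models.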
An adaptation of the Hodgkin-Huxley model to the cardiac action potential was suggested by D. Noble in 1962 [START_REF] Noble | A modification of the Hodgkin-Huxley equation applicable to Purkinje fiber action and pacemaker potentials[END_REF]. Many physiological models have been proposed ever since: for the ventricular cells [START_REF] Luo | A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes[END_REF][START_REF] Tusscher | A model for human ventricular tissue[END_REF][START_REF] Tusscher | Alternans and spiral breakup in a human ventricular tissue model[END_REF][START_REF] Grandi | A novel computational model of the human ventricular action potential and Ca transient[END_REF][START_REF] O'hara | Simulation of the undiseased human cardiac ventricular action potential: Model formulation and experimental validation[END_REF] and for the atrial cells [START_REF] Grandi | A novel computational model of the human ventricular action potential and Ca transient[END_REF][START_REF] Courtemanche | Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model[END_REF][START_REF] Nygren | Mathematical model of an adult human atrial cell the role of K+ currents in repolarization[END_REF][START_REF] Maleckar | Mathematical simulations of ligand-gated and cell-type specific effects on the action potential of human atrium[END_REF][START_REF] Koivumäki | Impact of sarcoplasmic reticulum calcium release on calcium dynamics and action potential morphology in human atrial myocytes: A computational study[END_REF][START_REF] Grandi | Human atrial action potential and Ca2+ model: sinus rhythm and chronic atrial fibrillation[END_REF][START_REF] Wilhelms | Benchmarking electrophysiological models of human atrial myocytes[END_REF]]. We refer for example to [START_REF] Sachse | Computational Cardiology: Modeling of Anatomy, Electrophysiology and Mechanics[END_REF] for a 2004 survey. All the cited models are physiological models. Other models -called phenomenological models -are approximations of the ionic channels behavior. These models are intended to describe the excitability process with a lower complexity. With only one (or few) additional variable(s) denoted by w and called the state variable(s) -and then only one (or few) ordinary differential equation(s) -these models are able to reproduce the depolarization and/or the repolarization of the membrane. The FitzHugh-Nagumo (FHN) model [START_REF] Fitzhugh | Impulses and physiological states in theoretical models of nerve membrane[END_REF][START_REF] Nagumo | An active pulse transmission line stimulating nerve axon[END_REF], the Roger and McCulloch model [START_REF] Rogers | A collocation-Galerkin finite element model of cardiac action potential propagation[END_REF] and the Aliev and Panfilov model [START_REF] Aliev | A simple two-variable model of cardiac excitation[END_REF] can be written as follows I ion (V m , w) = k(V m -V min )(V m -V max )(V m -V gate ) + f 2 (V m ) w, ∂ t w + g(V m , w) = 0, (8) with g(V m , w) = δ(γ g 1 (V m ) + w), and where δ, γ, k and V gate are positive constants. The parameters V min and V max are reasonable potential ranges for V m . The functions f 2 and g 1 (see Assumption 7 for the notational choice) depend on the model. 
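As an illustration of the phenomenological structure (8), the sketch below integrates a space-clamped (0D) membrane model of FitzHugh-Nagumo type with a cubic reaction and a linear recovery; the particular choices f 2 ≡ 1 and g 1 (V m ) = -(V m -V min ), as well as all numerical values and the stimulus, are illustrative assumptions rather than parameters taken from the cited references.

```python
# Illustrative constants (not taken from the cited models).
k, delta, gamma = 8.0, 0.01, 2.0
v_min, v_max, v_gate = 0.0, 1.0, 0.13

def i_ion(v, w):
    """Cubic transmembrane current of the form (8), with f2 = 1."""
    return k * (v - v_min) * (v - v_max) * (v - v_gate) + w

def g(v, w):
    """Recovery dynamics of the form (8), with the choice g1(v) = -(v - v_min)."""
    return delta * (-gamma * (v - v_min) + w)

def run(t_end=400.0, dt=0.05):
    v, w = v_min, 0.0
    n_steps = int(t_end / dt)
    for n in range(n_steps):
        i_app = 0.5 if n * dt < 1.0 else 0.0   # brief supra-threshold stimulus
        v += dt * (i_app - i_ion(v, w))        # C_m normalized to 1
        w += dt * (-g(v, w))
    return v, w

v_end, w_end = run()
print(f"state after one excitation and recovery: V = {v_end:.3f}, w = {w_end:.3f}")
```

With these (made-up) values the trajectory fires once after the stimulus, repolarizes through the slow growth of w, and relaxes back toward the resting state, which is the qualitative behavior the phenomenological models are designed to reproduce.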
A more widely accepted phenomenological model for the ventricular action potential is the Mitchell-Schaeffer model presented in [START_REF] Mitchell | A two-current model for the dynamics of cardiac membrane[END_REF], I ion = (w/τ in ) (V m -V min ) 2 (V m -V max )/(V max -V min ) -(V m -V min )/(τ out (V max -V min )), ∂ t w + g(V m , w) = 0, (9) with g(V m , w) = w/τ open -1/(τ open (V max -V min ) 2 ) if V m ≤ V gate , and g(V m , w) = w/τ close if V m > V gate , and with τ open , τ close , τ in , τ out and V gate positive constants. Due to its lack of regularity, the mathematical analysis of this model is complicated. A straightforward simplification consists in using a regularized version of this model. Following [START_REF] Djabella | A two-variable model of cardiac action potential with controlled pacemaker activity and ionic current interpretation[END_REF], it reads g(V m , w) = ( 1/τ close + (τ close -τ open )/(τ close τ open ) h ∞ (V m ) ) ( w -h ∞ (V m )/(V max -V min ) 2 ) (10) and h ∞ (V m ) = -(1/2) tanh( (V m -V min )/((V max -V min )η gate ) -V gate /η gate ) + 1/2, where η gate is a positive constant. This regularized version is considered in what follows in order to prove mathematical properties of the bidomain problem. Remark 1. We consider non-normalized versions of the ionic models. For the considered phenomenological models, we expect to have V min ≤ V m ≤ V max , although it cannot be proven without strong assumptions on the source term. Concerning the gating variable, we expect to have 0 ≤ w ≤ 1 for FHN-like models [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF] and, following a choice commonly done in the literature of the Mitchell-Schaeffer model, we expect to have 0 ≤ w ≤ 1/(V max -V min ) 2 for the Mitchell-Schaeffer model [START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF]. This last inequality can be proven (see Assumption 10 below and the proof of Lemma 2 in Appendix). Finally, for all physiological models, we expect some bounds -from below and above -on the gating variable(s) as natural consequences of the structure of Equation (6). Note that in what follows, we consider the bidomain equations with only one gating variable w. All the results presented below can be extended to the case where the ionic term I ion depends on several gating variables. In the next section, the following microscopic bidomain model is studied (it corresponds to System (2) coupled with an ionic model and with the boundary conditions (3) and (4)), for all time t ∈ [0, T ], ∇ x • ( σ α ∇ x u α ) = 0 in Ω α , σ i ∇ x u i • n i = σ e ∇ x u e • n i on Γ m , -σ i ∇ x u i • n i = C m ∂V m /∂t + I tot ion (V m , w) on Γ m , V m = u i -u e on Γ m , ∂ t w = -g(V m , w) on Γ m , σ i ∇ x u i • n i = 0 on ∂Ω i ∩ ∂Ω, σ e ∇ x u e • n e = 0 on ∂Ω e ∩ ∂Ω, Γm u e dγ = 0. (11) 3 Analysis of the microscopic bidomain model In this section, the analysis (existence and uniqueness) of the heterogeneous microscopic bidomain model presented in Section 2 is proposed. As explained before, we assume that Ω i and Ω e are connected sets and that they have a Lipschitz boundary.
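Returning briefly to the ionic model (9)-(10) before the functional setting is introduced: the regularized Mitchell-Schaeffer right-hand sides translate directly into code. The Python sketch below only evaluates the membrane functions I ion , h ∞ and g on normalized potentials; all parameter values are illustrative placeholders and not those of the cited publications.

```python
import numpy as np

# Illustrative, normalized parameters (placeholders, not the published values).
tau_in, tau_out = 0.3, 6.0
tau_open, tau_close = 120.0, 150.0
v_min, v_max = 0.0, 1.0
v_gate, eta_gate = 0.13, 0.1

def i_ion(v, w):
    """Mitchell-Schaeffer transmembrane current, cf. (9)."""
    dv = v_max - v_min
    return (w / tau_in) * (v - v_min) ** 2 * (v - v_max) / dv \
           - (v - v_min) / (tau_out * dv)

def h_inf(v):
    """Smoothed gate target of the regularized model, cf. (10)."""
    dv = v_max - v_min
    return -0.5 * np.tanh((v - v_min) / (dv * eta_gate) - v_gate / eta_gate) + 0.5

def g(v, w):
    """Regularized gate dynamics, cf. (10); the gate evolves as dw/dt = -g(v, w)."""
    dv = v_max - v_min
    rate = 1.0 / tau_close + (tau_close - tau_open) / (tau_close * tau_open) * h_inf(v)
    return rate * (w - h_inf(v) / dv ** 2)

# Sanity check: near rest (v = v_min, w = h_inf(v_min)/dv^2) both right-hand sides
# vanish, and for v <= v_gate the rate reduces to roughly 1/tau_open as in (9).
w_rest = h_inf(v_min) / (v_max - v_min) ** 2
print(i_ion(v_min, w_rest), g(v_min, w_rest))
```

The smooth h ∞ is what restores the regularity needed for the analysis: it replaces the discontinuous switch at V gate in (9) while keeping the two limiting relaxation rates 1/τ open and 1/τ close .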
Our analysis involves the use of standard L p Banach spaces and H s Hilbert spaces. Apart from the use of (•, •) D to denote the L 2 scalar product on a domain D, we use standard notations found in many textbooks on functional analysis. In what follows, we use the trace of u i and u e on the boundary. Therefore to work in the adequate mathematical framework we introduce the Hilbert (trace) space H 1/2 (∂Ω α ) whose dual (the space of continuous linear functionals) is H -1/2 (∂Ω α ). Using the fact that the boundary Γ m is a subdomain of ∂Ω α and following the notation of [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF] (Chapter 3), we introduce the Hilbert space H 1/2 (Γ m ) = u| Γm , u ∈ H 1/2 (∂Ω i ) = u| Γm , u ∈ H 1/2 (∂Ω e ) . Note that the two definitions of H 1/2 (Γ m ) coincide since there exists a continuous extension operator from H 1/2 (Γ m ) to H 1/2 (∂Ω α ) (see the proof of Theorem 4.10 in [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF]). We denote by H -1/2 (Γ m ) the dual space and the duality pairing is denoted •, • Γm . For each j ∈ H -1/2 (Γ m ), the standard dual norm is defined by j H -1/2 (Γm) = sup 0 =v∈H 1/2 (Γm) | j, v Γm | v H 1/2 (Γm) . It is standard to assume that some positivity and symmetry properties are satisfied by the parameters of the system. Assumption 1. The capacitance satisfies C m > 0 and the diffusion tensors σ α belong to [L ∞ (Ω α )] 3×3 and are symmetric, definite, positive and coercive, i.e. there exists C > 0 such that σ α ρ, ρ Ωα ≥ C ρ 2 L 2 (Ωα) , for all ρ ∈ L 2 (Ω α ) 3 . This implies that • L 2 σα : L 2 (Ω α ) → R + ρ → σ α ρ, ρ Ωα defines a norm in L 2 (Ω α ). Elimination of the quasi-static potential unknown Following an idea developed in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] for the bidomain equation at the macroscopic level or in [START_REF] Kavian | Classical" electropermeabilization modeling at the cell scale[END_REF] for an electroporation model at the microscopic level, we rewrite System (11) by eliminating the unknown electric potentials u α in Ω α and writing an equation for V m = u iu e on Γ m . Note that the equation for the gating variable w is kept because the only electric quantity involved is V m along Γ m . We introduce the linears operators T i and T e that solve interior Laplace equations in Ω i and Ω e respectively. First, we define T i : H 1/2 (Γ m ) → H -1/2 (Γ m ) which is given formally by T i (v) := σ i ∇ x v i • n i along Γ m , where v i is the unique solution of      ∇ x • ( σ i ∇ x v i ) = 0 Ω i , v i = v Γ m , σ i ∇ x v i • n i = 0 ∂Ω ∩ ∂Ω i . ( 12 ) Since the problem above is well-posed (it is elliptic and coercive because Γ m = ∅, see Theorem 4.10 of [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF]), the linear functional T i (v) can be rigorously defined by, for all w ∈ H 1/2 (Γ m ), T i (v), w Γm = ( σ i ∇ x v i , ∇ x w i ) Ωi , where v i is given by ( 12) and w i ∈ H 1 (Ω i ) is the unique solution of      ∇ x • ( σ i ∇ x w i ) = 0 Ω i , w i = w Γ m , σ i ∇ x w i • n i = 0 ∂Ω ∩ ∂Ω i . (13) The operator T i satisfies the properties summed up in the following proposition. Proposition 1. 
If Assumption 1 holds, we have for all (v, w) ∈ [H 1/2 (Γ m )] 2 , T i (v), w Γm = T i (w), v Γm , T i (v), v Γm = ( σ i ∇ x v i , ∇ x v i ) Ωi ≥ 0, T i (v) H -1/2 (Γm) ≤ C v H 1/2 (Γm) , where C is a positive scalar depending only on σ i and the geometry. Proof. By the definition of T i and v i , we have T i (v), w Γm = ( σ i ∇ x v i , ∇ x w i ) Ωi . Moreover from the weak form of Problem [START_REF] Cioranescu | The periodic unfolding method in domains with holes[END_REF], one can deduce that ( σ i ∇ x w i , ∇ x v i ) Ωi = T i (w), v Γm , hence the first relation of the proposition. The second relation is obtained by setting w = v in the previous equation and using the fact that σ i is a definite positive tensor (Assumption 1). Moreover, since σ i is L ∞ (Assumption 1), we also have sup w =0 | T i (v), w Γm | w H 1/2 (Γm) ≤ C v i H 1 (Ωi) w i H 1 (Ωi) w H 1/2 (Γm) . The third relation is then a consequence of stability results on elliptic problems with mixed boundary conditions (again see Theorem 4.10 of [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF])) that map boundary data v and w to, respectively v i and w i , i.e. v i H 1 (Ωi) ≤ C v H 1/2 (Γm) . Corollary 1. There exists c > 0 such that for all v ∈ H 1/2 (Γ m ), we have T i (v), v Γm + Γm v dγ 2 ≥ c v 2 H 1/2 (Γm) . Proof. This result is obtained using the second relation of Proposition 1 and a Poincaré -Wirtinger type inequality. The operator T i is used in order to substitute the first equation with α = i of System (11) into the third equation of the same system. This is possible since u i satisfies a static equation inside Ω i . The same argument holds for the extra-cellular potential u e , therefore, for the same reason we introduce the operator T e : H -1/2 (Γ m ) → H 1/2 (Γ m ), which is defined by T e (j) := v e along Γ m , where v e ∈ H 1 (Ω e ) is the unique solution of                    ∇ x • ( σ e ∇ x v e ) = 0 Ω e , σ e ∇ x v e • n i = j - j, 1 Γm |Γ m | Γ m , σ e ∇ x v e • n e = 0 ∂Ω ∩ ∂Ω e , Γm v e dγ = 0. ( 14 ) Similar to the operator T i , the operator T e satisfies some properties which are summed up in the following proposition. Proposition 2. If Assumption 1 holds, we have for all (j, k) ∈ [ H -1/2 (Γ m )] 2 k, T e (j) Γm = j, T e (k) Γm , j, T e (j) Γm = -( σ e ∇ x v e , ∇ x v e ) Ωe ≤ 0, T e (j) H 1/2 (Γm) ≤ C j H -1/2 (Γm) , where C is a positive scalar depending only on σ e and the geometry. Proof. For all k ∈ H -1/2 (Γ m ), we define w e ∈ H 1 (Ω e ) such that,                    ∇ x • ( σ e ∇ x w e ) = 0 Ω e , σ e ∇ x w e • n i = k - k, 1 Γm |Γ m | Γ m , σ e ∇ x w e • n e = 0 ∂Ω ∩ ∂Ω e , Γm w e dγ = 0. By the definition of v e and w e , we deduce the two following equalities ( σ e ∇ x v e , ∇ x w e ) Ωe = -j - j, 1 Γm |Γ m | , T e (k) Γm (15) and ( σ e ∇ x w e , ∇ x v e ) Ωe = -k - k, 1 Γm |Γ m | , T e (j) Γm . The first relation of the proposition is obtained by noticing that Γm T e (j) dγ = Γm T e (k) dγ = 0 hence j, 1 Γm |Γ m | , T e (k) Γm = k, 1 Γm |Γ m | , T e (j) Γm = 0. The second relation of the proposition is obtained by setting k = j in (15) and using the fact that σ e is a definite positive tensor. 
To prove the continuity we first notice that ∇ x v e 2 L 2 (Ωe) ≤ C j H -1/2 (Γm) T e (j) H 1/2 (Γm) = C j H -1/2 (Γm) v e H 1/2 (Γm) , since σ e ∈ [L ∞ (Ω α )] 3×3 (        C m ∂ t V m + I tot ion (V m , w) = -T i (u i ) Γ m , u e = T e (T i (u i )) Γ m , ∂ t w = -g(V m , w) Γ m , (17) where by definition V m = u iu e . Using the second equation of [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF], we obtain u i -T e (T i (u i )) = V m Γ m . (18) We can prove (see Lemma 1) that the operator T := I -T e T i : H 1/2 (Γ m ) → H 1/2 (Γ m ) has a bounded inverse (I stands for the identity operator) and we therefore introduce the operator A : H 1/2 (Γ m ) → H -1/2 (Γ m ) v → T i (I -T e T i ) -1 v and substitute the term u i in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF]. This implies that if Assumption 1 holds, solutions of [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] satisfy C m ∂ t V m + I tot ion (V m , w) = -A(V m ) Γ m , ∂ t w = -g(V m , w) Γ m . ( 19 ) The converse is true. Solutions of ( 19) can be used to recover solutions of [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] as shown in Section 3.4. The properties of the operator A are summed up in the following proposition. Proposition 3. If Assumption 1 holds, the linear operator A satisfies for all (v, ṽ) ∈ [H 1/2 (Γ m )] 2 , A(v), ṽ Γm = A(ṽ), v Γm and A(v), v Γm ≥ 0. Moreover, there exist constants c and C such that, for all (v, ṽ) ∈ [H 1/2 (Γ m )] 2 , | A(v), ṽ Γm | ≤ C v H 1/2 (Γm) ṽ H 1/2 (Γm) (20) and A(v), v Γm + Γm v dγ 2 ≥ c v 2 H 1/2 (Γm) . (21) Proof. To simplify the proof, we define θ ∈ H 1/2 (Γ m ) and θ ∈ H 1/2 (Γ m ) as (I -T e T i )(θ) := v and (I -T e T i )( θ) := ṽ. Using Propositions 1 and 2, we obtain A(v), ṽ Γm = A(v), (I -T e T i )( θ) Γm = (I -T i T e )A(v), θ Γm . Since by definition A = T i (I -T e T i ) -1 and using (I -T i T e )T i = T i (I -T e T i ), we deduce A(v), ṽ Γm = T i (v), θ Γm = T i ( θ), v Γm . ( 22 ) The symmetry of A(•), • is then a consequence of the definition of θ as we have T i ( θ) = A(ṽ). From [START_REF] Evans | Partial differential equations[END_REF], we can also deduce the non-negativity of the bilinear form by choosing ṽ ≡ v. We find that A(v), v Γm = T i (θ), (I -T e T i )(θ) Γm , (23) which is non-negative thanks to the non-negativity of T i and the non-positivity of T e . The continuity [START_REF] Djabella | A two-variable model of cardiac action potential with controlled pacemaker activity and ionic current interpretation[END_REF] is a direct consequence of the third equation of Proposition 1 and Lemma 1 (i.e. T := I -T e T i has a bounded inverse). Now remark that by the definition of T e , we have, for all j ∈ H -1/2 (Γ m ), Γm T e (j) dγ = 0 ⇒ Γm v dγ = Γm T (θ) dγ = Γm θ dγ. ( 24 ) Using the equalities above and (23), we find A(v), v Γm + Γm v dγ 2 ≥ T i (θ), θ Γm + Γm θ dγ 2 . Corollary 1 shows the existence if of a positive scalar c such that, for all v ∈ H 1/2 (Γ m ), A(v), v Γm + Γm v dγ 2 ≥ c θ H 1/2 (Γm) = c T -1 (v) H 1/2 (Γm) . Inequality [START_REF] Dragomir | Some Gronwall type inequalities and applications[END_REF] is then a consequence of the boundedness of T given by Lemma 1. 
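The algebraic structure exploited in Proposition 3 only uses that T i is symmetric positive semidefinite and T e symmetric negative semidefinite, so it can be observed on a finite-dimensional analogue. In the Python sketch below the two operators are replaced by random symmetric matrices with the right signs (not an actual discretization of (12)-(14)), and the symmetry and non-negativity of the resulting A = T i (I -T e T i ) -1 are checked numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Random symmetric stand-ins: Ti positive (semi)definite, Te negative (semi)definite.
Bi = rng.standard_normal((n, n))
Be = rng.standard_normal((n, n))
Ti = Bi @ Bi.T + 0.1 * np.eye(n)   # small ridge only to keep the example well conditioned
Te = -(Be @ Be.T + 0.1 * np.eye(n))

# Reduced operator acting on the transmembrane potential: A = Ti (I - Te Ti)^{-1}.
T = np.eye(n) - Te @ Ti
A = Ti @ np.linalg.inv(T)

# Finite-dimensional analogues of Proposition 3 (up to round-off):
# A is symmetric and <A v, v> >= 0 for every v.
print("symmetry defect:", np.abs(A - A.T).max())
print("smallest eigenvalue of sym(A):", np.linalg.eigvalsh(0.5 * (A + A.T)).min())
```

The check mirrors the proof above: the sign of ⟨A v, v⟩ only requires the non-negativity of T i and the non-positivity of T e , not the particular boundary value problems that define them.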
The last step of this section consists in proving the technical lemma regarding the invertibility of the operator I -T e T i , which allows to define the operator A. Lemma 1. The linear operator T := I -T e T i : H 1/2 (Γ m ) → H 1/2 (Γ m ) is bounded and has a bounded inverse. Proof. Using Propositions 1 and 2, we see that the operator T is linear and bounded, and hence continuous. In what follows, we prove that T is injective and then we deduce a lower bound for the norm of T . Finally, we prove that the range of the operator is closed and that its orthogonal is the null space. These two last steps allow to show that the range of T is H 1/2 (Γ m ) and the result follows from the bounded inverse theorem. Step 1: Injectivity of the operator For any v ∈ H 1/2 (Γ m ) such that T (v) = 0, we have T i (v), v Γm = T i (v), T e T i (v) Γm . The first term of the equality is non-negative (Proposition 1) while the second is non-positive thanks to Proposition 2. Therefore, we obtain T i (v), v Γm = 0 and this implies thanks to Proposition 1 that v is constant along Γ m . However, for any constant function c, we have T i (c) = 0 therefore, in our case, T (v) = 0 implies v = T e T i (v) = 0. Step 2: Lower bound for the operator norm For all θ ∈ H 1/2 (Γ m ), we define v ∈ H 1/2 (Γ m ) as T (θ) = v. Then, as written in Equation [START_REF] Göktepe | Electromechanics of the heart: a unified approach to the strongly coupled excitation-contraction problem[END_REF], θ and v have the same average along Γ m and T (θ 0 ) = v 0 where θ 0 := θ - 1 |Γ m | Γm θ dγ and v 0 := v - 1 |Γ m | Γm v dγ. We have T i (θ 0 ), v 0 Γm = T i (θ 0 ), T (θ 0 ) Γm = T i (θ 0 ), θ 0 Γm -T i (θ 0 ), T e T i (θ 0 ) Γm . Since T i is non negative and T e non positive (Proposition 1 and 2), we deduce T i (θ 0 ), θ 0 ≤ T i (θ 0 ) H -1/2 (Γm) v 0 H 1/2 (Γm) ≤ C θ 0 H 1/2 (Γm) v 0 H 1/2 (Γm) , ( 25 ) where C is the continuity constant given by Proposition 1. Using Corollary 1 and the fact that θ 0 has zero average along Γ m , we have c θ 0 2 H 1/2 (Γm) ≤ T i (θ 0 ), θ 0 , therefore using (25), we find θ 0 H 1/2 (Γm) ≤ C c v 0 H 1/2 (Γm) . Finally, we get θ H 1/2 (Γm) ≤ θ 0 H 1/2 (Γm) + 1 |Γ m | Γm θ dγ 1 H 1/2 (Γm) ≤ C c v 0 H 1/2 (Γm) + 1 |Γ m | Γm v dγ 1 H 1/2 (Γm) . This implies that there exists another constant C depending only on the geometry and σ i such that θ H 1/2 (Γm) ≤ C v H 1/2 (Γm) = C T (θ) H 1/2 (Γm) . Step 3: Orthogonal of the operator range Let j ∈ H -1/2 (Γ m ) such that for all v ∈ H 1/2 (Γ m ), j, T (v) Γm = 0. ( 26 ) We choose v = T e (j) in ( 26) and thanks to Propositions 1 and 2, we find j, T e (j) Γm = j, T e T i T e (j) Γm = T i T e (j), T e (j) Γm . The last term is non negative therefore, since T e (j), j Γm is non positive, it should vanish. This implies that j is constant along Γ m (since in [START_REF] Cioranescu | Homogenization in open sets with holes[END_REF], ∇ x v e = 0). Now, we choose v ≡ 1 in ( 26) and we find j, T (v) Γm = 0 ⇒ j, 1 Γm = 0 ⇒ j = 0. We are now ready to give the required assumptions to prove the existence and the uniqueness of System (19). 
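Before listing these assumptions, a small finite-dimensional illustration of how the reduced problem (19) can be advanced in time may help fix ideas. In the sketch below the non-local operator A is replaced by a symmetric positive semidefinite stand-in (a scaled discrete Laplacian on membrane points, which is not the operator defined above), the ionic terms are a simple cubic with a linear gate, and the scheme is implicit in A and explicit in the non-linearities; every numerical value is an illustrative assumption.

```python
import numpy as np

# Stand-in for the reduced system (19): C_m dV/dt + A V + I_ion(V, w) = I_app,
# dw/dt = -g(V, w), sampled on M membrane points.
M = 200
h = 1.0 / M
lap = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
       - np.eye(M, k=M - 1) - np.eye(M, k=-(M - 1))) / h ** 2
A = 0.05 * lap                      # symmetric PSD surrogate for the operator A

c_m, dt, n_steps = 1.0, 0.02, 5000

def i_ion(v, w):
    # illustrative cubic reaction with a linear gate contribution
    return 8.0 * v * (v - 1.0) * (v - 0.13) + w

def g(v, w):
    # illustrative linear recovery dynamics
    return 0.01 * (w - 2.0 * v)

v = np.zeros(M)
w = np.zeros(M)
v[:10] = 0.5                        # local initial excitation, no applied current

lhs_inv = np.linalg.inv(c_m * np.eye(M) + dt * A)   # factor once (small dense sketch)
for _ in range(n_steps):
    v = lhs_inv @ (c_m * v - dt * i_ion(v, w))      # implicit in A, explicit in I_ion
    w = w - dt * g(v, w)

print("range of V at final time:", v.min(), v.max())
```

Treating A implicitly is what keeps the step size governed by the reaction terms rather than by the stiff linear part, the same splitting spirit as the semi-discrete formulations discussed in the following sections.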
Mathematical assumptions for the well-posedness As mentioned in [START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF], no maximum principle has been proven for Problem [START_REF] Courtemanche | Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model[END_REF] (or any linearized version) contrary to standard reaction-diffusion problems. The consequence is that one can not deduce easily bounds in time and space for the electric variable V m . This implies that some specific assumptions are required to guarantee that non-linear terms involving V m are well-defined and that existence results hold. More generally, in this section, the required assumptions to prove existence and/or uniqueness of weak solutions of System ( 19) are presented. First, from the equivalence of the bidomain equations and System [START_REF] Courtemanche | Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model[END_REF] (see Section 3.1), it is clear that our problem has to be completed with the following initial conditions on Γ m V m (•, 0) = u i (•, 0) -u e (•, 0) = V 0 m , w(•, 0) = w 0 . It is sufficient to require the following assumption concerning the initial conditions. Assumption 2. Properties of the initial conditions. V 0 m ∈ L 2 (Γ m ) and w 0 ∈ L 2 (Γ m ). As already said, the ionic term I tot ion is decomposed into two parts I tot ion = I ion -I app , (27) where I app is the applied current and is a function of time and space. We assume the following property concerning the applied current. Assumption 3. Property of the source term. I app ∈ L 2 (Γ m × (0, T )). To represent the variety of the behavior of the cardiac cells, we mathematically define the ionic terms I ion and g by introducing a family of functions on R 2 parametrized by the space variable I ion ( x, •) : R 2 -→ R and g( x, •) : R 2 -→ R. At each fixed x ∈ Ω, the ionic terms I ion and g describe a different behavior that corresponds to a non-linear reaction term. Therefore to further the mathematical analysis, it is assumed that these functions have some regularity in all their arguments. This leads to the following assumption. Assumption 4. Regularity condition. I ion ∈ C 0 (Ω × R 2 ) and g ∈ C 0 (Ω × R 2 ). In the mathematical analysis of macroscopic bidomain equations, several paths have been followed in the literature according to the definition of the ionic current. We summarize below the encountered various cases. Physiological models For these models, one can prove that the gating variable w is bounded from below and above, due to the specific structure of ( 6) and [START_REF] Ammari | Spectroscopic imaging of a dilute cell suspension[END_REF]. To go further in the physiological description, some models consider the concentrations as variables of the system, see for example the Luo-Rudy model [START_REF] Luo | A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes[END_REF]. In [START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF][START_REF] Veneroni | Reaction-diffusion systems for the macroscopic bidomain model of the cardiac electric field[END_REF], such models are considered. 
Phenomenological models a) The FitzHugh like models [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF] The FitzHugh like models have been studied in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF][START_REF] Colli Franzone | Mathematical Cardiac Electrophysiology[END_REF]. In these models, there are no obvious bounds on the gating variable. The FitzHugh-Nagumo model satisfies some good mathematical properties (existence and uniqueness of solutions for arbitrary observation times) whereas the Aliev-Panfilov and MacCulloch model still rise some mathematical difficulties. In particular, no proof of uniqueness of solutions exists in the literature because of the non-linearity in the coupling terms between w and V m . b) The Mitchell-Schaeffer model [START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF] The Mitchell-Schaeffer model has been studied in [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Kunisch | Well-posedness for the Mitchell-Schaeffer model of the cardiac membrane[END_REF][START_REF] Colli Franzone | Mathematical Cardiac Electrophysiology[END_REF]. This model and its regularized version have a very specific structure. First, we will show that the gating variable is bounded from below and above but uniqueness of the solution is a difficult mathematical question which is addressed in [START_REF] Kunisch | Well-posedness for the Mitchell-Schaeffer model of the cardiac membrane[END_REF] for a related ordinary differential (ODE) problem. In what follows, we describe in detail the structures of I ion and g. In all models, some "growth conditions" are required to write the problem in an adequate variational framework. Assumption 5. Growth condition. There exists a scalar C ∞ > 0 such that for all x ∈ Ω and (v, w) ∈ R 2 we have |I ion ( x, v, w)| ≤ C ∞ (|v| 3 + |w| + 1) (28) and |g( x, v, w)| ≤ C ∞ (|v| 2 + |w| + 1). ( 29 ) Remark 2. The inclusion H 1/2 (Γ m ) into L 4 (Γ m ) is continuous, see Proposition 2.4 of [17], therefore by identification of integrable functions with linear forms, there is a continuous inclusion of L 4/3 (Γ m ) into H -1/2 (Γ m ). Moreover if v ∈ L 2 ((0, T ), H 1/2 (Γ m )), w ∈ L 2 ((0, T ) × Γ m ), we have (the dependency w.r.t. x is omitted for the sake of clarity) I ion (v, w) ∈ L 2 ((0, T ), L 4/3 (Γ m )) and g(v, w) ∈ L 2 ((0, T ) × Γ m ). Remark 3. Several mathematical analyses concern only the macroscopic bidomain equations (see [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF] for instance) and these analyses can be extended to the case of the microscopic bidomain equations with some slightly different assumptions on the non-linear terms I ion and g. These assumptions take into account the functional framework in which we have to work. 
Namely, we have to use the trace space H 1/2 (Γ m ) instead of the more standard functional space H 1 (Ω). More precisely in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF], the growth conditions are |I ion ( x, v, w)| ≤ C ∞ (|v| 5 + |w| + 1) and |g( x, v, w)| ≤ C ∞ (|v| 3 + |w| + 1). In that case, it can be shown that I ion (v, w) and g(v, w) are integrable and well defined if v ∈ H 1 (Ω) and w ∈ L 2 (Ω). As mentioned in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF], the growth conditions ( 28) and ( 29) are there to ensure that the functions I ion and g can be used to construct models that are well defined in the variational sense. The growth condition (Assumption 5) is not sufficient to guaranty the existence of solutions of Problem [START_REF] Courtemanche | Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model[END_REF]. Indeed, since I ion can behave like a cubic polynomial for large value of v and the function g could behave as a quadratic polynomial in v, it turns out to be necessary to have a signed condition (see Equations ( 51) -( 54) for more insight). This leads to the following assumption Assumption 6. There exist µ > 0 and C I > 0 such that for all x ∈ Ω and (v, w) ∈ R 2 , we have v I ion ( x, v, w) + µ w g( x, v, w) ≥ -C I ( |v| 2 + |w| 2 + 1). ( 30 ) In general, V m is more regular in space than w due to the presence of the positive operator A. Due to this lack of regularity on the gating variables, some additional assumptions have to hold to carry out the mathematical analysis. Concerning this question two kinds of assumption are proposed in the literature. Roughly speaking, these assumptions depend on if the model is physiological or phenomenological. To be able to refine our analysis depending on the different properties of the models, we make the following assumption. Assumption 7. One of the following assumptions hold a) Global lipschitz property. There exists a positive scalar L g > 0, such that for all x ∈ Ω, (v 1 , v 2 ) ∈ R 2 and (w 1 , w 2 ) ∈ R 2 , |g( x, v 1 , w 1 ) -g( x, v 2 , w 2 )| ≤ L g |v 1 -v 2 | + L g |w 1 -w 2 |. (31) b) Decomposition of the non-linear terms. There exist continuous functions (f 1 , f 2 , g 1 , g 2 ) ∈ [C 0 (Ω × R 2 )] 4 such that I ion ( x, v, w) = f 1 ( x, v) + f 2 ( x, v) w, g( x, v, w) = g 1 ( x, v) + g 2 ( x) w, and there exist positive constants C 1 , c 1 and C 2 such that for all x ∈ Ω and v ∈ R, v f 1 ( x, v) ≥ C 1 |v| 4 -c 1 ( |v| 2 + 1) and |f 2 ( x, v)| ≤ C 2 ( |v| + 1). ( 32 ) Remark 4. The existence result of solutions given later (Theorem 1) is valid with either Assumption 7a or 7b. Note also that Assumption 7a is satisfied for the physiological models, the Mitchell-Schaeffer model and the FitzHugh-Nagumo model (the simplest form of FitzHugh-like models), whereas Assumption 7b holds for the Aliev-Panfilov and MacCulloch models. 
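To see how the decomposition of Assumption 7b interacts with the sign condition (30), the following short computation (a sketch with generic constants, using only the growth bounds of Assumptions 5 and 7b together with Young's inequality) indicates why the quartic coercivity of f 1 absorbs the cross terms; it is not meant as a replacement for the precise estimates used later.

```latex
% Sketch: under Assumptions 5 and 7b, the sign condition (30) follows.
\begin{aligned}
v\,I_{ion}(x,v,w) + \mu\,w\,g(x,v,w)
  &= v f_1(x,v) + v f_2(x,v)\,w + \mu\,w\,g_1(x,v) + \mu\,g_2(x)\,w^2\\
  &\ge C_1|v|^4 - c_1(|v|^2+1) - C_2(|v|^2+|v|)|w|
      - \mu C_\infty(|v|^2+1)|w| - \mu C_\infty |w|^2,
\end{aligned}
```

where the bounds |g 1 (x, v)| ≤ C ∞ (|v| 2 + 1) and |g 2 (x)| ≤ C ∞ follow from Assumption 5 evaluated at w = 0 and at v = 0 respectively. By Young's inequality, each cross term |v| 2 |w| (and |v||w|) is controlled by ε|v| 4 + C(ε)(|w| 2 + 1); choosing ε small with respect to C 1 leaves a non-negative multiple of |v| 4 minus a term of the form C(|v| 2 + |w| 2 + 1), which is precisely a bound of type (30).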
Finally, to prove the uniqueness of a solution for the microscopic bidomain problem, the terms I ion and g have to satisfy a global signed Lipschitz relation (the following assumption is a variant of what is suggested in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF] or [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF]). Assumption 8. One-sided Lipschitz condition. There exist µ > 0 and L I > 0 such that for all x ∈ Ω, (v 1 , w 1 ) ∈ R 2 and (v 2 , w 2 ) ∈ R 2 I ion ( x, v 1 , w 1 ) -I ion ( x, v 2 , w 2 ) (v 1 -v 2 ) + µ g( x, v 1 , w 1 ) -g( x, v 2 , w 2 ) (w 1 -w 2 ) ≥ -L I |v 1 -v 2 | 2 + |w 1 -w 2 | 2 . (33) Note that such an assumption is not satisfied for the Aliev-Panfilov, the MacCulloch and the Mitchell-Schaeffer models. Uniqueness of a solution for these models is still an open problem. For physiological models and for the Mitchell-Schaeffer model, there exist two finite scalars w min < w max such that we expect the solution w to be bounded from below by w min and above by w max . For this to be true, it has to be satisfied for the initial data and this leads to the following assumptions when considering such models. Assumption 9. w min ≤ w 0 (•) ≤ w max , Γ m . Assumption 10. For all x ∈ Ω and v ∈ R, g( x, v, w min ) ≤ 0 and g( x, v, w max ) ≥ 0. The last assumption is satisfied when considering the function g defined as in physiological models [START_REF] Allaire | Homogenization of the Neumann problem with nonisolated holes[END_REF][START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] or as in the Mitchell-Schaeffer model [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF]. For such models, the terms I ion ( x, v, w) and g( x, v, w) should be replaced by I ion ( x, v, χ(w)) and g( x, v, χ(w)) respectively, where χ(w) =    w min w ≤ w min , w max w ≥ w max , w otherwise. Note that with such substitutions, we do not modify the solution while the global conditions of Assumptions 5, 6, 7a and 8 are more likely to be fulfilled since it corresponds to verify local conditions on w. As an example, we can remark that the physiological models -of the form given by (5, 6) -do not satisfy the Lipschitz property (Assumption 8) globally in w but only locally. However, for these models, we can show a priori that w is bounded from below and above, hence the suggested modification of the non-linear terms. Finally, note that this assumption is also a physiological assumption. Indeed, it makes sense to have some bounds on the gating variables. Remark 5. Assumptions 1-4 are always satisfied and do not depend on the structure of the non-linear terms I ion and g. Moreover, it is possible to classify the models of the literature depending on which assumption they satisfy, see Table 5. 
We refer the reader to [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] for Assumption 5 6 7 8 9-10 FitzHugh-Nagumo a) Roger MacCulloch a) Aliev-Panfilov b) Regularized Mitchell-Schaeffer a) Physiological models a) Table 1: Ionic models and verified assumptions the analysis of FitzHugh-Nagumo, Roger MacCulloch and Aliev-Panfilov assumptions and Section 2 for the regularized Mitchell-Schaeffer and physiological models. Existence and uniqueness analysis All the proofs of this section are given in Appendix 5. In the literature, the analysis of the classical bidomain model is done most of the time at the macroscopic scale. Equations at that scale are obtained from the microscopic model using a formal asymptotic homogenization procedure (see [START_REF] Neu | Homogenization of syncytial tissues[END_REF][START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF]) or the Γ-convergence method (see [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF][START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF]). In any cases, to justify the homogenization process, a complete mathematical analysis at the microscopic scale is necessary. In this section, we give existence and uniqueness results for solutions of System [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF]. Note that the existence of solutions for the macroscopic bidomain equation used in the literature is a consequence of the 2-scale convergence theory presented in the next section. In the literature, one can find three different approaches that are used for the mathematical analysis of the macroscopic classical bidomain equations. Following the classification suggested in the recent book [START_REF] Colli Franzone | Mathematical Cardiac Electrophysiology[END_REF], these three approaches are 1. The use of the abstract framework of degenerate evolution variational inequalities in Hilbert spaces (see for instance [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF]). Such an approach has been used to do the analysis when FitzHugh-Nagumo models are considered and is adapted to the analysis of semi-discretization in time of the problem (see [START_REF] Sanfelici | Convergence of the Galerkin approximation of a degenerate evolution problem in electrocardiology[END_REF]). 2. The use of Schauder fixed point theorem. This is the approach suggested in [START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF] and [START_REF] Veneroni | Reaction-diffusion systems for the macroscopic bidomain model of the cardiac electric field[END_REF]. In these references, the ionic term depends on the concentration of ionic species (in addition to the dependence on the gating variables). This approach is adapted to the analysis of physiological models. 3. The Faedo-Galerkin approach. 
This approach is used in the context of electrophysiology in [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and in the context of electroporation in [START_REF] Kavian | Classical" electropermeabilization modeling at the cell scale[END_REF]. It is the most versatile approach although it has been used to analyze only phenomenological models in the mentioned references. We refer the reader to the textbook of J.-L. Lions [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] for a detailed description. This technique is based upon a limit process of discretization in space of the partial differential equations combined with the use of standard results on systems of ODEs (at each fixed discretization). This is the approach that we consider in the appendix of this paper. In what follows, we proceed in three steps. The first step consists in showing existence/uniqueness results for the evolution equation of the gating variable w. More precisely, given any (electric potential) U ∈ C 0 ([0, T ]; L 2 (Γ m )), (34) we show in the appendix that the associated gating variable -solution of ( 35) -is bounded from below and above when physiological or Mitchell-Schaeffer models are concerned. The next step concerns existence results for the full non-linear microscopic bidomain equations and the final step gives the uniqueness result. Step 1 -Evolution equation of the gating variable. The term g(V m , w) in ( 19) is replaced by the term g(U, w) and we denote the corresponding solution w = w U . As mentioned previously, our main purpose here is to state the fact that -in the case of physiological or Mitchell-Schaeffer models -the solution w U is bounded from below and above. Lemma 2. If Assumption 2, 4, 5 and 7a hold, there exists a unique function w U ∈ C 1 ([0, T ]; L 2 (Γ m )), which is a solution of ∂ t w U + g( x, U, w U ) = 0, Γ m , ∀ t ∈ [0, T ], w U ( x, 0) = w 0 ( x), Γ m . ( 35 ) Moreover if Assumptions 9 and 10 are satisfied then for all t ∈ [0, T ] and almost all x ∈ Γ m , w min ≤ w U ( x, t) ≤ w max . The proof of Lemma 2 is done by considering smooth approximations of U and w 0 . Then, the problem reduces to the analysis of an ordinary differential equation where the space variable x plays the role of a parameter. Finally, the solution of (35) is constructed by a limit process using the density of smooth functions into L 2 (Γ m ). Step 2 -Existence result for the microscopic bidomain equation Theorem 1. If Assumptions 1-7 hold, there exist V m ∈ C 0 ([0, T ]; L 2 (Γ m )) ∩ L 2 ((0, T ); H 1/2 (Γ m )), ∂ t V ∈ L 2 ((0, T ); H -1/2 (Γ m )), and w ∈ H 1 ((0, T ); L 2 (Γ m )), which are solutions of C m ∂ t V m + A V m + I ion V m , w = I app , H -1/2 (Γ m ), a.e. t ∈ (0, T ), ∂ t w + g V m , w = 0, Γ m , a.e. t ∈ (0, T ), (36) and V m ( x, 0) = V 0 m ( x) Γ m , w( x, 0) = w 0 ( x) Γ m . ( 37 ) The proof of Theorem 1 is done using the Faedo-Galerkin method. More precisely, the equations are first space-discretized using a finite dimensional basis of L 2 (Γ m ) constructed with the eigenvectors of A. 
After the discretization, it is proven that semi-discrete solutions exist by applying the Cauchy-Peano theorem (to be more specific, we use the more general Carathéodory existence theorem) on systems of ordinary differential equations. Finally, by a limit procedure, the existence of solutions is proven for the weak form of (36) (the limit procedure uses compactness results to deduce strong convergence of the semi-discrete solutions; this strategy allows us to pass to the limit in the non-linear terms I ion and g).

Remark 6. If Assumption 7a is valid (i.e. g is globally Lipschitz) then by application of Lemma 2 the solution given by Theorem 1 has the additional regularity w ∈ C 1 ([0, T ]; L 2 (Γ m )).

Step 3 - Uniqueness results for the microscopic bidomain equation

Uniqueness is proven by standard energy techniques for models satisfying the one-sided Lipschitz property, see Assumption 8.

Corollary 2. If Assumption 8 holds, then the solution of the microscopic bidomain equations given by Theorem 1 is unique.

Post-processing of the intra- and extra-cellular potentials

From the solution V m = u i - u e given by Theorem 1, we can first recover the intra-cellular potential u i ∈ L 2 ((0, T ); H 1 (Ω i )). Using Equation (18), one can see that it is defined as the unique solution of the following quasi-static elliptic problem (the time-dependence appears only in the boundary data),

∇ x • ( σ i ∇ x u i ) = 0 Ω i , u i = (I - T e T i ) -1 V m Γ m , σ i ∇ x u i • n i = 0 ∂Ω ∩ ∂Ω i . (38)

In the same way, the extra-cellular potential u e ∈ L 2 ((0, T ); H 1 (Ω e )) is defined as the unique solution of the following quasi-static elliptic problem,

∇ x • ( σ e ∇ x u e ) = 0 Ω e , σ e ∇ x u e • n i = T i (u i ) - T i (u i ), 1 |Γ m | Γ m , σ e ∇ x u e • n e = 0 ∂Ω ∩ ∂Ω e , Γm u e dγ = 0. (39)

From the definitions above, one can recover energy estimates on (V m , w, u i , u e ) from the energy estimates derived for System (36), where only (V m , w) appears. To do so, we will use later the following proposition.

Proposition 4. Let V m ∈ L 2 ((0, T ); H 1/2 (Γ m )), then for almost all t ∈ (0, T ), we have A(V m ), V m Γm = ( σ i ∇ x u i , ∇ x u i ) Ωi + ( σ e ∇ x u e , ∇ x u e ) Ωe , where (u i , u e ) are given by (38) and (39).

Proof. We have A(V m ), V m Γm = T i T -1 (V m ), V m Γm = T i T -1 (V m ), T T -1 (V m ) Γm = T i T -1 (V m ), T -1 (V m ) Γm - T i T -1 (V m ), T e T i T -1 (V m ) Γm . The first term of the right-hand side gives by definition and Proposition 1 the quadratic term on u i whereas the second term gives by definition of T e , u e and Proposition 2 the quadratic term on u e . We are now in position to perform a rigorous homogenization of the microscopic bidomain model (2).

Homogenization of the bidomain equations

The microscopic model is unusable for the whole heart in terms of numerical applications. At the macroscopic scale, the heart appears as a continuous material with a fiber-based structure. At this scale, the intracellular and extracellular media are indistinguishable.
Our objective is to use a homogenization of the microscopic bidomain model in order to obtain a bidomain model where all the unknowns are defined everywhere, hence simplifying the geometry of the domain. Formally, after homogenization, we consider that the cardiac volume is "Ω = Ω i = Ω e ". Homogenization of partial differential systems is a well-known technique (see [START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF] for a reference textbook on the matter). It is done by considering the medium as periodic, with the period denoted by ε, then by constructing equations deduced by asymptotic analysis w.r.t. ε. A classical article for the formal homogenization of the microscopic bidomain model - when the conductivities σ α are strictly positive constants - is [START_REF] Neu | Homogenization of syncytial tissues[END_REF]. In [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF], the homogenization of the microscopic bidomain equations (with constant conductivities) is presented using formal asymptotic analysis. It should also be noted that in [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF][START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], the same type of results has been proven using the theory of Γ-convergence in some simplified situations. The approach presented below uses the 2-scale convergence method (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]) to extend the results obtained in [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF][START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF]. As is typical for this kind of homogenization problem, we adopt the following approach:

1. Nondimensionalization of the problem. The microscopic bidomain equations are scaled in space and time, and written in unitless form. Using the characteristic values of the physical parameters of our problem (cell size, conductivities, ionic current, ...), a small parameter ε is introduced in the equations and in the geometry.

2. Uniform estimate of solutions. Using energy estimates, norms of the solutions - as well as norms of the non-linear terms - are uniformly bounded with respect to the small parameter ε.

3. Two-scale convergence. The limit equations are deduced by application of the 2-scale convergence theory. One of the main difficulties of this step is the convergence analysis of the non-linear terms. It relies on the one-sided Lipschitz assumption (Assumption 8).

Although the elimination of the electrostatic potentials u i and u e was useful to simplify the analysis of the microscopic bidomain equations, we must re-introduce these unknowns for the homogenization process. The reason is that - to the best of our knowledge - only local-in-space differential operators are adapted to the 2-scale homogenization process, and the operator A does not enter into this category of operators.

Nondimensionalization of the problem

The nondimensionalization of the system is necessary in order to understand the relative amplitudes of the different terms.
In the literature, few works in that direction have been carried out, although we can cite [START_REF] Rioux | A predictive method allowing the use of a single ionic model in numerical cardiac electrophysiology[END_REF] for a nondimensionalization analysis of the ionic term I ion and [START_REF] Neu | Homogenization of syncytial tissues[END_REF] for the analysis in the context of homogenization. We define L 0 as a characteristic length of the heart and T 0 as a characteristic time of a cardiac cycle. In the same spirit, we denote by Σ 0 a characteristic conductivity, C 0 a characteristic membrane capacitance, V max and V min characteristic upper and lower bounds for the transmembrane potential and W 0 a characteristic value of the gating variable. We set

u e ( x, t) = (V max - V min ) ũe ( x/L 0 , t/T 0 ), u i ( x, t) = (V max - V min ) ũi ( x/L 0 , t/T 0 ) + V min , w( x, t) = W 0 w ( x/L 0 , t/T 0 ), C m = C 0 Cm , σ α = Σ 0 ˜ σ α . (40)

We assume that the contributions of I ion and I app are of the same order, namely, there exists a characteristic current amplitude I 0 such that

I ion ( x, u i - u e , w) = I 0 Ĩion ( x/L 0 , ũi - ũe , w ), I app = I 0 Ĩapp . (41)

In the same way, we assume that g can be defined using a normalized function g and a characteristic amplitude G 0 such that

g( x, u i - u e , w) = G 0 g ( x/L 0 , ũi - ũe , w ). (42)

All quantities denoted by a tilde are dimensionless quantities. We obtain from (11) the dimensionless system

∇ x • ( ˜ σ α ∇ x ũα ) = 0 in Ωα , ˜ σ i ∇ x ũi • n i = ˜ σ e ∇ x ũe • n i on Γm , ˜ σ i ∇ x ũi • n i = - (L 0 I 0 /(Σ 0 U 0 )) Ĩion (•, ũi - ũe , w ) + (L 0 I 0 /(Σ 0 U 0 )) Ĩapp - (L 0 C 0 /(Σ 0 T 0 )) Cm ∂ t (ũ i - ũe ) on Γm , ∂ t w = - (T 0 G 0 /W 0 ) g (ũ i - ũe , w ) on Γm , (43)

where Ωα and Γm are rescaled by L 0 and where U 0 = V max - V min . The same nondimensionalization process is used to define boundary conditions along ∂ Ω using the boundary conditions given by Equation (3). We now define ε - the parameter which tends to zero in the homogenization process - as the ratio between the maximal length of a cell (of the order of 10 -4 m) and L 0 (equal to 10 -1 m). This implies that ε is of the order of 10 -3 . The dimensionless quantity L 0 C 0 /(Σ 0 T 0 ) - with T 0 of the order of 1 s, C 0 of 10 -2 F.m -2 and Σ 0 of 1 S.m -1 - is of the same order as ε and can be set to ε by a small modification of the reference quantities. The term U 0 is of order 10 -1 V and the term I 0 of order 10 -3 A.m -2 (this is the typical order of magnitude of I app ), therefore L 0 I 0 /(Σ 0 U 0 ) is of the order of ε and we set it to ε. Finally, up to a small change in the definition of g, we assume that T 0 G 0 /W 0 is of order 1. The fact that ε is small means that the microscopic scale and the macroscopic scale are well separated. For the sake of clarity, we do not keep the tilde notation but we write explicitly the dependence on ε. To study the mathematical properties of this problem, we consider the family of problems parametrized by ε > 0, and we will characterize the limit equation as ε tends to zero. We will use the results of 2-scale convergence, see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]. This method has been used in many fields of science and engineering. The main assumption of 2-scale convergence, needed to obtain a well-defined limit problem, is that the domain - in which the equations are solved - is periodic.
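Before introducing the periodic reference cell, note that the orders of magnitude used above to identify ε can be checked by direct arithmetic. The following minimal sketch (Python) evaluates the two dimensionless groups that are set to ε; the characteristic values are those quoted in the text, and the script itself is only an illustration.

```python
# Order-of-magnitude check of the nondimensionalization above.
L0     = 1e-1   # characteristic heart length [m]
l_cell = 1e-4   # maximal cell length [m]
T0     = 1.0    # characteristic time of a cardiac cycle [s]
C0     = 1e-2   # characteristic membrane capacitance [F/m^2]
Sigma0 = 1.0    # characteristic conductivity [S/m]
U0     = 1e-1   # characteristic potential span V_max - V_min [V]
I0     = 1e-3   # characteristic current amplitude [A/m^2]

eps = l_cell / L0
print("epsilon              =", eps)                      # ~1e-3
print("L0*C0 / (Sigma0*T0)  =", L0 * C0 / (Sigma0 * T0))  # ~1e-3, same order as eps
print("L0*I0 / (Sigma0*U0)  =", L0 * I0 / (Sigma0 * U0))  # ~1e-3, same order as eps
```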
We denote by Y the open reference domain that is used to define the idealized microstructure corresponding to the periodic arrangement of cardiac cells. This micro-structure is decomposed into two open connected subdomains: the intracellular part Y i and the extracellular part Y e . We have Y i ∩ Y e = ∅, Y = Y i ∪ Y e . The intra-and the extra-cellular domains are separated by Γ Y . The global position vector is denoted by x and the local position vector by y. We define the domain Ω = Ω ε i ∪ Ω ε e by ε-periodicity and we denote by Γ ε m the boundary between the intra-and the extra-cellular domains Ω ε i and Ω ε e . More precisely we assume that Ω ε i and Ω ε e are the union of entire cells and we have Ω ε α = k (εY α + ε w k ) and then Γ ε m = k (εΓ Y + ε w k ), (44) where w k is the vector corresponding to the translation between the considered cell and the reference cell. By definition, we have w 0 = 0. Note that by construction, from any macroscopic position vector x, one can deduce a corresponding position y in the reference periodic cell by y = x/ε. We assume that the diffusion tensors depend on the two scales using ε-independent tensor fields σ ε α ( x) = σ α x, x ε . The objective is to homogenize the following problem, i.e. study the convergence -when ε tends to zero -of the solutions of the microscopic bidomain model,              ∇ x • ( σ ε α ∇ x u ε α ) = 0 Ω ε α , σ ε i ∇ x u ε i • n i = σ ε e ∇ x u ε e • n i , Γ ε m σ ε i ∇ x u ε i • n i = -εC m ∂ t (u ε i -u ε e ) -εI ion (u ε i -u ε e , w ε ) + εI ε app Γ ε m , ∂ t w ε = -g(u ε i -u ε e , w ε ), Γ ε m . (45) The boundary conditions along ∂Ω read σ ε i ∇ x u ε α • n α = 0 ∂Ω ∩ ∂Ω ε α , (46) and the initial conditions are u ε i (•, 0) -u ε e (•, 0) = V 0,ε m , w ε (•, 0) = w 0,ε , Γ ε m . (47) Finally, for the definition of a unique extracellular electric potential, we impose Γ ε m u ε e dγ = 0. ( 48 ) Uniform estimate of the solutions The homogenization processi.e. the analysis of the limit process when ε tends to 0requires norm estimates of the solution that are independent of the parameter ε. This is the objective of this subsection. A variational equation for unknowns u ε α can be directly deduced from the partial differential equations ( 45) and [START_REF] Noble | A modification of the Hodgkin-Huxley equation applicable to Purkinje fiber action and pacemaker potentials[END_REF]. It reads, for almost all t ∈ [0, T ], σ ε i ∇ x u ε i , ∇ x v ε i Ω ε i + σ ε e ∇ x u ε e , ∇ x v ε e Ω ε e + ε C m ∂(u ε i -u ε e ) ∂t , v ε i -v ε e Γ ε m + ε I ion (u ε i -u ε e , w ε ), v ε i -v ε e Γ ε m = ε I ε app , v ε i -v ε e Γ ε m , (49) for all (v ε i , v ε e ) ∈ H 1 (Ω ε i ) × H 1 (Ω ε e ). We can formally derive an energy estimate by assuming that u ε α is regular in time and by taking v ε α = u ε α (•, t) in [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF]. Since, by definition V ε m = u ε i -u ε e on Γ ε m , we obtain ∇ x u ε i 2 L 2 σ i + ∇ x u ε e 2 L 2 σe + ε C m 2 d dt V ε m 2 L 2 (Γ ε m ) + ε I ion (V ε m , w ε ), V ε m Γ ε m = ε I ε app , V ε m Γ ε m . (50) Before integrating [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF] with respect to time, we multiply the equation by e -λt , where λ is a positive constant which will be determined in what follows. 
The third term of (50) becomes ε C m 2 t 0 e -λs d dt V ε m 2 L 2 (Γ ε m ) ds = ε C m 2 e -λt V ε m (t) 2 L 2 (Γ ε m ) -ε C m 2 V ε m (0) 2 L 2 (Γ ε m ) + ε λ C m 2 t 0 e -λs V ε m 2 L 2 (Γ ε m ) ds. Similarly, for all µ > 0, one can deduce that ε µ 2 t 0 e -λs d dt w ε 2 L 2 (Γ ε m ) ds = ε µ 2 e -λt w ε (t) 2 L 2 (Γ ε m ) -ε µ 2 w 0,ε 2 L 2 (Γ ε m ) + ε λ µ 2 t 0 e -λs w ε 2 L 2 (Γ ε m ) ds = -ε µ t 0 e -λs g(V ε m , w ε ), w ε Γ ε m ds. Note that in the previous equation, we have introduced the scalar µ in order to use Assumption 6. Then using the two previous equations as well as [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], we obtain E λ,µ (u ε i , u ε e , w ε , t) -E λ,µ (u ε i , u ε e , w ε , 0) + ε t 0 e -λs I ion (V ε m , w ε ), V ε m Γ ε m + λC m 2 V ε m 2 L 2 (Γ ε m ) ds +ε t 0 e -λs µ g(V ε m , w ε ), w ε Γ ε m + λ µ 2 w ε 2 L 2 (Γ ε m ) ds = ε t 0 e -λs I ε app , V ε m Γ ε m ds, (51) where the term E λ,µ is the energy associated to the system and is defined by E λ,µ (u ε i , u ε e , w ε , t) = ε C m 2 e -λt V ε m (t) 2 L 2 (Γ ε m ) + ε µ 2 e -λt w ε (t) 2 L 2 (Γ ε m ) + t 0 e -λs ∇ x u ε i 2 L 2 σ i ds + t 0 e -λs ∇ x u ε e 2 L 2 σe ds. ( 52 ) To shorten the presentation, we omit the reference to the physical quantities in the definition of the energy, i.e. in what follows, we introduce the notation E ε λ,µ (t) = E λ,µ (u ε i , u ε e , w ε , t). For µ > 0 given by Assumption 6, we assume that λ is sufficiently large, more precisely it should satisfy min λC m 2 , λµ 2 ≥ C I , (53) where C I is the positive scalar appearing in [START_REF] Keener | Mathematical Physiology[END_REF] and is independent of ε. Using Assumption 6 and (51). We then obtain the first energy estimate E ε λ,µ (t) ≤ E ε λ,µ (0) + ε t 0 e -λs I ε app , V ε m Γ ε m + C I |Γ ε m | ds. ( 54 ) Relation ( 51) is the energy relation that can be proven rigorously using a regularization process. Such derivations are done in the proof of Theorem 1 (see Remark 7 in Appendix 5). We also refer to the macroscopic bidomain model analysis of [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF] for some related considerations. We are now in the position to state the first proposition of this subsection. Proposition 5. There exist positive scalars µ, λ 0 and C independent of ε such that, for all λ ≥ λ 0 and for all t ∈ [0, T ], the solutions given by Theorem 1 satisfy E ε λ,µ (t) ≤ E ε λ,µ (0) + C 1 + ε 1/2 t 0 e -λ 2 s I ε app L 2 (Γ ε m ) ds . Proof. By noting that the last term of ( 54) can be estimated as follows ε t 0 e -λs I ε app , V ε m Γ ε m ds ≤ 2ε C m 1/2 t 0 e -λ 2 s I ε app L 2 (Γ ε m ) E ε λ,µ (s) 1/2 ds, and that ε |Γ ε m | is bounded uniformly w.r.t. ε, we can conclude using Gronwall's inequality (see [START_REF] Dragomir | Some Gronwall type inequalities and applications[END_REF], Theorem 5). To obtain uniform estimates, we need the following assumption. Assumption 11. Uniform estimates of the data. We assume that there exists a scalar C > 0 independent of ε such that ε 1/2 T 0 I ε app L 2 (Γ ε m ) dt ≤ C and ε V 0,ε m 2 L 2 (Γ ε m ) + ε w 0,ε 2 L 2 (Γ ε m ) ≤ C. Now we introduce the following proposition which is the main result of this section. Proposition 6. 
If Assumption 11 holds, there exists a positive scalar C independent of ε, such that solutions of the bidomain equations -given by Theorem 1 -satisfy T 0 u ε i 2 H 1 (Ω ε i ) dt + T 0 u ε e 2 H 1 (Ω ε e ) dt ≤ C (55) and ε T 0 Γ ε m |I ion (V ε m , w ε )| 4/3 dγ dt + ε T 0 Γ ε m |g(V ε m , w ε )| 2 dγ dt ≤ C. (56) In order to simplify the proof of Proposition 6, we will introduce one preliminary corollary and two preliminary lemmas. Our starting point is Proposition 5 together with Assumption 11 that provides uniform bounds on the data. As a direct consequence of this assumption and using the equivalence between • L 2 and • L 2 σα , we have the following corollary. Corollary 3. If Assumption 11 holds, there exists a positive scalar C independent of ε, such that solutions of the bidomain equations -given by Theorem 1 -satisfy T 0 ∇ x u ε i 2 L 2 (Ω ε i ) dt + T 0 ∇ x u ε e 2 L 2 (Ω ε e ) dt ≤ C and for all t ∈ [0, T ], ε V ε m (t) 2 L 2 (Γ ε m ) + ε w ε (t) 2 L 2 (Γ ε m ) ≤ C. ( 57 ) Corollary 3 is still not sufficient for our purpose since we need an estimation of the intraand extra-cellular potentials in the L 2 -norm in Ω ε i and Ω ε e respectively. To do so, we need a Poincaré-Wirtinger inequality and a trace inequality that should take into account the geometry dependence in ε. Such inequalities are given in [START_REF] Ammari | Spectroscopic imaging of a dilute cell suspension[END_REF], Corollary B.1 and Lemma C.1. They are given in dimension 2 but they can be extended to the 3-dimensional setting. With our notations, these inequalities are given in the following lemma. Lemma 3. There exists a constant C independent of ε such that for all v ε e ∈ H 1 (Ω ε e ), we have v ε e - 1 |Ω ε e | Ω ε e v ε e d x L 2 (Ω ε e ) ≤ C ∇ x v ε e L 2 (Ω ε e ) (58) and v ε e 2 L 2 (Γ ε m ) ≤ C ε -1 v ε e 2 L 2 (Ω ε e ) + C ε ∇ x v ε e 2 L 2 (Ω ε e ) . (59) Note that inequality ( 58) is no longer true if the domain Ω ε e does not satisfy [START_REF] Neuss-Radu | Some extensions of two-scale convergence[END_REF], i.e. if Ω is not the union of entire cells for all ε (which allows non-connected extra-cellular subdomains to appear at the boundary of the domain for some sequences of ε). Moreover, we also need a Poincaré-like inequality to bound the L 2 -norm of the solution inside the intraand the extra-cellular domains. Such an inequality can be found in [START_REF] Ammari | Spectroscopic imaging of a dilute cell suspension[END_REF] Lemma C.2, and in our context, it is given in the following lemma. Lemma 4. There exists a constant C independent of ε such that for all v ε α ∈ H 1 (Ω ε α ), v ε α 2 L 2 (Ω ε α ) ≤ C ε v ε α 2 L 2 (Γ ε m ) + C ε 2 ∇ x v ε α 2 L 2 (Ω ε α ) . (60) Finally collecting the results of Corollary 3 and Lemmas 3 and 4, we can prove Proposition 6. Proof. (Proof of Proposition 6) Step 1: A preliminary inequality. To simplify the following computations, we introduce the linear forms m Ω and m Γ defined for all v ε e ∈ H 1 (Ω ε e ) by m e (v ε e ) = 1 |Ω ε e | Ω ε e v ε e d x and m Γ (v ε e ) = 1 |Γ ε m | Γ ε m v ε e dγ. For all v ε e ∈ H 1 (Ω ε e ), we have v ε e -m e (v ε e ) 2 L 2 (Γ ε m ) = v ε e -m Γ (v ε e ) -m e (v ε e -m Γ (v ε e )) 2 L 2 (Γ ε m ) = v ε e -m Γ (v ε e ) 2 L 2 (Γ ε m ) + m e (v ε e -m Γ (v ε e )) 2 L 2 (Γ ε m ) -2 Γ ε m v ε e -m Γ (v ε e ) m e (v ε e -m Γ (v ε e )) dγ. Now observing that the last term vanishes hence, for all v ε e ∈ H 1 (Ω ε e ), we have v ε e -m Γ (v ε e ) 2 L 2 (Γ ε m ) ≤ v ε e -m e (v ε e ) 2 L 2 (Γ ε m ) . 
(61) Step 2: Uniform estimates of the potentials. Using the trace inequality (59) (applied to v ε em e (v ε e )) and inequality (61) we find ε v ε e -m Γ (v ε e ) 2 L 2 (Γ ε m ) ≤ C v ε e -m e (v ε e ) 2 L 2 (Ω ε e ) + ε 2 ∇ x v ε e 2 L 2 (Ω ε e ) . Thanks to the Poincaré-Wirtinger inequality (58), we conclude that there exists C independent of ε such that ε v ε e -m Γ (v ε e ) 2 L 2 (Γ ε m ) ≤ C ∇ x v ε e 2 L 2 (Ω ε e ) . Now setting v ε e = u ε e in the previous equation (remind that m Γ (u ε e ) = 0), integrating with respect to time and using the estimation of Corollary 3, we find ε T 0 u ε e 2 L 2 (Γ ε m ) ds ≤ C, where C is another constant independent of ε. With the estimate (57) of Corollary 3, we also have ε T 0 u ε i 2 L 2 (Γ ε m ) ds ≤ C , where C is another constant independent of ε. Finally, we can use Lemma 4 to obtain the estimate [START_REF] Rogers | A collocation-Galerkin finite element model of cardiac action potential propagation[END_REF]. Step 3: Uniform estimates of the non-linear terms. It is also important to obtain a uniform estimate on the term I ion (u ε i -u ε i , w ε ) and on the term g(u ε i -u ε i , w ε ) in the appropriate norms. From (51), we have ε T 0 e -λt λ |Γ ε m | + I ion (V ε m , w ε ), V ε m Γ ε m + λC m 2 V ε m 2 L 2 (Γ ε m ) dt + ε T 0 e -λt µ g(V ε m , w ε ), w ε Γ ε m + λ µ 2 w ε 2 L 2 (Γ ε m ) dt ≤ E ε λ,µ (0) + ε T 0 e -λt I ε app , V ε m Γ ε m + C I |Γ ε m | dt ≡ R ε (T ), ( 62 ) then the right hand side of the equation above (denoted R ε (T )) can be estimated as follows R ε (T ) ≤ C 1+ε V 0,ε m 2 L 2 (Γ ε m ) +ε w 0,ε 2 L 2 (Γ ε m ) +ε C sup t∈[0,T ] V ε m (t) L 2 (Γ ε m ) T 0 I ε app L 2 (Γ ε m ) where we have used the property that ε |Γ ε m | is bounded uniformly with respect to ε and C is a constant independent of ε. As a consequence of Assumption 11 and Corollary 3, we have that R ε (T ) is uniformly bounded w.r.t. ε. Using Assumption 6, we know that there exists µ > 0 such that for λ satisfying [START_REF] Richardson | Derivation of the bidomain equations for a beating heart with a general microstructure[END_REF], the integrand of the left hand side of ( 62) is positive almost everywhere on Γ ε m . Therefore, bounding e -λt by below, we deduce that ε T 0 Γ ε m λ + I ion (V ε m , w ε ) V ε m + λ C m 2 (V ε m ) 2 + µ g(V ε m , w ε ) w ε + λ µ 2 (w ε ) 2 dγ dt ≤ C, ( 63 ) where C is another constant that depends on e λT but is independent of ε. We must now study two distinct cases. Step 3a: Assumption 7a holds. Since g is Lipschitz we can show, with the estimate (57) that ε T 0 Γ ε m |g(V ε m , w ε )| 2 dγ dt ≤ C, where C independent of ε. Therefore, we deduce from ( 63) that ε T 0 Γ ε m |I ion (V ε m , w ε ) V ε m | dγ dt ≤ C, ( 64 ) where C is another scalar independent of ε. Therefore, using the growth condition ( 28) and Young's inequalities, we get ε T 0 Γ ε m |I ion (V ε m , w ε )| 4/3 dγ dt ≤ ε C C 1/3 ∞ T 0 Γ ε m |I ion (V ε m , w ε )| (|V ε m | + |w ε | 1/3 + 1) dγ dt ≤ ε C C 1/3 ∞ T 0 Γ ε m |I ion (V ε m , w ε ) V ε m | + 3 2 η 4/3 |I ion (V ε m , w ε )| 4/3 + η 4 4 |w ε | 4/3 + η 4 4 dγ dt, where η > 0 can be chosen arbitrarily and C is another scalar independent of ε. Finally, since |w ε | 4/3 ≤ |w ε | 2 + 4/27 almost everywhere on Γ ε m , we can use estimates ( 64)-( 57) and choose η sufficiently large (but independent of ε) in order to obtain [START_REF] Sachse | Computational Cardiology: Modeling of Anatomy, Electrophysiology and Mechanics[END_REF]. Step 3b: Assumption 7b holds. 
Starting from ( 62) and using the first inequality in [START_REF] Kunisch | Well-posedness for the Mitchell-Schaeffer model of the cardiac membrane[END_REF] we have ε T 0 Γ ε m λ + C 1 |V ε m | 4 -c 1 (|V ε m | 2 + 1) + f 2 (V ε m ) V ε m w ε + λ C m 2 (V ε m ) 2 + µ g(V ε m , w ε ) w ε + λ µ 2 (w ε ) 2 dγ dt ≤ C. Note that for λ sufficiently large (but independent of ε), the integrand above is positive. Moreover, from Assumption 7b, we have, for all x ∈ Ω and (v, w) ∈ R 2 , |f 2 ( x, v, w) v w| ≤ C (|v| + 1) 2 |v| 2 η + η |w| 2 , and |g( x, v, w) w| ≤ C |v| 4 η 4/3 + (1 + η 4 )|w| 2 + 1 , for some η > 0 and where C is a positive constant independent of ε and η. Therefore, by choosing η large enough (η is independent of ε), one can show that there exists another positive constant C independent of ε such that ε T 0 Γ ε m |V ε m | 4 dγ dt ≤ C. The results of Proposition 6 are then a direct consequence of Assumption 5. Homogenization of the bidomain equations by 2-scale convergence The 2-scale convergence theory has been developed in the reference articles [START_REF] Allaire | Homogenization and two-scale convergence[END_REF] and [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF]. This mathematical tool justifies and deduces the homogenized problem in a single process. It is also well adapted to treat the case of perforated domains (in our case Ω i and Ω e can both be seen as perforated domains). The analysis of homogenization in a perforated domain presents some additional difficulties since the solutions are defined in domains whose geometry is not fixed. This issue is not new (see [START_REF] Cioranescu | Homogenization in open sets with holes[END_REF]) and it is addressed in [START_REF] Allaire | Homogenization of the Neumann problem with nonisolated holes[END_REF] -using compactness results for sequence of functions defined in a family of perforated domains -or in [START_REF] Acerbi | An extension theorem from connected sets, and homogenization in general periodic domains[END_REF], [START_REF] Cioranescu | Homogenization of Reticulated Structures[END_REF] and [START_REF] Cioranescu | The periodic unfolding method in domains with holes[END_REF]. In the last two mentioned references, the periodic unfolding method is used. Such a method can be related to 2-scale convergence, as in [START_REF] Marciniak-Czochra | Derivation of a macroscopic receptor-based model using homogenization techniques[END_REF] in which the homogenization of a reaction-diffusion problem is done using both techniques: the 2-scale convergence gives the preliminary convergence results and the periodic unfolding method is used to deal with the reaction term. The treatment of the reaction termsi.e. the non linear terms or the ionic terms in our context -is one of the main difficulty. To tackle this problem, we present an approach based upon the general ideas of the original article [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]. Finally, the last difficulty which is typical to biological tissue modeling, is that the non-linear terms of the equations -that correspond to exchange of ionic quantities at the membrane of a cell -lie on the boundary of the domain. 
Therefore the 2-scale convergence theory must be adapted and to do so, we use the results presented in [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] (see also [START_REF] Neuss-Radu | Some extensions of two-scale convergence[END_REF] for related results). We define Ω T := Ω × (0, T ), and we introduce the space C 0 (Y ) of continuous periodic functions on the periodic cell Y (up to the boundary) and L ∞ (Y ) the space of essentially bounded periodic functions on Y . Proposition 7. Let {u ε } be a sequence of functions in L 2 (Ω T ) such that Ω T |u ε ( x, t)| 2 d x dt ≤ C where C does not depend on ε, then the sequence 2-scale converges to a limit u 0 ∈ L 2 (Ω T ×Y ), i.e. for any ϕ ∈ C 0 (Ω T ; L ∞ (Y )) we have, up to a subsequence, lim ε→0 Ω T u ε ( x, t) ϕ( x, t, x/ε) d x dt = 1 |Y | Ω T ×Y u 0 ( x, t, y) ϕ( x, t, y) d x dt d y. The same notion of weak convergence exists for a function defined on Γ ε m , and straightforward generalizations of the results in [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] lead to the following proposition. Proposition 8. Let {u ε } be a sequence of functions in L p (Γ ε m × (0, T )) with p ∈ (1, +∞) such that ε Γ ε m ×(0,T ) |u ε ( x, t)| p dγ dt ≤ C, (65) where C does not depend on ε, then the sequence 2-scale converges to a limit u 0 ∈ L p (Ω T × Γ Y ), i.e. for any ϕ ∈ C 0 (Ω T ; C 0 (Y )) we have, up to a subsequence, lim ε→0 Γ ε m ×(0,T ) u ε ( x, t) ϕ( x, t, x/ε) dγ dt = 1 |Y | Ω T ×Γ Y u 0 ( x, t, y) ϕ( x, t, y) d x dt dγ. As previously mentioned, one of the main advantages of 2-scale convergence is the ability to analyze partial differential equations in a perforated domain by introducing simple extension operators. In our context Ω ε i and Ω ε e can be seen as perforated domains, therefore, following [START_REF] Allaire | Homogenization and two-scale convergence[END_REF], we denote by • the extension by zero in Ω. More precisely, we define ∇ x u ε i = ∇ x u ε i Ω ε i , 0 Ω ε e , ∇ x u ε e = 0 Ω ε i , ∇ x u ε e Ω ε e , and we define σ ε by periodicity as follows σ( x, y) = σ i ( x, y) y ∈ Y i , σ e ( x, y) y ∈ Y e , and σ ε ( x) = σ x, x ε . Using [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF], one can see that the functions (u ε i , u ε e ) and ( ∇ x u ε i , ∇ x u ε e ) satisfy, for all v ε i , v ε e ∈ H 1 (Ω) and for almost all t ∈ (0, T ), σ ε ∇ x u ε i , ∇ x v ε i Ω + σ ε ∇ x u ε e , ∇ x v ε e Ω + ε C m ∂(u ε i -u ε e ) ∂t , v ε i -v ε e Γ ε m + ε I ion (u ε i -u ε e , w ε ), v ε i -v ε e Γ ε m = ε I ε app , v ε i -v ε e Γ ε m . (66) Two-scale convergence theory enables us to relate the 2-scale limits of ∇ x u ε i and ∇ x u ε e with the 2-scale limits of the extension of (u ε i , u ε e ) defined as ũε i = u ε i Ω ε i , 0 Ω ε e , ũε e = 0 Ω ε i , u ε e Ω ε e . Then the a priori estimate (55) allows for the application of the 2-scale convergence theory in a perforated domain as presented in [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]. To do so, we define H 1 (Y ) -as the completion for the norm H 1 (Y ) of C ∞ (Y ) -the space of infinitely differentiable functions that are periodic of period Y (a similar definition holds if Y is replaced by Y α and H 1 (Y ) is replaced by L 2 (Y )). Proposition 9. If Assumption 11 holds ( i.e. 
uniform norm-estimate of the data), there exist u 0 α ∈ L 2 ((0, T ); H 1 (Ω)) and u 1 α ∈ L 2 (Ω T ; H 1 (Y α )) , such that the solution of the bidomain equations given by Theorem 1 satisfies          ũε α -→ 2-scale u 0 α ( x, t)χ Yα ( y), ∇ x u ε α -→ 2-scale ( ∇ x u 0 α + ∇ y u 1 α )χ Yα ( y), (67) where χ Yα is the characteristic function of Y α . Note that in this proposition, the convergences have to be understood in the sense given in Proposition 7. For regular enough functions, we need to relate the 2-scale limit on a volume to the 2-scale limit on surface. This is the object of the proposition given below whose proof is very inspired by [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] (Proposition 2.6) and therefore only the main idea is given. Proposition 10. Let {u ε } be a sequence of functions in L 2 ((0, T ); H 1 (Ω ε α )) such that T 0 Ω ε α |u ε ( x, t)| 2 + | ∇ x u ε ( x, t)| 2 d x dt ≤ C, where C does not depend on ε. Let ũε denote the extension by zero in (0, T ) × Ω of u ε . There exists u 0 ∈ L 2 ((0, T ); H 1 (Ω)) such that ũε ( x, t) -→ 2-scale u 0 ( x, t) χ Yα ( y) (68) and for any ϕ ∈ C 0 (Ω T ; C 0 (Y )), ε T 0 Γ ε m u ε ( x, t) ϕ x, t, x ε dγ dt -→ 1 |Y | Ω T u 0 ( x, t) Γ Y ϕ( x, t, y ) dγ d x dt. ( 69 ) Proof. Being given ϕ ∈ C 1 (Ω T ; C 0 (Y )), for each x and t (seen here as parameters), we introduce ψ x,t a function of y periodic in Y i with mean value 0 as the solution of      ∆ y ψ x,t = 1 |Y i | Γ Y ϕ( x, t, y ) dγ in Y i , ∇ y ψ x,t • n Γ Y = ϕ( x, t, •) on Γ Y , (70) where n Γ Y is the outward normal of Y i . Then the results of the proposition are obtained by first noticing that ε T 0 Γ ε m u ε ( x, t) ϕ x, t, x ε dγ dt = ε T 0 Γ ε m u ε ( x, t) ∇ y ψ x,t • n Γ Y x ε dγ dt, then using Green's formulae to recover integral over Ω ε i and finally using 2-scale convergence results as in Proposition 7. The two-scale homogenized limit model The next step in deriving the homogenized equations (i.e. setting the equations of the limit terms u 0 α ) consists in using regular enough test functions in (66) of the form v ε α ( x, t) = v 0 α ( x, t) + ε v 1 α ( x, t, x ε ), (71) with    v 0 α ∈ C 1 (Ω T ), v 0 α ( x, T ) = 0, v 1 α ∈ C 1 (Ω T ; C 0 (Y α )), v 1 α ( x, T, y ) = 0. The decomposition (71) can be explained a priori since it corresponds to the expected behavior of the limit field, i.e. it should not depend on y. Doing so, Equation (66) gives, after integration with respect to time, σ ε ∇ x u ε i , ∇ x v 0 i + ∇ y v 1 i + ε ∇ x v 1 i Ω T + σ ε ∇ x u ε e , ∇ x v 0 e + ∇ y v 1 e + ε ∇ x v 1 e Ω T -ε C m T 0 u ε i -u ε e , ∂ t (v 0 i + εv 1 i -v 0 e -εv 1 e ) Γ ε m dt + ε T 0 I ion (u ε i -u ε e , w ε ), v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt = ε T 0 I ε app , v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt -ε C m V 0,ε m , v 0 i + ε v 1 i -v 0 e -ε v 1 e Γ ε m . (72) We want to apply the results of the 2-scale convergence. First, we will focus on the volume terms. As explained in [START_REF] Allaire | Homogenization and two-scale convergence[END_REF], the next step is to choose ψ α ( x, y) := σ( x, y)( ∇ x v 0 α ( x) + ∇ y v 1 α ( x, y)) as a test function in the definition of 2-scale convergence. However, Assumption 1 on the diffusion tensor σ ε ( x) is not sufficient to have ψ α ∈ C 0 (Ω T ; L ∞ (Y )) 3 . This motivates the following additional assumption. Assumption 12. σ α ∈ C 0 (Ω; L ∞ # (Y α )) 3×3 . 
Such an assumption ensures that ψ α is an admissible test function (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]) and can be considered as a test function in Proposition 7. The surface terms also need a detailed analysis. Since I ε app is uniformly bounded in L 2 (Γ ε m × (0, T )) (see Assumption 11), we can use Proposition 8 to write that there exists I 0 app ∈ L 2 (Ω T × Γ Y ) such that up to a subsequence, ε T 0 I ε app , v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt -→ 1 |Y | Γ Y I 0 app ( x, t, y) dγ, v 0 i -v 0 e Ω T . (73) Moreover, since we have assumed that the initial data are also uniformly bounded in the adequate norm (see Assumption 11) and using again the 2-scale convergence theorem on surfaces (Proposition 8), we know that there exists V 0 m ∈ L 2 (Ω × Γ Y ) such that, up to a subsequence, ε C m V 0,ε m , v 0 i + ε v 1 i -v 0 e -ε v 1 e Γ ε m -→ C m |Y | Ω Γ Y V 0 m ( x, y) dγ (v 0 i -v 0 e )( x, 0) d x. (74) In the same way, thanks to Proposition 6, we can show that I ion satisfies the uniform bound [START_REF] Wilhelms | Benchmarking electrophysiological models of human atrial myocytes[END_REF] with p = 4/3. We can therefore apply Proposition 8 and find there exists I 0 ∈ L 4/3 (Ω T × Γ Y ) such that, up to a subsequence, ε T 0 I ion (u ε i -u ε e , w ε ), v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt -→ 1 |Y | Ω T Γ Y I 0 ( x, t, y) dγ (v 0 i -v 0 e )( x, t) d x dt. (75) One of the difficulties -which is postponed -is to relate I 0 with the limits of u ε α and w ε . To deal with the third term in (72), we also need Proposition 10 and we get ε T 0 Γ ε m u ε α ( x, t) ∂ t v ε α ( x, t) dγ dt -→ |Γ Y | |Y | Ω T u 0 α ( x, t) ∂ t v 0 α ( x, t) d x dt. ( 76 ) Using the convergence results obtained above, the weak formulation of the microscopic bidomain equations -as given by (72) -becomes at the limit σ i ∇ x u 0 i + ∇ y u 1 i , ∇ x v 0 i + ∇ y v 1 i Ω T ×Yi + σ e ∇ x u 0 e + ∇ y u 1 e , ∇ x v 0 e + ∇ y v 1 e Ω T ×Ye -C m |Γ Y | u 0 i -u 0 e , ∂ t (v 0 i -v 0 e ) Ω T + Γ Y I 0 dγ , v 0 i -v 0 e Ω T = Γ Y I 0 app dγ , v 0 i -v 0 e Ω T -C m Ω Γ Y V 0 m ( x, y) dγ v 0 i -v 0 e ( x, 0) d x. ( 77 ) We now consider the equation on the gating variable. From (45), we deduce that for all ψ ∈ C 1 (Ω T ; C 0 (Y )) such that ψ( x, T, y) = 0, we have - T 0 Γ ε m w ε ( x, t) ∂ t ψ x, t, x ε dγ dt + T 0 Γ ε m g(V ε m , w ε ) ψ x, t, x ε dγ dt = - Γ ε m w 0,ε ( x) ψ x, 0, x ε dγ. Using again the 2-scale convergence of a sequence of L 2 functions on Γ ε m × (0, T ) (Proposition 8), we find that -w 0 , ∂ t ψ Ω T ×Γ Y + g 0 , ψ Ω T ×Γ Y = - Ω Γ Y w 0 ( x, y) ψ( x, 0, y) dγ d x, (78) where w 0 , g 0 and w 0 are the 2-scale limits of w ε , g(V ε m , w ε ) and w 0,ε respectively. These 2-scale limits are well-defined (up to subsequences) since w ε is a continuous function in time with value in L 2 (Γ ε m ) and is uniformly bounded with respect to ε (Corollary 3). The same argument holds for g (Proposition 6). Finally, one can pass to the 2-scale limit in Equation ( 48) to recover a condition on the average of u 0 e and we get the closure equation, for all ϕ ∈ C 0 ([0, T ]), ε T 0 Γ ε m u ε e dγ ϕ dt = 0 -→ ε→0 |Γ Y | |Y | T 0 Ω u 0 e d x ϕ dt = 0. ( 79 ) This implies that we have, for almost all t ∈ [0, T ], Ω u 0 e d x = 0. (80) Equations ( 77), ( 78) and (79) define some micro-macro equations for the bidomain problem. To close these equations, we need to relate I 0 and g 0 to the 2-scale limits of V ε m = u ε iu ε e and w ε . 
This is the object of Proposition 11 given in what follows but first, let us mention that is it possible to derive an energy estimate for the micro-macro problem. As mentioned, such energy relations are formally obtained by setting v 0 α = e -λs u 0 α and v 1 α = e -λs u 1 α in Equation ( 77) and ψ = e -λs w 0 in Equation (78). The following energy relation can be proven, for all λ > 0 and µ > 0, E 0 λ,µ (u 0 i , u 1 i , u 0 e , u 1 e , w 0 , t) -E 0 λ,µ (u 0 i , u 1 i , u 0 e , u 1 e , w 0 , 0) + t 0 e -λs Γ Y I 0 dγ , v 0 i -v 0 e Ω + λC m |Γ Y | 2 (u 0 i -u 0 e )(t) 2 L 2 (Ω) ds + t 0 e -λs µ g 0 , w 0 L 2 (Ω×Γ Y ) + λ µ 2 w 0 2 L 2 (Ω×Γ Y ) ds = t 0 e -λs Γ Y I 0 app dγ , u 0 i -u 0 e Ω ds, (81) where the term E 0 λ,µ is the energy associated with the system and is defined by E 0 λ,µ (u 0 i , u 1 i , u 0 e , u 1 e , w 0 , t) = C m |Γ Y | 2 e -λt (u 0 i -u 0 e )(t) 2 L 2 (Ω) + µ 2 e -λt w 0 (t) 2 L 2 (Ω×Γ Y ) + α∈{i,e} t 0 e -λs σ α ∇ x u 0 α + ∇ y u 1 α , ∇ x u 0 α + ∇ y u 1 α Ω×Yα ds. ( 82 ) As previously mentioned, the following proposition relates I 0 and g 0 to the 2-scale limits of V ε m = u ε iu ε e and w ε , respectively. Proposition 11. We assume that Assumptions 8 and 12 are satisfied and that the source term and the initial data are given by, I ε app ( x, t) = I app ( x, t), V 0,ε m ( x) = V 0 m ( x), w 0,ε ( x) = w 0 ( x), ∀ t ∈ [0, T ], ∀ x ∈ Γ ε m , with I app ∈ C 0 (Ω T ), V 0 m ∈ C 0 (Ω T ), w 0 ∈ C 0 (Ω T ). Let (u ε i , u ε e , w ε ) be a solution of equations ( 45)- [START_REF] O'hara | Simulation of the undiseased human cardiac ventricular action potential: Model formulation and experimental validation[END_REF] given by Theorem 1 and let u 0 α , w 0 , I 0 and g 0 , the 2-scale limits of u ε α , w ε , I ion (u ε iu ε e , w ε ) and g(u ε iu ε e , w ε ) respectively then we have, I 0 = I ion (u 0 i -u 0 e , w 0 ), g 0 = g(u 0 i -u 0 e , w 0 ). Proof. This result is not a straightforward consequence of the u ε α and w ε estimates. We adapt here the technique used in [START_REF] Allaire | Homogenization and two-scale convergence[END_REF] for the 2-scale analysis of non-linear problems. Thanks to Assumption 8, for λ > 0 large enough (but independent of ε) and µ > 0 given, for all regular enough test functions ϕ ε i , ϕ ε e and ψ ε , we have E λ,µ (u ε i -ϕ ε i , u ε e -ϕ ε e , w ε -ψ ε , T ) + ε T 0 e -λt I ion (V ε m , w ε ) -I ion (ν ε m , ψ ε ), V ε m -ν ε m Γ ε m + λC m 2 V ε m -ν ε m 2 L 2 (Γ ε m ) dt + ε T 0 e -λt µ g(V ε m , w ε ) -g(ν ε m , ψ ε ), w ε -ψ ε Γ ε m + λ µ 2 w ε -ψ ε 2 L 2 (Γ ε m ) dt ≥ 0, (83) where ν ε m = ϕ ε iϕ ε e on Γ ε m and where the energy functional E λ,µ is defined in Equation ( 52). The idea is to use the energy relation (51) -which is satisfied by the solution (u ε i , u ε e , w ε ) -to simplify the previous equation. Doing so, we introduce a term corresponding to the data d ε (T ) := ε T 0 e -λt I app , V ε m Γ ε m dt + ε C m 2 V 0 m 2 L 2 (Γ ε m ) + ε µ 2 w 0 2 L 2 (Γ ε m ) . 
Substituting ( 51) into (83), we obtain the inequality 0 ≤ d ε (T ) -2 e ε (T ) + E λ,µ (ϕ ε i , ϕ ε e , ψ ε , T ) + i ε (T ) + µ g ε (T ), (84) with e ε (T ) = ε C m 2 e -λT (V ε m , ν ε m ) Γ ε m + ε µ 2 e -λT (w ε , ψ ε ) Γ ε m + T 0 e -λt ( σ i ∇ x u ε i , ∇ x ϕ ε i ) Ω ε i dt + T 0 e -λt ( σ e ∇ x u ε e , ∇ x ϕ ε e ) Ω ε e dt, i ε (T ) = T 0 e -λt -ε I ion (V ε m , w ε ), ν ε Now, we set, for any given positive real scalar τ , ϕ ε α ( x, t) = ϕ 0 α ( x, t) + ε ϕ 1 α ( x, t, x/ε) + τ ϕ α ( x, t), ψ ε ( x, t) = ψ 0 ( x, t, x/ε) + τ ψ( x, t, x/ε), where (ϕ 0 α , ϕ α ) ∈ C 1 (Ω T ) 2 , ϕ 1 α ∈ C 1 (Ω T ; C 1 (Y )), (ψ 0 , ψ) ∈ C 0 (Ω T ; C 0 (Y )) 2 . By construction, ϕ ε i and ϕ ε e are test functions that allow us to use the 2-scale convergence (see Propositions 7 and 8). The same remark is true for I app , V 0 m and w 0 by assumption. Moreover, we need the following results concerning 2-scale convergence of test functions (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]) χ Yα x ε ϕ ε α -→ 2-scale χ Yα ( y)(ϕ 0 α + τ ϕ α ), and χ Yα x ε ∇ x ϕ ε α -→ 2-scale χ Yα ( y)( ∇ x ϕ 0 α + ∇ y ϕ 1 α + τ ∇ x ϕ α ), where the convergence has to be understood in the sense given by Proposition 7. Moreover, ν ε m -→ 2-scale Φ 0 + τ Φ, ψ ε -→ 2-scale ψ 0 + τ ψ, where Φ 0 = ϕ 0 i -ϕ 0 e and Φ = ϕ i -ϕ e , and where the convergence has to be understood in the sense given by Proposition 8. To study the limit of (84) when ε tends to 0, we treat each terms separately. We first have lim ε→0 d ε = d 0 with d 0 (T ) = |Γ Y | T 0 e -λt (I app , V 0 ) Ω dt + |Γ Y | C m 2 V 0 m 2 L 2 (Ω) + |Γ Y | µ 2 w 0 2 L 2 (Ω) . In the same way, one can pass to the limit ε → 0 in e ε (t) and we denote by e 0 (t) this limit. It is given by e 0 (T ) = C m |Γ Y | 2 e -λt u 0 i -u 0 e , Φ 0 + τ Φ Ω + µ 2 e -λt w 0 , ψ 0 + τ ψ Ω×Γ Y + α∈{i,e} T 0 e -λs σ α ∇ x u 0 α + ∇ y u 1 α , ∇ x ϕ 0 α + ∇ y ϕ 1 α + τ ∇ x ϕ α Ω×Yα dt. We also get e 0 λ,µ (T ) := lim ε→0 E λ,µ (ϕ ε i , ϕ ε e , ψ ε , T ) = E 0 λ,µ (ϕ 0 i + τ ϕ i , ϕ 1 i , ϕ 0 e + τ ϕ e , ϕ 1 e , ψ 0 + τ ψ, T ), where E 0 λ,µ is the energy of the limit 2-scale problem defined in Equation (82). Note that to pass to the limit in the microscopic energy, we have used the strong 2-scale convergence of test functions. Indeed using [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] (Lemma 2.4), we have lim ε→0 ε ψ ε 2 L 2 (Γ ε m ) = ψ 0 + τ ψ 2 L 2 (Ω×Γ Y ) . To pass to the limit in the terms i ε and g ε , we need to study the convergence of I ion (ν ε m , ψ ε ) and g(ν ε m , ψ ε ) respectively. Since the function I ion is continuous (Assumption 4), as well as (Φ 0 , Φ, ψ 0 , ψ), one can see that I ion (Φ 0 + τ Φ, ψ 0 + τ ψ) is an adequate test function in the sense of Proposition 8, i.e. I ion (Φ 0 + τ Φ, ψ 0 + τ ψ) ∈ C 0 (Ω T ; C 0 (Y )). Moreover, we have, lim ε→0 I ion (ν ε m , ψ ε ) -I ion (Φ 0 + τ Φ, ψ 0 + τ ψ) L ∞ (Ω T ×Γ Y ) = 0, and therefore lim ε→0 ε T 0 e -λt I ion (ν ε m , ψ ε ), V ε m -ν ε m Γ ε m dt = T 0 e -λt I ion (Φ 0 + τ Φ, ψ 0 + τ ψ), V 0 m -Φ 0 -τ Φ Ω×Γ Y dt. Using the results above, one can show that i 0 (T ) := lim ε→0 i ε (T ) = T 0 e -λt -I 0 , Φ 0 + τ Φ Ω×Γ Y -I ion (Φ 0 + τ Φ, ψ 0 + τ ψ), V 0 m -Φ 0 -τ Φ Ω×Γ Y -λ |Γ Y | C m (V 0 m , Φ 0 + τ Φ) 2 L 2 (Ω) + λ |Γ Y | C m 2 Φ 0 + τ Φ 2 L 2 (Ω) dt. Similar results can be deduced in order to compute the limit of g ε which we denote g 0 . 
Collecting all the convergence results mentioned above, Inequality (84) becomes 0 ≤ d 0 (T ) -2 e 0 (T ) + e 0 λ,µ (T ) + i 0 (T ) + µ g 0 (T ). Since this inequality is true for all ϕ 0 α , ϕ 1 α and ψ 0 , it is true for each element of the sequences {ϕ 0 α,n }, {ϕ ϕ 0 α,n -u 0 α L 2 ((0,T );H 1 (Ω)) + ϕ 1 α,n -u 1 α L 2 (Ω T ;H 1 (Yα)) = 0 and lim n→+∞ ψ 0 n -w 0 L 2 (Ω T ×Γ Y ) = 0. From the continuity requirement (Assumption 4) and the growth conditions ( 28) -( 29), I ion and g can be seen as weak continuous applications. Such results are a consequence of a variant of Lemma 1.3 of [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (the same proof can be followed) that can be stated as follows: (for conciseness, we set D = Ω × Γ Y ) Let {I n }, a uniformly bounded sequence and I, a function in L 2 ((0, T ), L p (D)) with 1 < p < +∞. Assume that the sequence {I n } converges almost everywhere to I in (0, T ) × D, then I n converges weakly towards I in L 2 ((0, T ), L p (D)). Using the result above one can pass to the limit in Inequality (85), it shows that this inequality is true after the following formal substitutions, ϕ 0 α = u 0 α , ϕ 1 α = u 1 α and ψ 0 = w 0 . Using the energy identity given in Equation (81), one can simplify Inequality (85) and we obtain 0 ≤ τ T 0 e -λt I ion (V 0 m + τ Φ, w 0 + τ ψ) -I 0 , Φ Ω×Γ Y dt + τ T 0 e -λt g(V 0 m + τ Φ, w 0 + τ ψ) -g 0 , ψ Ω×Γ Y dt + O(τ 2 ). Dividing by τ and then letting τ tends to 0, we find that for all continuous functions Φ and ψ, 0 ≤ T 0 e -λt I ion (V 0 m , w 0 ) -I 0 , Φ Ω×Γ Y dt + T 0 e -λt g(V 0 m , w 0 ) -g 0 , ψ Ω×Γ Y dt, which gives the result of the proposition. The macroscopic bidomain equations The obtained model combines a priori the micro-and the macroscopic scales. However, we will show below that we can decouple these two scales by explicitly determining the correction terms u 1 i and u 1 e . This determination appears through the analysis of canonical problems which are set in the reference periodic cells Y i and Y e . First, we choose v 0 i = v 0 e = v 1 e = 0 and (77) becomes σ i ( ∇ x u 0 i + ∇ y u 1 i ), ∇ y v 1 i Ω T ×Yi = 0. ( 86 ) One can observe that such a problem corresponds to the classical cell problem, see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF][START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF]. It can be shown that u 1 i are defined up to a function ũ1 i ∈ L 2 (Ω T ) and can be decomposed as follows u 1 i ( x, y, t) = 3 j=1 X j i ( y) ∇ x u 0 i ( x, t) • e j + ũ1 i ( x, t), (87) where the canonical functions X j i , j = 1..3 belong to H 1 (Y i ) and are uniquely defined by the following variational formulation        σ i ( e j + ∇ y X j i ), ∇ y ψ Yi = 0, ∀ ψ ∈ H 1 (Y i ), Yi X j i d y = 0. ( 88 ) From the canonical functions, one can define the associated effective medium tensor T i as follows ( T i ) j,k = Yi σ i ( ∇ y X j i + e j ) • ( ∇ y X k i + e k )d y. (89) We use exactly the same method in order to define and decouple the cell problem in the extracellular domain Y e and to define the effective medium T e . The macroscopic equations are obtained by taking v 1 i = v 1 e = v 0 e = 0 in (77) and we get σ i ∇ x u 0 i + ∇ y u 1 i , ∇ x v 0 i Ω T ×Yi -C m |Γ Y | u 0 i -u 0 e , ∂ t v 0 i Ω T + Γ Y I ion (u 0 i -u 0 e , w 0 ) dγ , v 0 i Ω T = |Γ Y | I app , v 0 i Ω T -C m |Γ Y | Ω V 0 m ( x) v 0 i ( x, 0) d x. 
(90) Using the decomposition ∇ y u 1 i = Σ 3 j=1 ∇ y X j i ∂ x j u 0 i and ∇ x u 0 i = Σ 3 j=1 ∂ x j u 0 i e j , we obtain

σ i ( ∇ x u 0 i + ∇ y u 1 i ), ∇ x v 0 i Ω T ×Yi = σ i Σ 3 j=1 ( e j + ∇ y X j i ) ∂ x j u 0 i , ∇ x v 0 i Ω T ×Yi = T i ∇ x u 0 i , ∇ x v 0 i Ω T .

This equality allows us to simplify Equation (90). In the same way, for the extra-cellular part, we get

T e ∇ x u 0 e , ∇ x v 0 e Ω T + C m |Γ Y | u 0 i - u 0 e , ∂ t v 0 e Ω T - Γ Y I ion (u 0 i - u 0 e , w 0 ) dγ , v 0 e Ω T = - |Γ Y | I app , v 0 e Ω T + C m |Γ Y | Ω V 0 m ( x) v 0 e ( x, 0) d x. (91)

Note that Equations (90) and (91) are not yet satisfactory because I ion (u 0 i - u 0 e , w 0 ) may depend on y since w 0 is a priori a function of y. However, we have assumed that the initial data do not depend on ε; therefore, we get

- w 0 , ∂ t ψ Ω T ×Γ Y + g(u 0 i - u 0 e , w 0 ), ψ Ω T ×Γ Y = - Ω w 0 ( x) Γ Y ψ( x, 0, y) dγ d x,

which is the weak formulation of the following problem

∂w 0 /∂t + g(u 0 i - u 0 e , w 0 ) = 0 in Ω × (0, T ) × Γ Y , w 0 ( x, 0, y) = w 0 ( x) in Ω × Γ Y . (92)

Since the non-linear term g does not vary at the micro scale and since (u 0 i - u 0 e ) does not depend on y, it can be proven, using Assumption 8, that the solution w 0 of (92) is unique for all y ∈ Γ Y , hence it is independent of the variable y. As a consequence, we have

(1/|Y |) ∫ Γ Y I ion (u 0 i - u 0 e , w 0 ) dγ = A m I ion (u 0 i - u 0 e , w 0 ), with A m := |Γ Y |/|Y |,

and the homogenized macroscopic system reads

- ∇ x • ( T i /|Y |) ∇ x u 0 i + A m C m ∂ t (u 0 i - u 0 e ) + A m I ion (u 0 i - u 0 e , w 0 ) = A m I app in Ω × (0, T ), - ∇ x • ( T e /|Y |) ∇ x u 0 e - A m C m ∂ t (u 0 i - u 0 e ) - A m I ion (u 0 i - u 0 e , w 0 ) = - A m I app in Ω × (0, T ), ( T i • ∇ x u i ) • n = 0 on ∂Ω × (0, T ), ( T e • ∇ x u e ) • n = 0 on ∂Ω × (0, T ), (u 0 i - u 0 e )( x, 0) = V 0 m ( x) in Ω. (93)

System (93)-(92) corresponds to the sought macro-scale equations. Finally, note that we close the problem by recalling Equation (80), ∫ Ω u 0 e d x = 0. Since the analysis has already been done for the microscopic bidomain model, we can infer - up to a subsequence - the existence of a solution of System (92)-(93). The proposed model is a generalization of the very classical macroscopic bidomain model in which constant electric conductivities are considered. Indeed, compared to previous studies, see for example [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] and [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], and to anticipate meaningful modeling assumptions, we have considered at the microscopic level that the electric conductivities are tensorial. This does not appear in the expression of System (93) but is hidden in the definition of the cell-problems (88) and in the definition of the tensor (89). Moreover, we have shown that the classical macroscopic bidomain model formally obtained in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF] and [START_REF] Neu | Homogenization of syncytial tissues[END_REF] is valid under some more general conditions on the ionic terms than those assumed in [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] and [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF].
More precisely, we have extended the validity of the macroscopic bidomain equations to space-varying physiological models.

Appendix 5: Proofs

For λ > 0, t ∈ [0, T ] and v ∈ L 2 ((0, T ); L 2 (Γ m )), we use the weighted norm |||v||| 2 t,λ := ∫ t 0 e -λs v(s) 2 L 2 (Γm) ds. Note that this weighted norm is obviously equivalent to the standard L 2 -norm. The parameter λ is chosen later to obtain existence and uniqueness results. Given any function U ∈ C 0 ([0, T ]; L 2 (Γ m )), we have the following lemma.

Lemma 2 If Assumptions 2, 4, 5 and 7a hold then there exists a unique function w U ∈ C 1 ([0, T ]; L 2 (Γ m )), which is a solution of ∂ t w U + g(U, w U ) = 0, Γ m , ∀ t ∈ [0, T ], w U ( x, 0) = w 0 ( x), Γ m . (94) Moreover if Assumptions 9 and 10 are satisfied then for all t ∈ [0, T ] and almost all x ∈ Γ m , w min ≤ w U ( x, t) ≤ w max .

Proof. By density of continuous functions in L 2 spaces, there exist two sequences {w 0 n } ⊂ C 0 (Γ m ) and {U n } ⊂ C 0 ([0, T ] × Γ m ) such that w 0 n - w 0 L 2 (Γm) -→ n→+∞ 0, and for all t ∈ [0, T ], U n (t) - U (t) L 2 (Γm) -→ n→+∞ 0. Then for all x ∈ Γ m , we denote by w Un ( x, •) the solution of the following Cauchy problem (now x plays the role of a parameter), for all x ∈ Γ m and t ∈ [0, T ],

d dt w Un ( x, •) + g( x, U n ( x, •), w Un ( x, •)) = 0, w Un ( x, 0) = w 0 ( x). (95)

Since we have assumed that g is Lipschitz in its second argument, a standard application of the Picard-Lindelöf theorem shows that there exists a unique solution w Un ( x, •) to this problem, which belongs to C 1 ([0, T ]). Now, for all x ∈ Γ m and for all (n, m) ∈ N 2 , we have

d dt [w Un ( x, •) - w Um ( x, •)] = g( x, U m ( x, •), w Um ( x, •)) - g( x, U n ( x, •), w Un ( x, •)).

We set w n,m := w Un - w Um . We multiply the previous equation by e -λt w n,m and integrate with respect to space and time. After some manipulations we get, for all t ∈ [0, T ],

e -λt 2 w n,m (t) 2 L 2 (Γm) - 1 2 w 0 n - w 0 m 2 L 2 (Γm) + λ 2 |||w n,m ||| 2 t,λ = ∫ t 0 e -λs (g(U m , w Um ), w n,m ) Γm - (g(U n , w Un ), w n,m ) Γm ds.

Since g is globally Lipschitz (Assumption 7a), we have

e -λt 2 w n,m (t) 2 L 2 (Γm) ≤ ( 3L g 2 - λ 2 ) |||w n,m ||| 2 t,λ + 1 2 w 0 n - w 0 m 2 L 2 (Γm) + L g 2 |||U n - U m ||| 2 t,λ . (96)

Then, choosing λ > 3L g in (96), we can deduce that w Un is a bounded Cauchy sequence in L 2 ((0, T ) × Γ m ), which is a Banach space. Therefore the sequence w Un converges strongly to a limit denoted w U . Moreover, since U n and w Un converge strongly in L 2 ((0, T ) × Γ m ) and since g is Lipschitz (Assumption 7a), we have |||g(U n , w Un ) - g(U, w U )||| T,0 -→ n→+∞ 0. Then, by passing to the limit in the weak formulation of (95), it can be proven that the limit w U ∈ L 2 ((0, T ) × Γ m ) is a weak solution of (94). In a second step, by inspection of (94), one can show that ∂w U /∂t ∈ L 2 ((0, T ) × Γ m ). Therefore, thanks to Lemma 1.2 of [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] - up to some modification on zero-measure sets - the function w U satisfies w U ∈ C 0 ([0, T ]; L 2 (Γ m )). This last property implies, again by inspection of (94), that w U ∈ C 1 ([0, T ]; L 2 (Γ m )). Finally, by simple arguments and Assumption 9, the solution w Un satisfies - for all x ∈ Γ m and for all t ∈ [0, T ] - w min ≤ w Un ( x, t) ≤ w max if and only if w min ≤ w 0 n ≤ w max . (97) This can be ensured for every n and at the limit if it is satisfied for the initial data w 0 (i.e.
Assumption 9 holds) and if we choose a sequence {w 0 n } of approximating functions that preserves (97). Such sequences are classically constructed by convolution with parametrized smooth positive functions of measure one and of decreasing supports around the origin (see [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF] Chapter 3 for instance). Theorem 1 If Assumptions 1-7 hold, there exist V m ∈ C 0 ([0, T ]; L 2 (Γ m )) ∩ L 2 ((0, T ); H 1/2 (Γ m )), ∂ t V ∈ L 2 ((0, T ); H -1/2 (Γ m )), and w ∈ H 1 ((0, T ); L 2 (Γ m )), which are solutions of C m ∂ t V m + A V m + I ion V m , w = I app , H -1/2 (Γ m ), a.e. t ∈ (0, T ), ∂ t w + g V m , w = 0, Γ m , a.e. t ∈ (0, T ), (98) and V m ( x, 0) = V 0 m ( x) Γ m , w( x, 0) = w 0 ( x) Γ m . (99) Proof. The proof uses the general ideas of the Faedo-Galerkin technique and some useful intermediate results which come from [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF]. Our proof deals simultaneously with physiological and phenomenological models only if Assumption 7a is satisfied. Our proof is only partial when phenomenological models satisfying Assumption 7b are considered. More precisely, the proof is valid up to Step 4 and we refer the reader to the analysis done in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF] to extend the proof. Step 1: Introduction of the eigenvector basis of the operator A. We introduce {λ k } k≥0 ⊂ R + and {ψ k } k≥0 ⊂ H 1/2 (Γ m ), the set of increasing nonnegative eigenvalues and corresponding eigenvectors such that, for all v ∈ H 1/2 (Γ m ), A(ψ k ), v Γm = λ k (ψ k , v) Γm . Thanks to the properties of the operator A given by Proposition 3 and the fact that H 1/2 (Γ m ) is dense in L 2 (Γ m ) with compact injection, such eigenvalues and eigenvectors exist and the set {ψ k } k≥0 is an orthonormal basis of L 2 (Γ m ) (see [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF], Theorem 2.37). Note that λ 0 = 0 and ψ 0 = 1/|Γ m | 1/2 ). As in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF], we introduce the continuous projection operator P : L 2 (Γ m ) → L 2 (Γ m ) defined by P (v) = k=0 (v, ψ k ) Γm ψ k . It is standard to show that, for any w ∈ L 2 (Γ m ) and v ∈ H 1/2 (Γ m ), lim →+∞ w -P (w) L 2 (Γm) = 0 and lim →+∞ v -P (v) H 1/2 (Γm) = 0. Finally, note that P is also continuous from H 1/2 (Γ m ) to H 1/2 (Γ m ) and P (v) 2 L 2 (Γm) = k=0 |(v, ψ k ) Γm | 2 , c P (v) 2 H 1/2 (Γm) ≤ |(v, ψ 0 ) Γm | 2 + k=1 λ k |(v, ψ k ) Γm | 2 where c > 0 is independent of . Step 2: Local existence result for a corresponding finite dimensional ODE system. Multiplying the equations on V m and w by ψ k and integrating over Γ m suggest to introduce for any given and for all 0 ≤ k ≤ the following system of ordinary differential equations (ODE)        C m d dt V k + λ k V k + Γm I ion (V , w ) ψ k dγ = Γm I app ψ k dγ, d dt w k + Γm g(V , w )ψ k dγ = 0, where we have defined V ( x, t) := k=0 V k (t) ψ k ( x) and w ( x, t) := k=0 w k (t) ψ k ( x). The following initial conditions complete the system V k (0) := V ,0 k = P (V 0 m ), w k (0) := w ,0 k = P (w 0 ). 
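Although this semi-discretization is introduced here purely as a tool for the existence proof, it is also the blueprint of practical Galerkin solvers. The sketch below (Python) integrates such a system numerically under strong simplifying assumptions that are ours, not the paper's: Γ m is replaced by the unit circle, the operator A by -d²/dx² so that an explicit Fourier eigenbasis (λ k , ψ k ) is available, and I ion , g are taken of FitzHugh-Nagumo type with illustrative parameter values. The structure mirrors the system above: the linear part acts diagonally through the eigenvalues λ k , while the non-linear terms are evaluated on a grid and projected back onto the retained modes.

```python
# Illustrative Faedo-Galerkin discretization on a stand-in geometry (unit circle).
import numpy as np
from scipy.integrate import solve_ivp

Cm, a, eps_g, gamma = 1.0, 0.1, 0.01, 0.5      # illustrative constants
L = 8                                          # number of Galerkin modes
nq = 256                                       # quadrature points on the circle
x = np.linspace(0.0, 1.0, nq, endpoint=False)
k = np.arange(L)
lam = (2.0 * np.pi * k) ** 2                   # eigenvalues of the stand-in operator A
psi = np.array([np.ones(nq) if j == 0 else np.sqrt(2.0) * np.cos(2.0 * np.pi * j * x)
                for j in k])                   # orthonormal eigenbasis sampled on the grid

def I_ion(V, w):                               # cubic FitzHugh-Nagumo-type nonlinearity
    return V * (V - a) * (V - 1.0) + w

def g(V, w):                                   # linear gating dynamics
    return eps_g * (gamma * V - w)

def I_app(t, x):                               # localized stimulus, switched off at t = 1
    return np.exp(-100.0 * (x - 0.5) ** 2) * (t < 1.0)

def project(f):                                # (f, psi_k) by the rectangle rule
    return psi @ f / nq

def rhs(t, y):                                 # Galerkin ODE system for (V_k, w_k)
    Vk, wk = y[:L], y[L:]
    V, w = Vk @ psi, wk @ psi                  # reconstruct the fields on the grid
    dVk = (-lam * Vk - project(I_ion(V, w)) + project(I_app(t, x))) / Cm
    dwk = -project(g(V, w))
    return np.concatenate([dVk, dwk])

y0 = np.zeros(2 * L)                           # start from rest
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-6)
print("final mean potential:", sol.y[0, -1])   # coefficient of the constant mode
```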
The idea is to apply standard existence results for an ODE system of the form              d dt V k = i k (t, {V k }, {w k }), d dt w k = g k (t, {V k }, {w k }), V k (0) = V ,0 k , w k (0) = w ,0 k . ( 100 ) where for all integers 0 ≤ k ≤ , the functions i k : [0, T ] × R × R → R and g k : [0, T ] × R × R → R are defined by i k (t, {V k }, {w k }) := - λ k C m V k - 1 C m Γm I ion (V , w ) ψ k dγ + 1 C m Γm I app ψ k dγ, and g k (t, {V k }, {w k }) := - Γm g(V , w ) ψ k dγ. Such functions are continuous in each V k and w k thanks to Lemma 1.3 of [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (see also the proof of Proposition 11). Moreover from Assumption 5 (i.e. I ion and g bounds) and from Assumption 3 (i.e. I app ∈ L 2 ((0, T ) × Γ m )), one can show that there exist positive scalars C and C (depending on ) such that k=0 |i k (t, {V k }, {w k })| 2 + k=0 |g k (t, {V k }, {w k })| 2 ≤ C ( V 2 H 1/2 (Γm) + w 2 L 2 (Γm) + 1) + I app (t) 2 L 2 (Γm) . ≤ C ( k=0 |V k | 2 + k=0 |w k | 2 + 1) + sup t∈[0,T ] I app (t) 2 L 2 (Γm) . This implies that {i k , g k } are L 2 -Carathéodory functions (see Definition 3.2 of [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF]). Then, one can apply Theorem 3.7 of [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF] to show that there exist T 0 ∈ [0, T ] and solutions V k , w k of Equation (100) such that V k ∈ H 1 (0, T 0 ), w k ∈ H 1 (0, T 0 ), 0 ≤ k ≤ . Step 3: Existence result on [0, T ] for the finite dimensional ODE system. Theorem 3.8 of [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF] shows that T 0 > 0 is independent of the initial data. Now, our objective is to show that one can find a bound independent of T 0 on the solution for t ∈ [0, T 0 ]. Then such a uniform bound is used to guaranty existence of solutions up to a time T ≥ T 0 by a recursion argument. By using an energy technique, one can deduce e -λt C m 2 V (t) 2 L 2 (Γm) - C m 2 V (0) 2 L 2 (Γm) + λ C m 2 |||V ||| 2 t,λ = - t 0 e -λs AV , V ds - t 0 e -λs (I ion (V , w ) -I app , V ) Γm ds. (101) Furthermore, for µ > 0 as in [START_REF] Keener | Mathematical Physiology[END_REF] of Assumption 6, we have e -λt µ 2 w (t) 2 L 2 (Γm) + λ µ 2 |||w ||| 2 t,λ = µ 2 w (0) 2 L 2 (Γm) -µ t 0 e -λs (g(V , w ), w ) Γm ds. Summing the two previous equations and using [START_REF] Keener | Mathematical Physiology[END_REF] Estimate (103) shows that the solution remains bounded up to time T 0 . The bound being independent of T 0 , one can repeat the process with initial data V (T 0 ) and w (T 0 ) and therefore construct a solution up to time 2 T 0 (since T 0 is independent of the initial data). Such a solution satisfies (103) with initial data corresponding to V (T 0 ) and w (T 0 ). By repeating this process, we can construct a solution up to time T . Step 4: Strong convergence result for the potential First from (103) and from the coercivity of A (see ( 21)), we can deduce that there exists C > 0 independent of such that T 0 V (t) 2 H 1/2 (Γm) dt ≤ C. (104) One can see that V satisfies, for all v ∈ H 1/2 (Γ m ), C m ∂ t V , v Γm = C m ∂ t V , P (v) Γm = -A(V ), P (v) Γm -I ion (V , w ) -I app , P (v) Γm . (105) Using the continuity of A (Eq. 
( 20)), the continuity of P (v) in H 1/2 (Γ m ), the bound on I ion [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF], and the estimates (103)-( 104), there exists another positive scalar C independent of such that T 0 ∂ t V (t) 2 H -1/2 (Γm) dt ≤ C. From these observations, we deduce that {V } is bounded in Q := v ∈ L 2 ((0, T ); H 1/2 (Γ m )), ∂v ∂t ∈ L 2 ((0, T ); H -1/2 (Γ m )) . Therefore, using the Lions-Aubin compactness theorem introduced in [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (translated into english in [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF]), we know that the space Q is included in the space L 2 ((0, T ) × Γ m ) with compact injection. As a consequence, there exists V m ∈ Q such that, up to a subsequence, V converges weakly to V m in the space L 2 ((0, T ); H 1/2 (Γ m )) and ∂ t V converges weakly to ∂ t V m in L 2 (0, T ; H -1/2 (Γ m )). Moreover, we have lim →+∞ |||V (t) -V m (t)||| T,0 = 0. ( 106 ) Finally from [START_REF] Lions | Non-Homogeneous Boundary Value Problems and Applications[END_REF] (Chapter 1 Theorem 3.2 or Chapter 3 Theorem 4.1), we know that V m ∈ Q ⇒ V m ∈ C 0 ([0, T ]; L 2 (Γ m )). (107) We now want to identify the equations satisfied by the limit terms w and V m in Steps 5 and 6 respectively. These steps have to be adapted in the case where Assumption 7b holds instead of 7a. Step 5a: Strong convergence result and identification of the limit evolution equation for the gating variable Since we have assumed that g is globally Lipschitz, we can deduce a similar strong convergence result for the gating variable w . For all w ∈ L 2 (Γ m ), the equation satisfied by w reads, (∂ t w , P ( w)) Γm +(g(V , w ), P ( w)) Γm = 0 ⇔ (∂ t w , w) Γm +(g(V , w ), P ( w)) Γm = 0, We introduce the unique solution w given by Lemma 2 with U = V m . It is possible to show that the following equation is satisfied (∂ t w -∂ t w , w) Γm + (g(V m , w)g(V , w ), w) Γm + (g(V , w ), w -P ( w)) Γm = 0. Setting w := ww (hence w -P ( w) = w -P (w)), we obtain for almost all time t ∈ (0, T ) for λ > 0 sufficiently large. Step 6a: Identification of the limit evolution equation for the potential We have already shown that w satisfies the equation for the gating variable. We now want to pass to the limit in the space-time weak form of (105) which reads: for all v ∈ C 1 ([0, T ]; H 1/2 (Γ m )) such that v(T ) = 0, we have The first two terms pass to the limit -using the weak convergence of V to V m in Qtherefore, Finally, the last difficulty is to prove that the term I ion (V , w ) converges weakly to the term I ion (V m , w m ). Using Assumption 5, there exists a scalar C > 0 independent of such that From Estimation (103) and the continuous injection H 1/2 (Γ m ) ⊂ L 4 (Γ m ), we can deduce that I ion (V , w ) is uniformly bounded in L 4/3 ((0, T ) × Γ m ). Moreover, since V and w converge almost everywhere to V m and w respectively, I ion (V , w ) converges weakly to the term I ion (V m , w) in L 4/3 ((0, T ) × Γ m ) ⊂ L 2 ((0, T ); H -1/2 (Γ m )), by application of Lemma 1.3 in [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (see also the proof of Proposition 11). We can therefore deduce that V m satisfies the weak formulation of the bidomain equations. 
For all v ∈ C 1 ([0, T ]; H 1/2 (Γ m )) such that v(T ) = 0, we have -C m T 0 ∂ t v, V m Γm dt + T 0 A(V m ), v Γm dt + T 0 I ion (V m , w), v Γm dt = T 0 I app , v Γm dt -C m (V 0 m , v(0)) Γm . ( 112 ) Note that using the weak formulation, we see that the initial condition is V m (0) = V 0 m . However, this has a meaning only if V m is continuous in [0, T ] with value sin L 2 (Γ m ) which is the case (Equation (107)). Moreover, from the weak formulation (112), we deduce System (98) (in the sense of distribution in time). Finally, we deduce the regularity given in the statement of the theorem for ∂ t V m . The regularity for ∂ t w follows. Then, all the terms above can be seen as linear forms on v which are continuous in the norm of L 2 ((0, T ); H 1/2 (Γ m )). Therefore, by the density of functions in C 1 ([0, T ]; H 1/2 (Γ m )) with compact support into L 2 ((0, T ); H 1/2 (Γ m )), Equation (113) is true with v replaced by e -λt V m . Finally, since V m ∈ C 0 ([0, T ], L 2 (Γ m )), it can be shown that (see Theorem 3, Chapter 5 of [START_REF] Evans | Partial differential equations[END_REF] for some ideas of the proof ), Corollary 2 If Assumption 8 holds then the solution of the microscopic bidomain equation [START_REF] Maleckar | Mathematical simulations of ligand-gated and cell-type specific effects on the action potential of human atrium[END_REF] given by Theorem 1 is unique. Proof. The proof is standard. Indeed, we assume that two solutions exist and we show that they must be equal by the energy estimate. We denote by (V 1 , w 1 ) and (V 2 , w 2 ) two solutions of [START_REF] Maleckar | Mathematical simulations of ligand-gated and cell-type specific effects on the action potential of human atrium[END_REF]. Following the same way that we have used to obtain (114) and then (115), we have for ( g(V 2 , w 2 ), w 1w 2 Γm ds. V 1 , V 2 ), e -λt C m 2 V 1 (t) -V 2 (t) 2 L 2 (Γm) + λ C m 2 |||V 1 -V 2 ||| 2 t,λ ≤ - t 0 e -λs I ion (V 1 , w 1 ) -I ion (V 2 , w 2 ), V 1 -V 2 Collecting the two previous equations and using the one-side Lipschitz assumption (Assumption 8), we find C m V 1 (t) -V 2 (t) 2 L 2 (Γm) + µ w 1 (t) -w 2 (t) 2 L 2 (Γm) ≤ e λt (2 L I -λ C m ) ||| V 1 -V 2 ||| 2 t,λ + e λt (2 L I -λ µ) ||| w 1 -w 2 ||| 2 t,λ . Choosing λ large enough, we obtain V 1 = V 2 and w 1 = w 2 . 3 Figure 1 : 31 Figure 1: Cartoon of the considered domain at the microscopic scale and the macroscopic scale. w 0 ), where A m = |Γ Y |/|Y | is the ratio of membrane area per unit volume. Now observe that Equations (90) and (91) are the weak forms of the following set of equations g(V m , w)g(V , w ), ww ) Γm = -(g(V m , w), w -P (w)) Γm . (108)Since the right-hand side vanishes when tends to infinity and using the Lipschitz property of g, we can deduce by a standard energy technique that lim →+∞ |||ww ||| λ,T = 0, (109) -C m T 0 ∂ 0 I 0 I 000 t P (v), V Γm dt + T 0 A(V ), P (v) Γm dt + T ion (V , w ), P (v) Γm dt = T app , P (v) Γm dt -C m (P (V 0 m ), v(0)) Γm . (110) lim →+∞ T 0 ∂ 0 ∂ 0 ∂ 0 A 0 I 0 I 000000 t P (v), V Γm dt = lim →+∞ T t v, V Γm dt = T t v, V m Γm dt.Furthermore, using Proposition 3 and the properties of the operator P , we havelim →+∞ T 0 A(V ), P (v) Γm dt = lim →+∞ T 0 A(V ), v Γm dt = lim →+∞ T (v), V Γm dt = T 0 A(v), V m Γm dt = T 0 A(V m ), v Γm dt. (111)Using only the approximation properties of the operator P , we find lim→+∞ T app , P (v) Γm dt -C m (P (V 0 m ), v(0)) Γm = T app , v Γm dt -C m (V 0 m , v(0)) Γm . 
∫_0^T ||I_ion(V_ℓ, w_ℓ)||^{4/3}_{L^{4/3}(Γm)} dt ≤ C ( ∫_0^T ||V_ℓ||^4_{L^4(Γm)} dt + ∫_0^T ||w_ℓ||^2_{L^2(Γm)} dt + 1 ). Remark 7 (Energy identity for the limit solution). To obtain an energy identity, observe that from the weak formulation (112), C_m ∫_0^T ⟨∂_t V_m, v⟩_{Γm} dt + ∫_0^T ⟨A(V_m), v⟩_{Γm} dt + ∫_0^T ⟨I_ion(V_m, w), v⟩_{Γm} dt = ∫_0^T ⟨I_app, v⟩_{Γm} dt. (113) Moreover, 2 ∫_0^T ⟨∂_t V_m, e^{-λt} V_m⟩_{Γm} dt = e^{-λT} ||V_m(T)||^2_{L^2(Γm)} - ||V_m(0)||^2_{L^2(Γm)} + λ ∫_0^T e^{-λt} ||V_m||^2_{L^2(Γm)} dt. From (113), we deduce (C_m/2) e^{-λT} ||V_m(T)||^2_{L^2(Γm)} - (C_m/2) ||V_m(0)||^2_{L^2(Γm)} + (λ C_m/2) ∫_0^T e^{-λt} ||V_m||^2_{L^2(Γm)} dt + ∫_0^T e^{-λt} ⟨A(V_m), V_m⟩_{Γm} dt + ∫_0^T e^{-λt} ⟨I_ion(V_m, w), V_m⟩_{Γm} dt = ∫_0^T e^{-λt} ⟨I_app, V_m⟩_{Γm} dt. (114) Moreover, from the evolution equation (98) on the gating variable w and since w ∈ H^1([0, T]; L^2(Γm)) and g(V_m, w) ∈ L^2([0, T]; L^2(Γm)), we deduce straightforwardly the energy identity (µ/2) e^{-λT} ||w(T)||^2_{L^2(Γm)} - (µ/2) ||w(0)||^2_{L^2(Γm)} + (λ µ/2) ∫_0^T e^{-λt} ||w||^2_{L^2(Γm)} dt + µ ∫_0^T e^{-λt} ⟨g(V_m, w), w⟩_{Γm} dt = 0. (115) Summing (114) and (115), we get the fundamental energy identity. Using the continuity of the extension of H^{1/2}(Γm) into H^{1/2}(∂Ω_e), the continuity of the trace operator and finally a Poincaré-Wirtinger type inequality, we can show that ||v_e||_{H^{1/2}(Γm)} ≤ C ( ||∇_x v_e||_{L^2(Ω_e)} + (1/|Γm|) |∫_{Γm} v_e dγ| ). Therefore ||v_e||_{H^{1/2}(Γm)} ≤ C ||∇_x v_e||_{L^2(Ω_e)}, since v_e has zero average along Γm, and we finally obtain the relation ||v_e||_{H^{1/2}(Γm)} ≤ C ||j||_{H^{-1/2}(Γm)}, hence the third inequality of the proposition. Remark that our choice of definitions of T_i and T_e implies that (Assumption 1). ∫_{Γm} u_e dγ = 0. (16) Other choices are possible to define u_e uniquely but are arbitrary and correspond to a choice of convention. Assuming that it has regular enough solutions u_α(t) ∈ H^1(Ω_α), for almost all t ∈ [0, T], System (11) is equivalent to. Choosing λ large enough, we can show using Gronwall's inequality (see also Proposition 5) that there exists a constant C_T that depends only on C_m, µ, C_I and T such that, for all t ≤ T_0, e^{-λt} (C_m/2) ||V_ℓ(t)||^2_{L^2(Γm)} + e^{-λt} (µ/2) ||w_ℓ(t)||^2_{L^2(Γm)} + ∫_0^t e^{-λs} ⟨A V_ℓ, V_ℓ⟩_{Γm} ds ≤ (C_m/2) ||V_ℓ(0)||^2_{L^2(Γm)} + (µ/2) ||w_ℓ(0)||^2_{L^2(Γm)} + ∫_0^t e^{-λs} [ (I_app, V_ℓ)_{Γm} + C_I |Γm| ] ds + (C_I - λ C_m/2) |||V_ℓ|||^2_{t,λ} + (C_I - λ µ/2) |||w_ℓ|||^2_{t,λ}, (102) and ||V_ℓ(t)||^2_{L^2(Γm)} + ||w_ℓ(t)||^2_{L^2(Γm)} + ∫_0^t e^{-λs} ⟨A V_ℓ, V_ℓ⟩_{Γm} ds ≤ C_T ( ||V_ℓ(0)||^2_{L^2(Γm)} + ||w_ℓ(0)||^2_{L^2(Γm)} + ∫_0^T ||I_app||^2_{L^2(Γm)} dt + 1 ). (103) For (w_1, w_2), e^{-λt} (µ/2) ||w_1(t) - w_2(t)||^2_{L^2(Γm)} + (λ µ/2) |||w_1 - w_2|||^2_{t,λ} ≤ -µ ∫_0^t e^{-λs} ⟨g(V_1, w_1) - g(V_2, w_2), w_1 - w_2⟩_{Γm} ds. Appendix (proofs of Section 3.3). In what follows, we will work with the following weighted norm: |||v|||^2_{t,λ} := ∫_0^t e^{-λs} ||v(s)||^2_{L^2(Γm)} ds.
127,973
[ "174994" ]
[ "409748", "214668", "419361" ]
01760780
en
[ "info" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01760780/file/quantitative-description-activity9.pdf
Andra Anoaica email: [email protected] Hugo Levard email: [email protected] Quantitative Description of Internal Activity on the Ethereum Public Blockchain Keywords: Blockchain, Ethereum, cryptocurrency, smart contract, graph analysis One of the most popular platform based on blockchain technology is Ethereum. Internal activity on this public blockchain is analyzed both from a quantitative and qualitative point of view. In a first part, it is shown that the creation of the Ethereum Alliance consortium has been a game changer in the use of the technology. In a second part, the network robustness against attacks is investigated from a graph point of view, as well as the distribution of internal activity among users. Addresses of great influence were identified, and allowed to formulate conjectures on the current usage of this technology. I. INTRODUCTION A transaction between individuals, in the classical sense, requires either trust in each parties, or in a third party. Blockchain solves this problem of trust between the actors by using a consensus algorithm. Beyond the debate on whether or not this technology can be qualified as innovative or simply being an aggregation of already existing building blocks (peerto-peer protocols, multi-party validation algorithm etc.), it can be reasonably stated that its usage lies on a new paradigm of communication and value exchange . The number of scientific publications dedicated to the general concept of blockchain was multiplied by 15 between 2009 and 2016, which demonstrates both a growing interest for the technical development of blockchains compounds and for its applications in domains for which it has been thought to be potentially disruptive [START_REF] Ben-Hamida | Blockchain for Enterprise: Overview, Opportunities and Challenges[END_REF]- [START_REF] Tian | An agri-food supply chain traceability system for china based on rfid blockchain technology[END_REF]. Looming over the daily increasing number of available blockchain technologies, Bitcoin is by far the most popular and widely used blockchain. However, in February 2017, a consortium of major I.T. and banking actors announced the creation of the Ethereum Alliance, a large project aiming at developing a blockchain environment dedicated to the enterprise based on the Ethereum blockchain. This event suddenly promoted the latter to the level of world wide known and trustworthy blockchain technologies -that up to then only included Bitcoin -making it an essential element of the blockchain world. Despite an increasing notoriety and a monthly growing fiat money equivalent volume, it remains difficult to find publications dedicated to the establishment of economical and behavioural models aiming at describing internal activity on Ethereum, similarly to the studies performed for the Bitcoin network [START_REF] Athey | Bitcoin pricing, adoption, and usage: Theory and evidence[END_REF]- [START_REF] Chiu | The Economics of Cryptocurrencies-Bitcoin and Beyond[END_REF]. Concomitantly, global, time-resolved or major actors-resolved statistical indicators on the past internal activity and on the blockchain network topology are not commonly found in the literature, while they exist for Bitcoin [START_REF] Kondor | Do the rich get richer? An empirical analysis of the Bitcoin transaction network[END_REF]- [START_REF] Lischke | Analyzing the bitcoin network: The first four years[END_REF]. 
This paper aims at providing basic quantitative insights related to the activity on the Ethereum public network, from the origin, on July 2015, to August 2017. In the first part, correlations in time between internal variables, and between internal variables and the USD/ETH exchange rate are computed. A strong sensitivity of the activity to external events is highlighted. In a second part, the network is analyzed from a graph point of view. Its topology and robustness against attacks is investigated, as well as the distribution of internal activity among users. This leads to the identification of major actors in the blockchain, and to a detailed insight into their influence on the Ethereum economy. II. THE ETHEREUM TECHNOLOGY Similarly to Bitcoin, Ethereum is a public distributed ledger of transactions [START_REF] Wood | Ethereum: A secure decentralised generalised transaction ledger[END_REF]. Yet, the latter differs from the former by major features, among which the existence of smart contracts. Smart contracts are pieces of code that execute on a blockchain. Users or other smart contracts can call its functions, in order to store or transfer tokens, perform simple calculations, dealing with multi-signature transactions etc. The existence of smart contracts allows us to distinguish five different kinds of transactions: • User-to-user: a simple transfer of tokens from one address to another -both addresses can belong to the same physical user. • User-to-smart contract: a signed call to one of the functions of a smart contract. • Smart contract deployment: a transaction that contains the binary code of a compiled smart contract and sent to a special deployment address. • Smart contract-to-smart contract and smart contract-touser: user calling a smart contract might call another function of the same smart contract, or of another smart contract, or again transfer tokens to a user. These are called internal transactions, and their study is beyond the scope of this paper. A. Blocks and transactions In order to collect data, a public Geth [14] node was connected to the Ethereum public network. Once synchronised, the blockchain was stored for further analysis. Within this paper, we only deal with validated transactions, i.e. inserted into a block that was mined and added to the main chain before August 31 st 2017. B. Transactions features Two main transactions features are retained for the analysis. • address: A hexadecimal chain of characters pointing to an account, that can either be a user or a smart contract. Even though there is no direct link between an address and its user identity, some of them are publicly known to belong to major actors such as exchange platforms or mining pools. The correspondence can be found on open access blockchain explorer websites, such as Etherscan 1 . • value: the amount of tokens, expressed in wei, transferred through the transaction. The ether/wei conversion rate is a hard coded constant equal to 10 18 . The time-resolved exchange rate between ether (ETH) and USD is provided by the Poloniex website API. The notions of uncles, gas and gas price, inherent to block validation protocol on Ethereum, are not investigated in this paper. It is worth noting that although the user-to-user transactions gather almost two thirds of the total of all transactions, they carry almost 90% of the transferred amount of tokens. 
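A minimal sketch of such a collection step is given below; it is not the authors' actual pipeline, and the node endpoint, block range and the web3.py client (snake_case API, version 5 or later) are assumptions made only for illustration.

```python
# Illustrative sketch: dump basic transaction features (from, to, value in wei)
# from a synchronised local Geth node over JSON-RPC.  Endpoint and block range
# are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))     # local Geth node (assumed)

records = []
for block_number in range(4_000_000, 4_000_100):           # arbitrary small range
    block = w3.eth.get_block(block_number, full_transactions=True)
    for tx in block.transactions:
        records.append({
            "block": block_number,
            "timestamp": block.timestamp,
            "from": tx["from"],
            "to": tx["to"],              # None for smart contract deployments
            "value_wei": tx["value"],    # 1 ether = 10**18 wei
        })
```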
A detailed investigation of the use of smart contracts reveals that most of them have been called only once, but that a small fraction of them have been massively used; this explains the smallness of the number of smart contract deployments compared to the number of user-to-smart contract transactions. Figure 1 displays the monthly total number of transactions and transferred value, respectively, for each of the three categories of transactions defined above. The first two, ranging within the same orders of magnitude, are plotted together for both kinds of plot ((a) and (c)). 1) Number of transactions: A behavior common to the three categories when it comes to the variation of the number of transactions in time is a sharp increase from March to August 2017 (top two figures). A very similar trend is observed on the same period concerning the USD/ETH exchange rate, leading us to conjecture that these parameters are strongly correlated. However, a careful examination of these variations reveals that two distinct time windows should be distinguished at this stage when investigating correlations between transaction internal features and external features on this network. Indeed, the activity on public blockchains such as Bitcoin or Ethereum, since they allow investing traditional currencies through exchange platforms, may be subject to the same sudden fluctuations as those that can be observed on common market places after external events, such as marketing announcements or financial bankruptcies. In the present case, we can reasonably conjecture that there is a causal relationship between the creation of the Ethereum Alliance on February 28th 2017 and the sharp take-off of the above-mentioned features. Considering the renown of the initial partners, this announcement may have promoted Ethereum to a larger audience, even among the non-specialist public, and may have brought a massive interest from individuals resulting in an exponential growth of the activity in terms of number of transactions of all kinds. Hence, the strong correlation that could be calculated between features on a global time range, because of scale effects, may be biased and not reflect a normal behaviour. To test this hypothesis we computed the Pearson correlation coefficient [START_REF] Mckinney | pandas: a foundational Python library for data analysis and statistics[END_REF] between the USD/ETH exchange rate and the number of each of the three kinds of transactions defined above, for four different aggregation time periods, and for two subsets of data that differ by their latest cut-off date: the first one includes all transactions from the creation date of the blockchain (July 31st 2015) up to the announcement date of the Ethereum Alliance (February 28th 2017), while the second one ends on August 31st 2017. Results are displayed in Table II. When considering the entire blockchain lifetime (unbold figures), we observe a strong correlation coefficient between the exchange rate and both the user-to-user and the user-to-smart contract number of transactions for all aggregation time sizes (between 0.83 and 0.96), which is consistent with the visual impression discussed above. But when excluding the time range [March 2017-August 2017] (bold figures), such a strong correlation only remains for the user-to-user number of transactions, and for aggregation time sizes no shorter than a day (between 0.92 and 0.95).
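A minimal sketch of this computation, assuming a pandas DataFrame `df` of transactions (DatetimeIndex, with a 'kind' column) and a Series `eth_usd` of exchange rates; both names are illustrative, not the authors' actual variables:

```python
# Illustrative sketch: Pearson correlation between the USD/ETH rate and the
# number of transactions, for several aggregation periods and a given cut-off.
import pandas as pd

def correlations(df, eth_usd, kind, cutoff):
    sub = df[(df["kind"] == kind) & (df.index <= cutoff)]
    out = {}
    for label, freq in [("month", "M"), ("week", "W"), ("day", "D"), ("hour", "H")]:
        counts = sub.resample(freq).size()
        rate = eth_usd[eth_usd.index <= cutoff].resample(freq).mean()
        out[label] = counts.corr(rate)          # Pearson by default
    return out

# Example: user-to-user transactions, window ending before the Alliance announcement
# correlations(df, eth_usd, "user-to-user", "2017-02-28")
```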
It turns out that this particular data set is the only one for which the exchange rate variation in time follows the bump observed between March and October 2016, which explains the low correlation coefficient for the two other kinds of transactions. As was conjectured, the Ethereum Alliance creation announcement seems to have been a game changer for the Ethereum internal activity. 2) Values: The total exchanged values by unit of time displayed on plots (c) and (d) of Figure 1 are shown on log scales for clarity. The peak of activity, in terms of number of transactions, in the period that follows the Ethereum Alliance creation translates here into an average multiplicative factor of 10 for the total exchanged value through the user-to-user transactions (bottom left figure), compared to the period that precedes it. As for the range of value transferred through smart contract deployment, it spans two orders of magnitude over the whole blockchain lifetime, and shows no substantial correlation with any of the retained features within this study. To emphasise the rise of interest Ethereum has benefited from between 2016 and 2017, we display in Table III the equivalent in USD of the total value that circulated within the blockchain during the months of June of these two years. The fluctuation of the average amount of tokens transferred per transaction bears no relation to the sudden increase of both the USD/ETH exchange rate and the number of transactions after the Ethereum Alliance creation announcement. The tremendous rise of the total value exchanged is thus a direct consequence of the internal activity increase in terms of number of transactions, and not of a behavior change among the individual addresses in terms of amount of tokens transferred through transactions. The macro perspective presented above can be complemented by a graph analysis of the transaction network. The Ethereum blockchain graph is built by setting the addresses as nodes, the transactions as edges, and using a time window that includes all internal events from the first block on July 31st 2015 to August 31st 2017. The user-to-user and user-to-smart contract transactions correspond to different types of interaction; in this short paper, we thus limit the analysis to user-to-user transactions. The resulting graph contains 5,174,983 nodes (unique user addresses) and 33,811,702 edges (transactions). 1) Network scaling and robustness against attack: The topology is analyzed first. Random networks are modeled by connecting their nodes with randomly placed links, as opposed to scale-free networks [START_REF] Clauset | Power-law distributions in empirical data[END_REF], such as the Internet, where the presence of hubs is predominant. A scale-free architecture implies that the network is rather robust to isolated attacks, while remaining vulnerable to coordinated efforts that might shut down the important nodes. In order to understand potential vulnerabilities of the Ethereum network, we will investigate the presence of central nodes. In accordance with a previous study in which the Bitcoin network is shown to be scale-free [START_REF] Lischke | Analyzing the bitcoin network: The first four years[END_REF], and with what is commonly observed in real networks, a power law distribution of the node degree d of the form c · d^(-α) is expected, with c a constant. Such a fit in the case of Ethereum gives α = 2.32, which lies in the observed range for most real networks [START_REF] Clauset | Power-law distributions in empirical data[END_REF].
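A sketch of such a fit is given below; it assumes an iterable of (sender, receiver) pairs extracted from the blockchain and relies on the networkx and powerlaw packages (the latter implements the maximum-likelihood estimator of Clauset et al.); it is an illustration, not the authors' code.

```python
# Illustrative sketch: build the user-to-user graph from (sender, receiver) pairs
# and fit a discrete power law to its degree distribution.
import networkx as nx
import powerlaw

def degree_power_law(edges):
    G = nx.MultiDiGraph()
    G.add_edges_from(edges)                  # one edge per transaction
    degrees = [d for _, d in G.degree() if d > 0]
    fit = powerlaw.Fit(degrees, discrete=True)
    return G, fit.power_law.alpha, fit.power_law.xmin

# e.g. G, alpha, xmin = degree_power_law(transaction_pairs)   # alpha close to 2.3 here
```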
2) Centrality on the Ethereum Network: In order to determine whether the activity is well spread among the users, or whether there exist major actors or activity monopolies, we make use of three different centrality indicators: • In-Degree/Out-Degree: the number of incoming/outgoing edges a node is connected to; • Betweenness Centrality: an indicator that summarizes how often a node is found on the shortest path between two other nodes and, when communities exist, how well it connects them; • Left Eigenvector Centrality: a measure of the influence of a node based on the centrality of its neighbors. Figure 2a depicts the directed degree distribution of the network as a discrete probability density p, i.e. the probability for a randomly picked node to show a certain in-degree (d_i) and out-degree (d_o). The latter are plotted in logarithmic scale for clarity and, by convention, any initial in- (respectively out-) degree value of 0 is plotted with a -1 in- (respectively out-) degree coordinate, to preserve surface continuity. The probability associated with p is denoted P. It appears that the great majority of users perform a rather limited number of transactions, having an in-degree and an out-degree equal to 1 (30.0%), followed by users that just send transactions once, never receiving any (20.0%). Firstly, a radial anisotropy is seen, owing to larger values along the d_i = d_o line, which implies that the in- and out-degree distributions are not independent variables: with p(d_i) and p(d_o) following a power law distribution, it seems that p(d_i, d_o) ≠ p(d_i) · p(d_o). These results suggest that, regarding the description of the degree distribution, more information on the blockchain network could be obtained using a more sophisticated model than a simple power law [START_REF] Bollobas | Directed scale-free graphs[END_REF], contrarily to what was assumed above. Following these results, the degree spread among addresses is investigated. Figure 2b shows the cumulative in- and out-degree percentage represented, over all users, by the first 100 addresses in descending order of their in-degree or out-degree. It reveals that, out of more than 1 million addresses, just 20 addresses account for more than 60% of the transactions sent and 20% of the transactions received. It is then of interest to look for the identity of these addresses and try to infer their public role on Ethereum. Consequently, we identified the owner of each of the first 20 users in these two lists of addresses, and gathered them under three labels: Mining pool, Exchange platform, and Unknown; results are displayed in Table IV. Among the top 20 addresses that send transactions are found 12 mining pools (60%), 5 exchange platforms (25%) and 3 unknown addresses (15%). The top 20 addresses that receive transactions include 7 exchange platforms or addresses related to one of them; the rest of the addresses are unknown. Since the mining retribution is sent to the pool main address only, we can conjecture that around 40% of transactions consist in token redistribution to miners that contribute to a pool. Similarly, because of the lack of services proposing direct payment in ether, it is likely that miners transfer their earned tokens to exchange platforms to convert them into other currencies, such as dollar or bitcoin. The betweenness centrality of the over 1 million nodes lies within the range 0-1%, apart from two addresses for which it reaches nearly 15%.
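These indicators can be computed, for instance, with networkx as sketched below; the graph G is the user-to-user multigraph defined above, and the number of sampled pivots used to approximate betweenness is a placeholder.

```python
# Illustrative sketch: the three centrality indicators on the user-to-user graph.
# Betweenness on a multi-million-node graph is approximated with k sampled pivots.
import networkx as nx

def centralities(G, n_pivots=1000):
    simple = nx.DiGraph(G)                                    # collapse parallel edges
    return {
        "in_degree": dict(simple.in_degree()),
        "out_degree": dict(simple.out_degree()),
        "betweenness": nx.betweenness_centrality(simple, k=n_pivots),
        "eigenvector": nx.eigenvector_centrality(simple, max_iter=500),  # left (in-edge) variant
    }

# top senders: sorted(c["out_degree"], key=c["out_degree"].get, reverse=True)[:20]
```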
These nodes are important as a high value indicates that a significant number of transactions are connected to this node. How well they connect communities in the network is left for further investigations. Among the top 20 nodes in this category there are 10 exchange services related addresses and 3 mining pools. Among the 21 unique addresses identified as most central, none of them belong to the 20 most central addresses in terms of eigenvector centrality. Because the eigenvector centrality awards higher score to nodes connected to other nodes showing a high connectivity, it can be concluded that the most central nodes, from this perspective, are individuals that interact often with major actors, rather than the latter interacting with themselves. Inspecting the interaction of services previously identified as central, according to the in-and outdegree and betweenness centrality, we compute for each of these 21 addresses the percentage of transactions in which they take part that connect each of them to other members of the group. It is found that none of them has more than 1.17% of outbound transactions within the group. The time-independent network topology was investigated and, as for the node directed connectivity, a sharp asymmetry between the in-and out-degree distribution was noticed. A conjecture on the non-independence between these two features was established. Major actors in terms of number of transactions were identified, as opposed to the vast majority of addresses which are used only once. V. CONCLUSION In this paper, quantitative indicators that summarize the internal activity on the Ethereum blockchain were presented. The study of transaction features temporal variation revealed that the announcement of the Ethereum Alliance creation initiated an increase of the activity by several hundred percent,both in terms of number of transactions and the amount of exchange tokens by unit of time. Thus the subsequent caution in the interpretation of time correlations in a blockchain network was highlighted. The study of the transaction graph revealed that more than 97% of nodes have been engaged in less than 10 transactions. Oppositely, 40 addresses, among which mining pools and exchange platforms, were found to account for more than 60% of the activity, leaving open the question of the health of the Ethereum economic ecosystem. 1 https://etherscan.io/ IV. ACTIVITY ON THE ETHEREUM NETWORK A. Evolution in time of transaction main features Fig. 1 : 1 Fig. 1: Number of different transactions and value transfered over time. The gray line highlights the creation of the Ethereum Alliance. Fig. 2 : 2 Fig. 2: (a) Directed degree distribution (logarithmic color scale) upon in-degree and out degree of user addresses expressed as probability; (b) Cumulated in-degree and cumulated out-degree for the first 100 addresses in descending order of their in-degree or out-degree Table I displays the global percentage of the number of transactions and the amount of tokens that each of the three kinds of transactions defined in Section II represents. Number of transactions Value transferred user-to-user 64.6% 90.5% user-to-smart contract 34.3% 9.5% smart contract deployment 1.1% < 0.1% TABLE I : I Proportions of transactions sent and value transfered through the three kinds of transactions. 
TABLE II: Pearson correlation coefficient over time between the USD/ETH exchange rate and the number of transactions validated, for different aggregation periods (month, week, day and hour), and two time windows; bold figures highlight the time range ending before the creation of the Ethereum Alliance. TABLE IV: Public status of the top 20 addresses according to different measurements of centrality; "Unknown" is assumed to mean neither mining pool nor exchange platform. ACKNOWLEDGMENT This research work has been carried out under the leadership of the Institute for Technological Research SystemX, and therefore granted with public funds within the scope of the French Program Investissements d'Avenir.
21,370
[ "1030374" ]
[ "363822", "2079", "363822" ]
01757087
en
[ "sdu" ]
2024/03/05 22:32:13
2019
https://insu.hal.science/insu-01757087/file/1-s2.0-S0094576517315473-main.pdf
Alain Hérique Dirk Plettemeier Caroline Lange Jan Thimo Grundmann Valérie Ciarletti Tra-Mi Ho Wlodek Kofman Benoit Agnus Jun Du Alain Herique Valerie Ciarletti Tra-Mi Ho Wenzhe Fa Oriane Gassot Ricardo Granados-Alfaro Jerzy Grygorczuk Ronny Hahnel Christophe Hoarau Martin Laabs Christophe Le Gac Marco Mütze Sylvain Rochat Yves Rogez Marta Tokarz Petr Schaffer André-Jean Vieau Jens Biele Christopher Buck Jesus Gil Fernandez Christian Krause Raquel Rodriguez Suquet Stephan Ulamec V Ciarletti T.-M Ho Valerie Ciarletti Tra-Mi Ho Christophe Le Gac Rochat Sylvain Raquel Rodriguez A radar package for asteroid subsurface investigations: Implications of implementing and integration into the MASCOT nanoscale landing platform from science requirements to baseline design Keywords: MASCOT Lander, Radar Tomography, Radar Sounding, Asteroid, Planetary Defense, AIDA/AIM Introduction The observations of asteroid-like bodies and especially their internal structure are of main interest for science as well as planetary defense. Despite some highly successful space missions to Near-Earth Objects (NEOs), their internal structure remains largely unknown [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF][START_REF] Herique | A direct observation of the asteroid's structure from deep interior to regolith: why and how?[END_REF][START_REF] Herique | A Direct Observation of the Asteroid's Structure from Deep Interior to Regolith: Two Radars on the AIM Mission[END_REF]. There is some evidence that an aggregate structure covered by regolith ("rubble pile") is very common for medium size bodies, but there are no direct observations. The size distribution of the constitutive blocks is unknown: is it fine dust, sand, pebbles, larger blocks, or a mixture of all of these? Observations of asteroid-like bodies hint at the existence of a whole range of variation between these very extreme objects. Some may be 'fluffballs' composed entirely of highly porous fine-grained material [START_REF] Thomas | Saturn's Mysterious Arc-Embedded Moons: Recycled Fluff?[END_REF]. There are also very large objects that appear to be at least somewhat cohesive [START_REF] Polishook | The fast spin of near-Earth asteroid (455213) 2001 OE84, revisited after 14 years: constraints on internal structure[END_REF], and possibly monoliths bare of any regolith layer [START_REF] Naidu | Goldstone radar images of near-Earth asteroids[END_REF]. Binary systems in their formation by evolution of asteroid spin state [START_REF] Rubincam | Radiative Spin-up and Spin-down of Small Asteroids[END_REF] appear to disperse, re-aggregate or reconfigure their constitutive blocks over time [START_REF] Jacobson | Dynamics of rotationally fissioned asteroids: Source of observed small asteroid systems[END_REF], leading to a complex geological structure and history [START_REF] Ostro | Radar Imaging of Binary Near-Earth Asteroid (66391)[END_REF][START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF][START_REF] Cheng | Asteroid impact and deflection assessment mission[END_REF]. 
This history includes components of separated binaries appearing as single bodies [START_REF] Busch | Radar observations and the shape of Near-Earth Asteroid[END_REF][START_REF] Scheeres | The geophysical environment of bennu[END_REF] as well as transitional states of the system including highly elongated objects [START_REF] Brozivić | Goldstone and Arecibo radar observations of (99942) apophis in 2012-2013[END_REF], contact binaries [START_REF] Pätzold | A homogeneous nucleus for comet 67p/Churyumov-Gerasimenko from its gravity field[END_REF][START_REF] Kofman | Properties of the 67p/Churyumov-Gerasimenko interior revealed by CONSERT radar[END_REF][START_REF] Biele | The landing(s) of Philae and inferences about comet surface mechanical properties[END_REF] and possibly ring systems [START_REF] Braga-Ribas | A ring system detected around the Centaur (10199) Chariklo[END_REF]. The observed spatial variability of the regolith is not fully explained and the mechanical behavior of granular materials in a low gravity environment remains difficult to model. After several asteroid orbiting missions, these crucial and yet basic questions remain open. Direct measurements are mandatory to answer these questions. Therefore, the modeling of the regolith structure and its mechanical reaction is crucial for any interaction of a spacecraft with a NEO, particularly for a deflection mission. Knowledge about the regolith's vertical structure is needed to model thermal behavior and thus Yarkovsky (cf. [START_REF] Giorgini | Asteroid 1950 DA's Encounter with Earth in 2880: Physical Limits of Collision Probability Prediction[END_REF][START_REF] Milani | Long-term impact risk for (101955)[END_REF]) and YORP accelerations. Determination of the global structure is a way to test stability conditions and evolution scenarios. There is no way to determine this from ground-based observations (see [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF] for a detailed review of the science rationale and measurement requirements). Radar Sounding of Asteroids A radar operating remotely from a spacecraft is the most mature instrument capable of achieving the science objective to characterize the internal structure and heterogeneity of an asteroid, from sub-metric to global scale, for the benefit of science as well as planetary defense, exploration and in-situ resource prospection [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF][START_REF] Milani | Long-term impact risk for (101955)[END_REF][START_REF] Ulamec | Relevance of Philae and MASCOT in-situ investigations for planetary[END_REF]. As part of the payload of the AIM mission a radar package was proposed to the ESA Member States during the Ministerial council meeting in 2016 [START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF][START_REF] Michel | European component of the AIDA mission to a binary asteroid: Characterization and interpretation of the impact of the DART mission[END_REF]. 
In the frame of the joint AIDA demonstration mission, DART (Double Asteroid Redirection Test) [START_REF] Cheng | Asteroid impact and deflection assessment mission[END_REF], a kinetic impactor, was designed to impact the moon of the binary system (65803) Didymos, while ESA's AIM [START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF] was designed to determine the momentum transfer efficiency of the kinetic impact and to observe the target structure and dynamic state. Radar capability and performance are mainly determined by the choice of frequency and bandwidth of the transmitted radio signal. Penetration depth increases with decreasing frequency due to lower attenuation. Resolution increases with bandwidth. Bandwidth is necessarily lower than the highest frequency, and antenna size constraints usually limit the lowest frequency. These are the main trade-off factors for instrument specification, which also has to take into account technical constraints such as antenna accommodation or operation scenarios [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF]. The AIM mission would have had two complementary radars on board, operating at different frequencies in order to meet the different scientific objectives [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF]. A monostatic radar operating at higher frequencies (HFR) can achieve the characterization of the first ten meters of the subsurface with a metric resolution to identify layering and to link surface measurements to the internal structure. Deep interior structure tomography requires a low frequency radar (LFR) in order to propagate through the entire body and to characterize the deep interior. The HFR design is based on the WISDOM radar [START_REF] Plettemeier | Full polarimetric GPR antenna system aboard the ExoMars rover[END_REF][START_REF] Ciarletti | WISDOM GPR Designed for Shallow and High-Resolution Sounding of the Martian Subsurface[END_REF] developed for the ExoMars / ESA-Roskosmos mission, and the LFR is a direct heritage of the CONSERT radar designed for ESA's Rosetta mission. HFR: High Frequency Radar for Regolith Tomography The monostatic HFR radar on board the orbiter spacecraft is a high frequency synthetic aperture radar (SAR) designed to perform reflection tomography of the first tens of meters of the regolith with a metric resolution [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF]. It can image the shallow subsurface layering and connect the surface measurements to the internal structure. The HFR is a stepped frequency radar operating from 300 MHz to 800 MHz in nominal mode and up to 3 GHz in an optional mode. It inherits from the WISDOM radar and is optimized to study small bodies. Table 1 summarizes the main characteristics and budgets of the radar. It provides a decimetric vertical resolution and better than one meter resolution in the horizontal direction, depending on the spacecraft speed relative to the asteroid surface. This high resolution allows characterizing the spatial variation of the regolith texture, which is related to the size and mineralogy of the constituting grains and macroporosity.
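As an order-of-magnitude illustration of these trade-offs, the standard relations below link bandwidth to range resolution and frequency to one-way attenuation in a lossy regolith-like medium; the permittivity and loss tangent used here are placeholders, not measured Didymos values.

```python
# Illustrative sketch: range resolution vs. bandwidth and one-way attenuation vs.
# frequency for a lossy regolith-like medium (eps_r and tan_delta are placeholders).
import numpy as np

c = 299_792_458.0                        # m/s
eps_r, tan_delta = 4.0, 0.01             # assumed regolith permittivity and loss tangent

def range_resolution(bandwidth_hz):
    """Resolution inside the medium: c / (2 B sqrt(eps_r))."""
    return c / (2.0 * bandwidth_hz * np.sqrt(eps_r))

def attenuation_db_per_m(freq_hz):
    """Low-loss approximation: 8.686 * (pi f sqrt(eps_r) / c) * tan_delta."""
    return 8.686 * np.pi * freq_hz * np.sqrt(eps_r) / c * tan_delta

print(range_resolution(500e6))           # HFR nominal 300-800 MHz band -> ~0.15 m
print(attenuation_db_per_m(60e6), attenuation_db_per_m(3e9))
```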
A primary objective of the HFR within the AIM mission was the characterization of the regolith of Didymoon, the Didymos system's secondary body or moon of the primary, Didymain.The HFR was supposed to survey Didymoon before and after the DART impact, in order to determine the structure and layering of the secondary's shallow subsurface down to a few meters. The tomography of the DART artificial impact crater would further provide a better estimate of the ejected mass to model the momentum transfer efficiency. With a single acquisition sequence, Didymoon mapping provides the 2D distribution of geomorphological elements (rocks, boulders, etc.) that are embedded in the subsurface. Multipass acquisition and processing is required to obtain the 3D tomography of the regolith. Another primary objective is the determination of the dielectric properties of the subsurface of Didymoon. The dielectric permittivity can be derived from the spatial signature of individual reflectors or by analyzing the amplitude of the surface echoes. Instrument Design The HFR electronics (Figure 2) uses a heterodyne system architecture utilizing two frequency generators to form a stepped frequency synthesizer. Transmitted wave as well as the local oscillator frequencies are generated separately and incoherently with phase-locked loop (PLL) synthesizers. A functional block schematic of the radar system shows Figure 1. Its front-end mainly consists of a high output power transmitter and two dedicated receivers. The antenna is fed by a 0° and a 90° phase shifted signal to generate circular polarization for the transmitted wave, using a 90° hybrid divider. The transmitter output is muted during reception by switching off the power amplifier output, in order to not overload the receivers. A separate receiver processes one of the receive polarization respectively. For the SAR operation mode an ultra-stable frequency reference provides a stable reference to the digital and RF electronics. All modules are supplied by a dedicated DC/DC module, which provides all necessary supply voltages for the individual blocks from a single primary input voltage. The receiver's superheterodyne architecture uses a medium intermediate frequency at the digitizer input. This ensures high performance by eliminating the 1/f noise, thereby improving noise and interference performance. A calibration subsystem allows for a calibration of the horizontal (H) and vertical (V) receiver regarding image rejection, inter-receiver phase and amplitude balance. The received H-and V-signals are compensated subsequently to ensure very high polarization purity. The Digital Module (DM) is built around a Field Programmable Gate Array (FPGA) and microcontroller. It controls and manages the data flow of the instrument. This includes digital signal processing of the measurement data, short time data accumulation, transfer to the spacecraft and processing of control commands for radar operation. The antenna system comprises of a single antenna, which transmits circular polarization and receives both linear polarizations. This ultra-wideband dualpolarized antenna system operates in the frequency range from 300 MHz to 3.0 GHz. Figure 3 shows a 3D model and its corresponding antenna prototype. Antenna diagrams are shown in Figure 4. Operations and Operational Constraints The requirements for the HFR instrument are strongly driven by the acquisition geometry. 
Indeed, Synthetic Aperture Radar reflection tomography in 3D requires observations of different geometries and can only be achieved by constraining the spacecraft motion and position with respect to the observed target. For each acquisition geometry, the radar acquires the signal returned by the asteroid as a function of propagation time, which is a measure of the distance from the spacecraft to the observed body. This range measurement resolves information in a first spatial dimension. The resolution in that dimension is given by the bandwidth of the radar signal and is significantly better than one meter (~30 cm in vacuum). For kilometric-size asteroids, the rotational period is generally on the order of a few hours, much smaller than the spacecraft orbital period during remote observation operations within the Hill sphere. In the Didymos system, the main body's rotation period is 2.3 h. Its moon orbits the primary in 11.9 h and it is expected to rotate synchronously [START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF]. The spacecraft's orbital period in a gravitationally bound orbit of 10 km radius is nearly two weeks. It can also be at rest relative to the system's barycenter while on a heliocentric station-keeping trajectory. For the processing, we consider the body-fixed frame with a spacecraft moving in the asteroid sky (Figure 7). Thus, the relative motion along the orbit plane between the spacecraft and the moon resolves a second dimension by coherent Doppler processing (Figure 5) [START_REF] Cumming | Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation[END_REF]. This brute-force SAR processing takes into account the observation geometry to give 2D images of the body surface, mixing surface and subsurface features in the same pixel (Figure 5c). For a spherical body this induces an ambiguity between the North and South hemispheres [START_REF] Hagfors | Mapping of planetary surfaces by radar[END_REF], which corresponds to the aliasing of the North target to the South hemisphere in Figure 5a. Therefore, the resolution is determined by the length of the observation orbit arc (i.e., the Doppler bandwidth) and is better than one meter for an arc of 20° longitude. For the Didymos system, the surfaces of the primary and secondary object show very different Doppler behavior due to their different periods of rotation. This allows resolving ambiguities when both are inside the radar's field of view. In addition, a spacecraft position out of the equatorial plane breaks the symmetry. Shifting the signal partially out of the orbital plane reduces the North-to-South ambiguities, as the alias is spread and a less powerful alias remains in the other hemisphere (South in Figure 5a). The accuracy requirement for the spacecraft pointing is typically 5°, while the reconstructed trajectory accuracy requirement is on the order of hundreds of meters. The orbit restitution accuracy can be improved by the SAR processing itself using autofocus techniques [START_REF] Carrara | Spotlight Synthetic Aperture Radar: Signal Processing Algorithms[END_REF]. To achieve a 3D tomography, the third dimension to be resolved needs to be orthogonal to the orbit plane (Figure 7). To do so, the HFR instrument performs several passes at different latitudes. Typically, 20 passes allow a metric resolution.
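A strongly simplified sketch of this brute-force focusing step (time-domain back-projection of range-compressed echoes onto candidate surface points in the body-fixed frame) is given below; the array names, band-centre value and interpolation scheme are illustrative assumptions, not the flight processing chain.

```python
# Illustrative sketch: time-domain back-projection of range-compressed echoes.
# echoes[i, :] is the complex range profile acquired at spacecraft position pos[i, :]
# (body-fixed frame); 'targets' are the surface points to be imaged.
import numpy as np

c = 299_792_458.0
f0 = 550e6                                   # assumed band centre of the 300-800 MHz mode

def backproject(echoes, pos, fast_time, targets):
    image = np.zeros(len(targets), dtype=complex)
    for i in range(len(pos)):
        rng = np.linalg.norm(targets - pos[i], axis=1)       # spacecraft-to-pixel distance
        delay = 2.0 * rng / c                                 # two-way travel time
        sample = np.interp(delay, fast_time, echoes[i].real) \
               + 1j * np.interp(delay, fast_time, echoes[i].imag)
        image += sample * np.exp(1j * 2.0 * np.pi * f0 * delay)   # phase compensation
    return np.abs(image)
```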
The spacecraft position evolves in a declination and right ascension window centered around 30° radar incidence on the observed target point (Figure 7). The extent of this window is about 20°. Each pass lasts for one to two hours and is traversed at close to constant declination. The spacecraft is in very slow motion, of a few mm/s, along this axis orthogonal to the orbit plane. Such a velocity is difficult to achieve in operations. A proposed solution is to combine this slow motion with a movement along the orbit axis. All the passes can be done in a single spacecraft trajectory. Each pass corresponds to the period when the HFR is facing the moon that is orbiting around the main body (Figure 7). In this multi-pass scenario, the resulting resolution for the third direction is 1 m, and it is the limiting one (Figure 6). The distance between the HFR instrument and its target is limited by the radar link budget for the upper boundary and by the speed of the electronic system for the lower boundary. The HFR is expected to operate from 1 km up to 35 km, the nominal distance being 10 km. LFR: Low Frequency Radar for Deep Interior Sounding Deep interior structure tomography requires a low frequency radar to propagate through the entire body. The radar wave propagation delay and the received power are related to the complex dielectric permittivity (i.e. composition and microporosity) and the small-scale heterogeneities (scattering losses), while the spatial variation of the signal and multiple propagation paths provide information on the presence of heterogeneities (variations in composition or porosity), layers, large voids or ice lenses. A partial coverage will provide 2D cross-sections of the body; a dense coverage will allow a complete 3D tomography. Two instrument concepts can be considered (Figure 8): a monostatic radar like MARSIS/Mars Express (ESA) [START_REF] Picardi | Radar soundings of the subsurface of Mars[END_REF], analyzing radar waves transmitted and received by the orbiter after reflection at the asteroid's surface and internal structure, or a bistatic radar like CONSERT/Rosetta (ESA) [START_REF] Kofman | The Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT). A short description of the instrument and of the commissioning stages[END_REF], analyzing radar waves transmitted by a lander, propagated through the entire body and received by the orbiter. The monostatic radar sounder requires very low operation frequencies, necessitating the use of large antennas. It is also more demanding in terms of mission resources (mass, data flow, power), driving all the mission specifications. In contrast to the monostatic approach, a bistatic radar can use slightly higher frequencies, simplifying the accommodation on the carrier mission as well as on the surface package. The bistatic low frequency radar measures the wave propagation between the surface element and an orbiter through the target object, like Didymoon. It provides knowledge of the deep structure of the moon, key information needed to model binary formation and stability conditions. The objective is to discriminate monolithic structures from building blocks, to derive the possible presence of various constituting blocks and to provide an estimate of the average complex dielectric permittivity. This information relates to the mineralogy and porosity of the constituting material.
Assuming a full 3D coverage of the body, the radar determines 3D structures such as deep layering, spatial variability of the density, of the block size distribution and of the permittivity. As a beacon on the surface of Didymoon, it supports the determination of the binary system's dynamic state and its evolution induced by the DART impact (a similar approach as used for the localization of the Philae lander during the Rosetta mission [START_REF] Herique | Philae localization from CONSERT/Rosetta measurement[END_REF]). Instrument Design The LFR radar consists of an electronic box (E-Box) shown in Figure 13 and an antenna set on each spacecraft (i.e. lander and orbiter). Both electronic units are similar: two automats sending and receiving a BPSK code modulated at 60 MHz in time-sharing (Figure 9 and Figure 14). This coded low frequency radar is an in-time transponder inherited from CONSERT on board Rosetta (ESA) [START_REF] Kofman | The Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT). A short description of the instrument and of the commissioning stages[END_REF]: in order to measure accurately the propagation delay, a first propagation path from the orbiter to the lander is processed on-board the lander. The detected peak is used to resynchronize the lander to the orbiter. A second propagation from the lander to the orbiter constitutes then in itself the science measurement (Figure 10). This concept developed for CONSERT on board Rosetta [START_REF] Kofman | The Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT). A short description of the instrument and of the commissioning stages[END_REF] allows measuring the propagation delay with a raw accuracy better than 100 ns over a few tens of hours of acquisition using a quartz oscillator with a frequency stability in the range of 10 -7 . This accuracy can be increased up to 20 ns by on-ground processing post-processing [START_REF] Pasquero | Oversampled Pulse Compression Based on Signal Modeling: Application to CONSERT/Rosetta Radar[END_REF], yielding a typical accuracy better than a few percent of the average dielectric permittivity. The LFR characteristics and budgets are summarized in Table 1. As the LFR antennas cannot be deployed immediately after separation from the carrier spacecraft due to the need to relocate from the landing area to the LFR operating area, an antenna deployment mechanism is required, which needs to be operable in the low gravity environment on the surface of Didymoon. Astronika has designed a mechanical system deploying a tubular boom with a total mass of ~0.25 kg. It is able to deploy the 1.4 m antennas consuming only ~2 W for ~1 minute. On the lander, the main antenna (V shape in Figure 11 and Figure 12) is deployed after reaching its final location. It provides linear polarization with high efficiency for the sounding through the body. A secondary antenna set with lower efficiency is deployed just after lander separation to allow operations in visibility during descent and lander rebounds, and for secondary objectives and operational purposes. The use of circular versus linear polarization induces limited power losses but reduces operational constraints on the spacecraft attitude. The LFR antenna on the orbiter is composed of four booms at the spacecraft corners in order to provide circular polarization. 
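The link between the measured propagation delay and the average permittivity can be sketched with a straight-ray approximation as follows; the chord length and delay used in the example are placeholders, chosen only to show that the quoted 20 ns timing accuracy translates into a few percent on the permittivity.

```python
# Illustrative sketch: average (real) permittivity from a through-body propagation
# delay, straight-ray approximation t = d * sqrt(eps_r) / c.
c = 299_792_458.0

def mean_permittivity(delay_s, path_length_m):
    return (c * delay_s / path_length_m) ** 2

# Example with placeholder numbers: a 160 m chord crossed in ~0.84 us
eps = mean_permittivity(0.84e-6, 160.0)

# Propagating a 20 ns timing accuracy into a permittivity uncertainty
d_eps = 2.0 * (c / 160.0) ** 2 * 0.84e-6 * 20e-9
print(eps, d_eps / eps)      # relative error of a few percent, as stated in the text
```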
Operations and Operational Constraints
Tomographic sections in bistatic mode are created in the plane of the moving line of sight through the target object between the lander and the orbiter passing by underneath. A full volume tomography is then assembled from a succession of several (as many as feasible) such pass geometries, adjusted by changes in the orbiter trajectory. Considering the rugged topography of asteroids and the fact that all sections of the target object converge at the lander location, it is advantageous to have multiple landers and/or lander mobility, in order to ensure full volume coverage and uniform resolution of the target's interior. Lander mobility is particularly useful in binary systems, where the lines of sight as well as the lander release, descent, landing, and orbiter pass trajectories can be constrained by the two objects and their orbital motions relative to the spacecraft. The complex shapes and gravity fields of contact binaries or extremely elongated objects can create similar constraints. The geometric constraints on the operational scenario for the bistatic experiment are driven by scientific and technical requirements on both the orbiter and the lander platform. Considering simultaneously the baseline mission science data volume, the minimum orbiter mission duration in the frame of the AIDA mission, and the worst-case power constraints on board the lander, it is not possible to ensure full coverage of Didymoon according to the Nyquist criterion, i.e. λ/2 at the surface of the body. Under this constraint, when a full tomography of the body [START_REF] Barriot | A two dimensional simulation of the CONSERT experiment (Radio tomography of comet Wirtanen)[END_REF][START_REF] Pursiainen | Detection of anomalies in radio tomography of asteroids: Source count and forward errors[END_REF] is not feasible with a priori information [START_REF] Eyraud | Imaging the interior of a comet from bistatic microwave measurements: Case of a scale comet model[END_REF], statistical tomography still allows heterogeneity scales to be characterized [START_REF] Herique | A characterization of a comet nucleus interior[END_REF] and composition information to be retrieved (for CONSERT see also [START_REF] Ciarletti | CONSERT constrains the internal structure of 67p at a few meter size scale[END_REF] and [START_REF] Herique | Cosmochemical implications of CONSERT permittivity characterization of 67p/CG[END_REF]). However, it is likely that a combination of a higher data volume through additional passes, the allocation of more ground station time, or a mission extension, together with any better-than-worst-case power availability on the lander platform, can result in a much better tomographic coverage. To achieve a good coverage of Didymoon, seven to ten tomography slices need to be collected, each measurement sequence taking about 10 hours. Those slices must also be sufficiently separated in space; thus, the spacecraft has to be able to operate at various latitudes relative to Didymoon. A single acquisition sequence is composed of a period of visibility, a period of occultation and again a period of visibility between orbiter and lander. The first visibility period is mandatory for time synchronization between the orbiter and the lander platform. The science measurements are performed during the occultation period. The last visibility slot is reserved for calibration.
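To give a feel for the acquisition budget behind these numbers, the sketch below counts the soundings collected in one 10-hour slice at the nominal 5-second pulse repetition from Table 1 and spreads the ~0.3 Gbit lander science data allocation over seven to ten slices. The per-sounding figure derived this way is only indicative and is not a stated instrument specification.

# Illustrative sketch: rough LFR acquisition budget from the figures quoted in this paper.
PULSE_REPETITION_S = 5.0    # Table 1
SLICE_DURATION_H = 10.0     # duration of one measurement sequence
LANDER_DATA_GBIT = 0.3      # typical lander science data volume, Table 1

soundings_per_slice = SLICE_DURATION_H * 3600.0 / PULSE_REPETITION_S
for n_slices in (7, 10):
    total_soundings = n_slices * soundings_per_slice
    per_sounding_kbit = LANDER_DATA_GBIT * 1e6 / total_soundings   # Gbit -> kbit
    print(f"{n_slices} slices: {total_soundings:,.0f} soundings, "
          f"~{per_sounding_kbit:.1f} kbit available per sounding")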
The accuracy of the orbiter trajectory reconstruction needs to be typically a few meters, whereas the attitude reconstruction accuracy should be of the order of 5°. The radar link budget constrains the operational distance from the orbiter unit to the lander unit to about 10 km. Concerning the lander, proper operation of the LFR imposes constraints on the landing site selection (Figure 15 and Figure 16). The acquisition geometry is constrained by Didymoon's motion around the main body. Most likely, it is in 1:1 spin-orbit resonance, which means that the side facing the main body is always the same, as with Earth's Moon. With the spacecraft moving along the latitude axis, the lander needs to land near the equator of Didymoon, i.e. between -15° and +15° latitude, in order to achieve alternating visibility and occultation periods. In that case, the orbiter spacecraft will be able to cover a range of latitudes between -25° and +25°. This alternation also constrains the longitude of the landing site to a zone between -120° and +120°, with optimal science return between -60° and +60°. The landing site is further constrained by the lander platform's solar energy availability, which means having to avoid eclipses by the main body, resulting in a "forbidden zone" between -45° and +45° of longitude.
Figure 16: LFR landing site possible areas (green: optimal, yellow: acceptable, red: impossible); 0° longitude corresponds to the point facing the main body of the Didymos system.
Integration into the MASCOT2 Lander Platform
The MASCOT2 lander for the AIM mission is derived from the MASCOT lander, originally designed for and flying on the HAYABUSA2 mission to asteroid (162173) Ryugu [START_REF] Ho | MASCOT -The Mobile Asteroid Surface Scout Onboard the Hayabusa2 Mission[END_REF]. In order to integrate a radar instrument into the lander system, originally envisaged for a short lifetime and mobile scouting on an asteroid surface, several changes are incorporated to cope with the measurement and instrument requirements of the radar package. Table 2 shows a summary of the main differences and commonalities between the original MASCOT and the proposed MASCOT2 variant of the lander platform [START_REF] Lange | MASCOT2 -A Small Body Lander to Investigate the Interior of 65803 Didymos' Moon in the Frame of AIDA/AIM[END_REF][START_REF] Biele | MASCOT2, a Lander to Characterize the Target of an Asteroid Kinetic Impactor Deflection Test Mission[END_REF]. The LFR E-Box (Figure 13) is designed to be compatible with the volume available in the MASCOT2 lander platform. The MASCOT2 lander design is ideally suited to incorporate different suites of payloads, which means that a mechanical integration of the LFR E-Box would have no impact on the overall accommodation. The integration of the LFR's primary antennas and their deployment mechanisms requires a slightly larger effort due to volume restrictions in the bus compartment of the lander. The antenna system is designed to match both the requirements of the MASCOT2 lander and the influence of the surface and subsurface in the vicinity of the lander. EM simulations are used to verify the suitability of the antenna system accommodation. Figure 17 shows a simulation setup and a typical 3D radiation pattern assuming a flat surface. Figure 18 shows the antenna far-field diagrams in two planes perpendicular to the surface for 50 MHz, 60 MHz and 70 MHz.
From an electrical point of view, the integration of the instrument into the lander platform is challenging in two ways: (1) the operational concept along with the overall architecture had to be optimized in order to be compatible with a long-duration, high-power measurement mode, and (2) precise timing is needed in order to achieve usable instrument characteristics. Both aspects center on the energy demand of the LFR instrument and related services, which is driven by the need to support many repeated, long continuous runs. In contrast, MASCOT aboard HAYABUSA2 is designed to fulfil a short-duration scouting mission. It is expected to operate only on two consecutive asteroid days of ~7.6 h each. The design-driving power consumption results from the operations of the MicrOmega instrument (~20 W total battery power for only ~½ hr) and the mobility unit (up to ~40 W for less than 1.5 s). The energy for this mission is completely provided by a non-rechargeable battery. The choice of primary batteries was partly driven by the fact that such a power system operates independently of the topographic illumination [START_REF] Grundmann | One Shot to an Asteroid -MASCOT and the Design of an Exclusively Primary Battery Powered Small Spacecraft in Hardware Design Examples and Operational Considerations[END_REF]. A short mission duration also implies few opportunities and little time for ground-loop intervention; thus the power subsystem operates permanently hot-redundant and provides many automatic functions. This leads to an elevated idle power consumption of about 6.5 W, rising to about 10 W with the continuous activity of the MARA and MasMAG instruments. This simplicity of the original MASCOT concept comes at the expense of a very significant thermal design and control effort, required to keep the primary battery cold during interplanetary cruise in order to prevent self-discharge, and warm during on-asteroid operation to ensure maximum use of the available capacity. The support of the LFR with its long-duration, high-power measurement mode requires modifications to the platform design due to thermal aspects. The MicrOmega (MMEGA, [START_REF] Pilorget | NIR reflectance hyperspectral microscopy for planetary science: Application to the MicrOmega instrument[END_REF]) instrument, accommodated at the respective location in the original MASCOT lander, requires cold operation due to its infrared sensor and optics. The LFR E-Box, in contrast, can operate in the typical "warm" conditions of other electronics modules (Figure 19). Therefore, its mass can be used, together with the bus E-Box and mobility mechanisms, to augment thermal energy storage around the battery, improving the mass-to-surface ratio of the warm compartment and saving electrical energy which would otherwise be required for heating. For this purpose, the cold compartment on the payload side of the lander was reduced to a "cold corner" or pocket around the camera, MasCAM [START_REF] Jaumann | The Camera of the MASCOT Asteroid Lander on Board Hayabusa2[END_REF], and the radiometer, MARA [START_REF] Grott | The MASCOT Radiometer MARA for the Hayabusa 2 Mission[END_REF].
The accommodation of the magnetometer, MasMAG [START_REF] Herčík | The MASCOT Magnetometer[END_REF], as on MASCOT was considered for optional use together with the proposed magnetometer experiments aboard COPINS, sub-satellites to be inserted into orbit in the Didymos system by the AIM spacecraft [START_REF] Walker | Cubesat opportunity payload inter-satellite network sensors (COPINS) on the ESA asteroid impact mission (AIM), in: 7th Interplanetary Cubesat Workshop[END_REF]. A triaxial accelerometer, DACC, was added in order to observe the interaction of the lander with the surface regolith during touch-down, bouncing and self-righting, its reaction to motion during deployment operations, and possibly the DART impact shock wave. For the long-duration MASCOT2 mission, the mission energy demand will be orders of magnitude higher due to the repeated long continuous LFR runs. Thus, a rechargeable battery and photovoltaic power are required. The design-driving power consumption results from the LFR instrument operating for several hours at a time (see Table 1), which defines the minimum battery capacity, and from the simultaneous operation of the dual mobility mechanism. Both have a similar peak power demand, defining the required power handling capability. A deployable photovoltaic panel is necessary to satisfy the energy demand of LFR operations without overly long recharging periods between LFR sounding passes. The panel will be released after the MASCOT2 lander has relocated to the optimal LFR operations site on Didymoon, self-righted there, and deployed the LFR antennas. The possibility to recharge the battery and wait for ground-loop intervention allows mainly cold-redundant operations and reduces the need for highly sophisticated autonomy within the power subsystem. This alone greatly reduces idle power consumption, and thus the battery capacity required to survive the night. Further reduction of idle consumption is achieved by optimizing the electronics design. However, the energy demand of the LFR is such that a much deeper discharge of the battery will occur than would usually be accepted for Earth-orbiting spacecraft. This will reduce battery lifetime. Thus, some fundamental autonomous functions are used to protect the system from damage by short circuits or deep discharge of the battery and to ensure a restart after the battery has accumulated sufficient energy. For this purpose, the photovoltaic power conversion section charging the battery is self-supplied and does not require battery power to start up. In case the battery gets close to the minimum charge level, e.g. when an LFR run cannot be properly terminated due to an unforeseen event, all loads are disconnected so that all incoming photovoltaic power can be used for recharge. State-of-the-art rechargeable batteries can operate sufficiently well and with only minor operational restrictions at cell temperatures from about -20°C to +50°C, nearly as wide as the temperature range of the primary battery of MASCOT, but with much better performance in cold conditions below +20°C. In case the temperature is too low to allow the maximum charging rate, all excess photovoltaic power is diverted to a battery heater [START_REF] Dudley | ExoMars Rover Battery Modelling & Life Tests[END_REF]. During use and in favorable illumination on the ground, battery-life-extending charge control is applied [START_REF] Neubauer | The Effect of Variable End of Charge Battery Management on Small-Cell Batteries[END_REF].
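A back-of-the-envelope energy budget illustrates why a rechargeable battery with a deployable panel is the natural choice. The sketch below combines the LFR figures from Table 1 (~10 W mean power) and the ~10 h sounding sequence with assumed values for the platform idle power, usable battery capacity and allowed depth of discharge; none of these platform numbers are stated in the paper and they serve only to show the kind of sizing trade-off involved.

# Back-of-the-envelope sketch (assumed platform numbers): energy for one LFR sounding slice.
LFR_MEAN_POWER_W = 10.0       # LFR lander unit mean power, Table 1
SLICE_DURATION_H = 10.0       # one measurement sequence, from the operations section

PLATFORM_IDLE_W = 2.0         # assumed lander idle consumption during the run
USABLE_BATTERY_WH = 200.0     # assumed usable battery capacity
MAX_DEPTH_OF_DISCHARGE = 0.8  # assumed limit protecting battery lifetime

energy_per_slice_wh = (LFR_MEAN_POWER_W + PLATFORM_IDLE_W) * SLICE_DURATION_H
depth_of_discharge = energy_per_slice_wh / USABLE_BATTERY_WH
print(f"Energy per slice: {energy_per_slice_wh:.0f} Wh -> depth of discharge {100 * depth_of_discharge:.0f} %"
      f" ({'within' if depth_of_discharge <= MAX_DEPTH_OF_DISCHARGE else 'beyond'} the assumed limit)")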
As described, the demands of a long-lived, high-energy mission can be met with a deployable photovoltaic panel. As an alternative, a moderate enlargement of the MASCOT-like box shape was also considered as an option for the AIM mission. It could provide the same daily average power level. Depending on which sides of the lander are enlarged, the immediately available photovoltaic power can be adjusted within the daily cycle. A flat shape with a similar top plate area as the deployed panel of MASCOT2 increases power generation around noon, while higher or wider sides increase power at sunrise and sunset (assuming a clear view to the horizon at the landing site). The increased volume, if provided by the carrier mission, can be used to accommodate additional instruments or a larger battery, also providing more robustness during the relocation phases. Depending on the antenna design, relocation for more extensive LFR tomography also becomes possible. It is thus possible to combine investigations of the interior and of the surface mineralogy as carried out by MASCOT. The mass increase is little more than the instrument's, i.e. the bus mass would increase by about 10% with the addition of one relatively large instrument. If the carrier mission provides still more mass allowance, a set of multiple MASCOT-type landers based on a common infrastructure but carrying different instruments and individually optimized for these can also be considered [START_REF] Grundmann | Capabilities of GOSSAMER-1 derived Small Spacecraft Solar Sails carrying MASCOT-derived Nanolanders for In-Situ Surveying of NEAs[END_REF].
Design Methodologies for Lander Design Reuse
The "mother" mission of MASCOT, the HAYABUSA2 mission, has benefited greatly from its predecessor HAYABUSA. It reused main portions of the design and improved on its main weaknesses based on lessons learned, such as the antenna, the orientation control and engine as well as the sampling approach [START_REF] Tsuda | Flight status of robotic asteroid sample return mission HAYABUSA2[END_REF]. Other than this particular example, and except for the well-known and documented reuse of the Mars Express flight spare hardware for the Venus Express mission [START_REF] Mccoy | Call for Ideas for the Re-Use of the Mars Express Platform -Platform Capabilities[END_REF][START_REF] Mccoy | The Venus express mission[END_REF], the MASCOT2 re-use exercise is the only known system-level reuse of a previously flown deep space system in a new environment and with an almost completely new science case, as described above. The fostered and maximized re-use of an already very precisely defined system for a very different mission recreates the unusual situation of an extremely wide range of subsystem maturity levels, from concepts to already flown designs. The integration of new instruments like the LFR radar is one of these lower-maturity cases.
New design methodologies based on Concurrent Engineering and Model Based Systems Engineering can enhance the redesign, instrument integration and system adaptation process and make it faster and more cost-efficient [START_REF] Lange | Systematic reuse and platforming: Application examples for enhancing reuse with modelbased systems engineering methods in space systems development[END_REF][START_REF] Lange | A model-based systems engineering approach to design generic system platforms and manage system variants applied to mascot follow-on missions[END_REF][START_REF] Braukhane | Statistics and Evaluation of 30+ Concurrent Engineering Studies at DLR[END_REF]. In addition, the general use case of a small landing package piggy-backing on a larger main mission is very attractive and widely applicable in the context of planetary defense and small body exploration, making the platform approach, already known from Earth-orbiting missions, a feasible strategy. A strategically planned MASCOT-type lander platform with an ever-increasing portfolio of technology options will further enhance the applicability of the small lander concept to all kinds of missions. Several of the technologies specifically required to realize the radar mission scenario described above fall into this category. Other technologies such as advanced landing subsystems and new mobility concepts are also of interest and currently under development [START_REF] Lange | Exploring Small Bodies: Nanolander and -spacecraft options derived from the Mobile Asteroid Surface Scout[END_REF].
Conclusions
Direct measurements are mandatory to gain a deeper knowledge of the interior structure of NEOs. A radar package consisting of a monostatic high frequency radar and a bistatic low frequency radar is able to perform these direct measurements. Both radar systems provide a strong scientific return through the characterization of the asteroid's internal structure and heterogeneity. Whereas the LFR provides a tomography of the deep interior structure, the HFR maps the shallow subsurface layering and connects the surface measurements to the internal structure. In addition to this main objective, the radars can support other instruments by providing complementary data sets. The nanolander MASCOT2 demonstrates its flexibility by carrying the mobile part of the bistatic radar. It can carry instruments with a wide range of maturity levels using state-of-the-art design methodologies. As shown, a moderate redesign allows for long-term radar runs, in contrast to the original short-term operation scenario of MASCOT. The presented radar package and the MASCOT2 lander have been developed at phase A/B1 level in the frame of ESA's AIM mission study. Although the mission has not been confirmed and the next steps to establish such a mission are not clear, the modification of the MASCOT lander platform into a fixed but long-duration radar surface station demonstrates the large range of applications for small landing packages on small airless bodies [START_REF] Ulamec | Landing on Small Bodies: From the Rosetta Lander to MASCOT and beyond[END_REF]. The radar instrument package presented here has a high maturity and is of major interest for planetary defense as well as for NEO science.
Figure 2: Module stack of the HFR system prototype: DC/DC Module, Digital Module with FPGA, microcontroller and signal level converters, Low Power Module with synthesizer, receiver and switches for calibration, and High Power Module with power amplifier and preamplifier assemblies.
Figure 3: 3D model (left) and prototype (right) of the HFR ultra-broadband dual polarized antenna.
Figure 4: Simulated antenna pattern of HFR antenna system (E-plane).
Figure 5: HFR mono pass impulse response on Didymoon's surface map for a point target located at 20° latitude and 180° longitude (a). The impulse response power is shown by color mapping, in dB. The same impulse response presented in 3D on a sphere portion that represents the surface of Didymoon (b) and shown in 3D (c). Note that a clear ambiguity along the vertical axis remains in a mono pass. The color scale corresponds to a dynamic range of 100 dB and exaggerates signal distortions. This measurement is simulated, along an arc of orbit of 20°, considering an isotropic point target located on Didymoon's surface and taking into account propagation delay and geometrical losses. The simulation was done in the frequency domain using the instrument characteristics listed in this paper. A SAR processor, corresponding to a coherent summation after compensation of the propagation delay, processes the simulated measurements.
Figure 6: HFR impulse response with 30 passes for a point target located on Didymoon's surface at 30° latitude and 90° longitude. The HFR observation window is chosen so that it has a 30° incidence angle with the target. The color mapping, in dB, shows the impulse response power; (a) presents a view of the radial/along track plane while (b) presents a view of the radial/across track plane; (c) presents the same impulse response planes in 3D, including the tangent plane (across/along track). The color scale corresponds to a dynamic range of 100 dB. This dynamic range exaggerates signal distortions.
Figure 8: Bistatic (left) and monostatic (right) radar configuration, artist's view from CONSERT/Rosetta. From [1]. Credit: CGI/Rémy Rogez; shape model: Mattias Malmer CC BY SA 3.0, Image source: ESA/Rosetta/NAVCAM, ESA/Rosetta/MPS.
Figure 9: Block diagram of the LFR instrument, orbiter (top), lander (bottom).
Figure 10: Lander synchronization: effect on the measured signal taking into account the periodicity of the calculated signal.
Figure 11: Lander antennas: V-shaped dipole and secondary dipole antenna. MASCOT2 accommodation.
Figure 12: Antenna in tubular boom technology, general architecture with basic subassemblies: (1) structure, (2) tubular boom, (3) tubular boom guidance system, (4) drive and damping unit, (5) lock and release mechanism, (6) electrical connection.
Figure 13: LFR electronic box - housing, global view.
Figure 14: Block schematic of the LFR system architecture showing the electronic box including transmitter (Tx), receiver (Rx) and digital module.
Figure 15: Definition of the Didymoon reference system.
Figure 17: Simulated 3D LFR radiation pattern inside (lower hemisphere) and outside (upper hemisphere) the asteroid at 60 MHz in case of a flat surface, assuming a relative permittivity of 5.
Acknowledgement
Radar development has been supported by the CNES R&T program ("CONSERT Next Generation" study) and by the ESA General Studies Program (AIM Phase A). The High Frequency Radar inherits from WISDOM/ExoMars, funded by CNES and DLR. The Low Frequency Radar inherits from CONSERT/Rosetta, funded by CNES and DLR. The MASCOT2 study was funded and carried out with support of the DLR Bremen CEF team.
Figure 18: Simulated antenna patterns of the MASCOT2 antenna system above ground. Left: Φ=0° (perpendicular to the y-axis); right: Φ=90° (perpendicular to the x-axis).
Figure 19: Detailed view of the MASCOT2 platform showing the accommodation of the LFR, including E-box and antenna systems.
Table 1: Main characteristics and performance of the bistatic low frequency radar and the monostatic high frequency radar.
                          Bistatic radar (LFR)                      Monostatic radar (HFR)
                          Orbiter              Lander
Frequency (nominal)       50-70 MHz                                 300-800 MHz
Frequency (extended)      45-75 MHz                                 up to 3 GHz
Signal modulation         BPSK                                      Step frequency
Resolution                10-15 m (1D)                              1 m (3D)
Polarization              Circular (AIM)       Linear (MASCOT)      Tx: 1 circular; Rx: OC and SC
Tx power                  12 W                                      20 W
Pulse repetition          5 seconds                                 1 second (typical)
Sensitivity               Dynamic = 180 dB                          NEσ0 = -40 dB.m²/m²
Mass, electronics         920 g                920 g                830 g
Mass, antenna             470 g                230 + 100 g          1560 g
Mass, total w/o margin    1390 g               1250 g               2390 g
Power max / mean          50 W / 10 W          50 W / 10 W          137 W / 90 W
Typical data (Gbit)       1                    0.3                  300
Table 2: Main differences and commonalities of the proposed MASCOT2 lander.
Differing attribute       MASCOT                                           MASCOT2
Main science case         surface composition and physical properties mapper   internal structure by radar tomography
Landing site              restricted by thermal and communications reasons     restricted by measurement requirements
Target body diameter      890 m                                            170 m
Rotation period           7.6 h                                            11.9 h
Lifetime                  ~16 hours                                        >3 months
Deployment wrt S/C        sideways, 15° downwards                          not restricted
Communications            synergy with Minerva landers                     interoperability with AIM ISL (COPINS)
Lander mounting plane     15° angled "down"                                parallel to the carrier sidewall
Storage                   inside panel, in a pocket                        outside panel, flush
Mobility                  1 DOF                                            2 DOF
Localization              passive, by orbiter                              self-localization
Power                     primary battery only                             solar generator and rechargeable batteries
Thermal control           variable conductivity                            passive (MLI, heater)
Self-awareness            basic                                            extended sensor suite
Communication             VHF transceiver from JAXA                        S-band transceiver
Scientific payload        MARA, MASCam, MasMAG, MicrOmega                  MARA, MASCam, LFR, DACC, (MAG)
50,116
[ "177804", "175106", "736023", "1030094", "1339932", "968808", "968370", "777545", "19232" ]
[ "1051366", "96520", "137698", "137698", "541921", "137698", "1051366", "45733", "45733", "45733", "1051366", "1051366", "531471", "96520", "1042402", "96520", "541919", "96520", "541919", "1051366", "1051366", "520847", "96520", "541919", "44260", "152768", "152768", "44260", "307314", "481789" ]
01760925
en
[ "shs" ]
2024/03/05 22:32:13
2017
https://theses.hal.science/tel-01760925/file/59408_DEFEBVRE_2017_archivage.pdf
Keywords: work, employment, working conditions, retirement, general health, mental health, depression, anxiety, chronic diseases, childhood, endogeneity, instrumental variables, matching, panel methods, difference-in-differences, France
The university gives neither approval nor disapproval to the opinions expressed in theses; these opinions must be regarded as their authors' own.
As I write these words (a few days before submitting this manuscript), I fear that I may not manage to express my gratitude to all the people who made this adventure possible and allowed it to unfold the way it did. I therefore ask the reader to be understanding, and to consider that any omission on my part owes more to fatigue and nervousness than to ingratitude. First of all, I would particularly like to thank my Ph.D. supervisor, Thomas Barnay, who accompanied, supported and guided me throughout these four years. Let me make clear that it is not out of respect for custom that I place these thanks to my supervisor at the very top of the list, but because it is thanks to his extreme kindness, his commitment and his human qualities that he was able to turn me into a young researcher, and, I hope, one with integrity. Nothing that happened during these four years (and even a little before) would have been possible without the trust he placed in me, from the Master 2 onwards. When I did not (or no longer) believe in it, Thomas was always there to take the opposite view and push me forward. Thank you so much for all of that, as well as for the table-tennis games! I would then like to thank Maarten Lindeboom and Judit Vall Castelló for agreeing to be members of the committee. It will be a pleasure and an honour to be meeting you at the Ph.D. Defence. More particularly, I thank Eve Caroli and Emmanuel Duguet, not only for doing me the honour of agreeing to be members of my committee, but also for taking part in my mock thesis defence. I wholeheartedly hope that I have done justice to your work and your comments in the present version of the manuscript. My Ph.D. work took place, for most of the time, at the Érudite laboratory at Université Paris-Est Créteil, and it goes without saying that many of its members also contributed to this work. From my arrival, I was warmly welcomed by the long-standing Ph.D. students who, even if they are no longer at Upec today, are still very present in my memory and no doubt represent a certain golden age of the laboratory. I thus thank in particular Ali, Igor, Haithem, Issiaka and Majda for all the laughs and the (sometimes serious) discussions at the table during the early days of the thesis. More recently, it is with Sylvain, Redha, Naïla and Adrien that the good atmosphere continued. Sylvain, the (political and societal) discussions on the terrace were a real pleasure. Redha, although our tastes in sports and cars may sometimes differ, having had you as a student and above all having you as a colleague helped me take my mind off things at the end of the thesis! Naïla, although we did not really have time to talk, I wish you all the success you deserve for your internship, and of course for your thesis if you too embark on the adventure.
Adrien, it is true that, unfortunately, since the episode in Aussois we have struggled to meet up again, but the jokes about a certain well-known public figure still make me laugh today! Many of the laboratory's faculty members also lent me a hand in carrying out my thesis. In particular, Pierre Blanchard (thank you again for your help in launching Chapter 2!), Thibault Brodaty and Emmanuel Duguet for the more econometric questions, Ferhat Mihoubi (for your kindness and understanding, notably on conference matters) and Arnold Vialfont (the laughs on the terrace!) all made the thesis process easier. Of course, the participants in the Ph.D. (or Doctoral, nobody really knows any more) Seminar in Health Economics (or SM(T/D)ES), namely Karine Constant, Patrick Domingues, François Legendre and Yann Videau, also contributed greatly to the scientific progress of the manuscript through their comments and their regular help all along the way. The Érudite faculty members always made themselves available and kind to me (sparing no effort!), and calling on them was always pleasant and very enriching. During my Ph.D., I had the opportunity to do a research stay at The Manchester Centre for Health Economics. Matt, Luke, Søren, Shing, Phil, Cheryl, Julius, Rachel, Pipa, Kate, Alex, Laura, Tom, Niall, Beth and the others, it was a blast! Thank you so much for being so welcoming, kind and helpful. Thanks to you, I was able to get very nice feedback on my work while having a lot of fun at the same time. I hope my rather shy and anxious nature was not too much of a pain to deal with. I really hope I will have the opportunity to see you guys again as soon as possible! Before the Ph.D., I spent six months as a research intern at the Bureau État de Santé de la Population (BESP) at the Drees, and it goes without saying that many people there, too, gave me a taste for research. First of all, Lucie Gonzalez, who gave me my chance and supervised me on site; then all the members of the BESP (notably Marc and Nicolas) and even of other Bureaus (BDSRAM in particular) made this internship extremely interesting and very pleasant to experience! Even before that, it was thanks to an internship (in my first year of high school!) at the Institut National de Recherche et de Sécurité that my vocation for the Health and Work theme was born. I therefore particularly thank Pierre Danière and Roger Pardonnet for agreeing to take me under their wing and for allowing me to discover so many things that opened my eyes. If I wrote a thesis on this theme, notably in terms of working conditions, it is thanks to the fire that they and the members of the Centre Interrégional de Mesures Physiques de l'Est (Cimpe) kindled in me on that occasion. I also thank all the people I met during the thesis, at conferences, workshops or other events, for their feedback, comments and suggestions. One does not necessarily realize how much a quick exchange can help save an enormous amount of time on a tricky point! There would be too many people to name, so I will settle for these somewhat general thanks… I must, of course, also do justice to my friends.
Although I generally give little news, they have always remained by my side (sometimes at a distance) and have put up with me and my character for many years. Alexandre (my best friend from the very beginning), Céline, Jean-Thomas, Willy and other friends dating back to high school, Julie with whom I shared part of the journey from the Master 1 onwards… Although we do not have many opportunities to see each other, you have all remained very good friends. The "hard core", Armand, Mickael and Félicien: you never gave up, and you are thus part of the circle of the most unfailing and sincere friends one can have. Armand and Mickael, I miss you and our endless discussions over (several) drinks enormously. Félicien, between the evenings in Lille, the road trip in England and the rounds in Belgium, so many good times! Dorian, we travelled part of the road together and I greatly appreciated and benefited from those moments of discussion with you. Sandrine, I feel that we were able to share a great deal during these thesis years. You were always there to talk, to help me (and to have a few drinks). I enjoyed every moment spent with you, and going through the ordeals of the thesis was much easier thanks to you… I hope our paths will keep crossing for as long as possible. Thank you all, and for everything! Finally, I will end by thanking my father and my mother who, at every moment, were there to support and help me, especially in difficult times. I owe them my success, and whatever good there may be in this work is owed to them. I therefore dedicate this thesis to them. I love you. To my father. To my mother.
List of figures
List of tables
Work evolution and health consequences
Moving work
The face of employment in Europe is changing. Stock-wise and on the extensive margin, employment rates in EU28 reached 70.1% in 2015, nearing the pre-crisis levels of 2008 (Eurostat). These employment rates show important variations between countries (from 54.9% in Greece to 80.5% in Sweden). While men's employment rates remained relatively stable between 2005 and 2015 (75.9%), women's increased sizeably (60.0% in 2005, 64.3% in 2015), and even though the employment rate of older workers is still rather low (53.3%), it has also gone up considerably since 2005 (42.2%). Yet an important education-related gradient still exists, as only 52.6% of the less educated population is employed, whereas employment rates reach 82.7% among the more educated. The results for France are slightly below the average of developed countries, as 69.5% of the population aged 20-64 is in employment (73.2% of men, 66% of women), with a particularly low employment level among older workers (48.7%) in 2015. On the intensive margin, weekly working times in Europe have followed a slight and steady downward trend since 2005, going from 41.9 hours to 41.4 hours in 2015, with rather comparable figures across countries. France ranks at 40.4 hours a week. What is also noticeable is that workers' careers appear to be more and more fragmented. While the proportion of workers employed on temporary contracts remained broadly constant over the last decade in Europe (14.1% in total, with 13.8% of men and 14.5% of women, and 16.0% in France), resorting to part-time work is becoming more and more common: 17.5% of workers worked part-time in 2005, against almost one fifth of them in 2015 (19.6%, and 18.4% in France).
The differences between sexes are very important: in 2015, only 8.9% of men worked part-time, whereas 32.1% of women did. Almost 4% of EU28 workers resort to a second job (from 0.5% in Bulgaria to 9.0% in Sweden, and 4.3% in France). At the same time, unemployment rates also increased in Europe, going from 7% of the active population in 2007 (before the crisis) to 9.4% in 2015, and ranging from 4.6% in Germany to 24.9% in Greece (10.4% in France). Long-term unemployment, understood as the share of individuals actively seeking a job for at least a year, also increased drastically during this period, going from 3.0% in 2007 to 4.5% in 2015 (1.6% in Sweden, 18.2% in Greece and 4.3% in France).
Intensifying work
On top of these more fragmented career paths, European workers face growing pressures at work. Notably, [START_REF] Greenan | Has the quality of working life improved in the EU-15 between 1995 and 2005?[END_REF] indicate that, between 1995 and 2005, European employees faced a degradation of their working-life quality. There has been a growing interest in the literature in the health-related consequences of detrimental working conditions and their evolution. In a world where the development of new technologies, management methods, activity controls (quality standards, processes rationalization, etc.) as well as contacts with the public confront employees with different and increased work pressures [START_REF] Askenazy | Innovative Work Practices, Information Technologies, and Working Conditions: Evidence for France: Working Conditions in France[END_REF], the question of working conditions indeed becomes even more acute. While the physical strains of work have been studied for a long time, psychosocial risk factors only received attention later on. Notably, the seminal Job Demand - Job Control model of [START_REF] Karasek | Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign[END_REF] and its variations [START_REF] Johnson | Combined effects of job strain and social isolation on cardiovascular disease morbidity and mortality in a random sample of the Swedish male working population[END_REF][START_REF] Theorell | Current issues relating to psychosocial job strain and cardiovascular disease research[END_REF] introduced a theoretical approach for these more subjective strains. Other models later included the notion of reward as a modulator, with the Effort-Reward Imbalance model [START_REF] Siegrist | Adverse health effects of high-effort/low-reward conditions[END_REF]. Whatever the indicators retained for strenuous working conditions, their role in health status seems consensual [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. Exposures to detrimental working conditions have themselves changed over time, beyond possible evolutions in workers' perceptions of their own conditions at work [START_REF] Gollac | Donner un sens aux données, l'exemple des enquêtes statistiques sur les conditions de travail (No. 3)[END_REF][START_REF] Gollac | Les conditions de travail[END_REF]. While exposures to physical strains have slightly declined over the years, psychosocial strains have grown massively within the same time span. Exposures to physical risks as a whole have remained almost constant since 1991 (Eurofound, 2012).
Some risks declined in magnitude, while others increased: tiring and painful positions (46% of the workforce) and repetitive hand or arm movements for instance (the latter being the most prevalent risk of all, with 63% of workers exposed). Men are the most exposed to these risks. At the same time, subjective measures of work intensity have increased overall over the past 20 years. 62% of workers reported tight deadlines and 59% high-speed work, with workers potentially having fewer opportunities to alter the pace of their work. The level of one's control over one's job also seems to evolve in a concerning way: 37% of workers report not being able to choose their method of work, 34% report not being able to change the order of their tasks and 30% not being able to change their speed of work, among other indicators (Eurofound, 2012). The situation in France also appeared to deteriorate between 2006 and 2010, gradually linking high levels of physical strains with low levels of job autonomy: increases in exposures to high work intensity, emotional demands, lack of autonomy, tensions and especially lack of recognition (as measured in the Santé et Itinéraire Professionnel 2006 and 2010 surveys by [START_REF] Fontaine | L'exposition des travailleurs aux risques psychosociaux a-t-elle augmenté pendant la crise économique de 2008 ?[END_REF]).
Everlasting work
These evolutions are all the more alarming given that we work longer than we used to, and that we are going to work even longer in the future. Three major factors explain this situation. First, we live longer. Eurostat projections for the evolution of life expectancy in Europe indicate that, between 2013 and 2060, life expectancy at age 65 will increase by 4.7 years in men and 4.5 years in women (European Commission, 2014). The regularly increasing life expectancy consequently comes with an increase in the retirement/work-life imbalance, inducing financing issues. Second, despite the objective set at the Stockholm European Council to achieve an employment rate of 50% for those aged 55-64 by 2010, the European average was still only 47.4% in 2011 [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF], and only reached 53.3% in 2015 (Eurostat 2016). These particularly low employment rates for senior workers can be explained by a number of factors (economic growth not producing enough new jobs, poor knowledge of existing retirement frameworks, unemployment insurance being too generous, insufficient training at work for older workers, etc.). Notably, even though workers may have the capacity to stay in employment longer [START_REF] García-Gómez | Health Capacity to Work at Older Ages: Evidence from Spain (No. w21973)[END_REF], these low rates can also be explained by the role of strenuous careers and degraded Health Capital (health status seen as a capital stock producing life-time in good health - [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF]), increasing the risks of job loss or sick-leave take-ups [START_REF] Blanchet | Aspiration à la retraite, santé et satisfaction au travail : une comparaison européenne[END_REF]. The obvious consequence is that potentially too few older workers contribute to the pension system in comparison to the number of recipients. Hence, because of these first two points, the third factor is that pay-as-you-go systems are more often than not facing growing deficits.
To counter this phenomenon, European governments have progressively raised retirement ages and/or increased the contribution period required to access full pension rights. In France, increases in the contribution period required to obtain full-rate pensions (laws of July 1993 and August 2003) were followed by a gradual increase of the retirement age, from 60 years old for the generation born before July 1st, 1951 to 62 years old for those born on or after January 1st, 1955 (law of November 2010). The aim of these reforms was to compensate for longer life spans, ensuring an intergenerational balance between working and retirement lives, allowing "fair treatment with regard to the duration of retirement and the amount of pensions" (Article L.111-2-1 of the French Social Security Code). As a result of these reforms, the relationship between working lives and retirement has remained relatively constant for generations born between 1943 and 1990 [START_REF] Aubert | Durée passée en carrière et durée de vie en retraite : quel partage des gains d'espérance de vie ? Document de Travail Insee[END_REF], inducing longer work lives.
Affordable work: what are the health consequences?
Setting aside possible exposures to detrimental conditions faced by individuals at work, being in employment has overall favourable effects on health status. Notably, being in employment, among various social roles (such as being in a relationship or being a parent), is found to be correlated with a lower prevalence of anxiety disorders and depressive episodes [START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF], beyond its obvious positive role for wealth and well-being. This link between health (especially mental health) and employment status is confirmed by more econometrically robust analyses, notably by [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. This relationship appears to differ depending on sex, as it seems stronger in men. This virtuous relationship between health status and employment is corroborated by another part of the literature, focusing on job loss. While being employed seems to protect one's health capital, being unemployed is associated with more prevalent mental health disorders, especially in men again [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF]. Losing one's job is logically also associated with poorer levels of well-being [START_REF] Clark | Lags And Leads in Life Satisfaction: a Test of the Baseline Hypothesis*[END_REF], even more so considering that the first consequences may be observed before the lay-off actually happens [START_REF] Caroli | Does job insecurity deteriorate health?: Does job insecurity deteriorate health?[END_REF]. In any case, massive and potentially recurring unemployment periods are notorious for their adverse effects on health status [START_REF] Böckerman | Unemployment and self-assessed health: evidence from panel data[END_REF][START_REF] Haan | Dynamics of health and labor market risks[END_REF][START_REF] Kalwij | Health and labour force participation of older people in Europe: what do objective health indicators add to the analysis?[END_REF]. Retirement also comes with likely negative health consequences [START_REF] Coe | Retirement effects on health in Europe[END_REF].
Nevertheless, even if health status seems to benefit from employment overall, exposures to detrimental conditions at work are a factor of health capital deterioration. In fact, close to a third of EU27 employees declare that work affects their health status. Among these, 25% declared a detrimental impact while only 7% reported a positive role (Eurofound, 2012). Thus, in a Eurofound (2005) report on health risks in relation to physically demanding jobs, the results of two studies (one in Austria and the other in Switzerland) were used to identify the deleterious effects of exposures on health status. In Austria, 62% of retirements in the construction sector are explained by work-related disabilities. In Switzerland, significant disparities in mortality rates exist depending on the activity sector. Using French data from the energy industry, [START_REF] Platts | Physical occupational exposures and health expectancies in a French occupational cohort[END_REF] show that workers who have faced physically demanding working conditions have a shorter life expectancy. In addition, [START_REF] Goh | Exposure To Harmful Workplace Practices Could Account For Inequality In Life Spans Across Different Demographic Groups[END_REF] determine that 10% to 38% of disparities in life expectancy between cohorts can be attributed to exposures to poor working conditions.
What are the options?
Because careers are more fragmented than they used to be (see Section 1.1), with at the same time increasing and more diversified pressures at work (Section 1.2), and because careers tend to be longer (Section 1.3), the health consequences are or will be even more perceptible (Section 1.4). From the standpoint of policy-makers, all of this comes as new challenges, the objective being to ensure that employment in general and the work life in particular remain sustainable (i.e. that workers are able to remain in their job throughout their career). Many public policies hence target this objective. In Europe, the European Union is competent in dealing with health and safety matters, which in turn is one of the main fields of European policies. The Treaty on the Functioning of the European Union allows the implementation, by means of directives, of minimum requirements regarding the "improvement of the working environment to protect workers' health and safety". Notably, employers are responsible for adapting the workplace to the workers' needs in terms of equipment and production methods, as set out in Directive 89/391/EEC [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. In France, the legislative approach is mostly based on a curative logic. As far as the consideration of work strains is concerned, a reform in 2003 explicitly introduced the notion of Pénibilité (work drudgery) through its Article 12 [START_REF] Struillou | Pénibilité et Retraite[END_REF]. This reform failed because of the difficulty of defining this concept and of determining responsibilities. A reform in 2010 followed, creating early retirement schemes related to work drudgery, with financial incentives. In 2013, 3,500 workers benefited from early retirement because of exposures to detrimental working conditions inducing permanent disabilities. In early 2014, a personal account for the prevention of work drudgery was created, allowing workers to accumulate points related to different types of exposures during their career (focusing exclusively on physical strains).
Upon reaching specific thresholds, workers become eligible for training in order to change jobs, for part-time work paid at the full rate, or for early retirement schemes. According to the Dares (Direction de l'animation de la recherche, des études et des statistiques - French ministry for Labour Affairs), 18.2% of employees could be affected by exposure to these factors (Sumer Survey 2010). Whatever the scheme considered (account for work drudgery, dedicated early retirement schemes and/or compensation schemes for occupational accidents and illnesses), the curative logic of ex post compensation has for a long time prevailed almost exclusively. However, more recent plans highlight the importance of prevention in the relationship between health and work. In France, three successive Health and Work Plans (Plan Santé Travail) have been instigated since 2005, with the latest (Plan Santé Travail 2016-2020) emphasising primary prevention and work-life quality. The results of these successive plans are mixed. However, other strategies coexist, mostly focusing on reducing illness-induced inequalities on the labour market (see the Troisième Plan Cancer for an example on cancer patients), on an easier insertion on the labour market of workers suffering from mental health disorders and greater support to help them remain in their job (Plan Psychiatrie et santé mentale [2011][2012][2013][2014][2015]), or on any other handicap (notably, a law passed in 1987 and reinforced in 2005 binds employers from both the public and private sectors to hire a minimum of 6% of disabled workers in their workforce) [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF].
Work-Health influences: the importance of the individual biography
On the side of theoretical and empirical research, the relationships between health, work and employment are particularly difficult to disentangle, because all parts of the health and work cycles are linked with each other, and because their initial determinants arise very early in one's life (a summary of these interrelationships can be found in Figure I, which also highlights the specific interactions that will be studied in this Ph.D. Dissertation). First, studying such relationships is rather demanding in terms of available data. Few international surveys or administrative databases provide researchers with information on professional paths, employment status and working conditions as well as health status and individual characteristics, while also allowing temporal analyses. This scarcity of available data is even more pronounced in the French case. The need for temporal data (panel data, cohorts, etc.) is particularly important, as the relationships between health and professional paths are intertwined, with the weight of past experiences or shocks having potentially sizeable consequences on the decisions and on the condition of an individual at any given point in time. Then, the first determinants of future health and professional cycles can be found as early as the childhood period. Beyond elements happening in utero (described in the latency approach - Barker, 1995), significant life events or health conditions occurring during the early life of individuals are able to explain, at least partly, later outcomes for health and employment.
For instance, poor health levels or the presence of a disability during childhood are found to induce detrimental consequences on mental health at older ages as well as the appearance of chronic diseases [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. The consequences are also perceptible on career paths. Because healthier individuals are usually preferred at work, especially in demanding jobs, the initial health capital is bound to play a major role in employability levels, at least during the first part of one's career (see the Healthy Worker Effect) [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. Health status is not the only relevant determinant. Elements related to the socioeconomic background during childhood have also been the subject of several studies in the empirical literature. For instance, [START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF] demonstrated that one's environment during childhood impacts the likelihood of facing occupational accidents and disabilities later on. Health consequences can also be expected in individuals who shortened their initial studies [START_REF] Garrouste | The Lasting Health Impact of Leaving School in a Bad Economy: Britons in the 1970s Recession[END_REF]. When unaccounted for, early conditions may thus very well generate methodological difficulties when assessing the impact of work on health, notably because of selection effects. These initial circumstances indeed carry consequences over to the next part of one's life: the professional career and contemporary health status. Individuals facing poor conditions during childhood are then potentially more exposed to harder circumstances during their work life, for instance lower levels of employability, while at the same time facing unemployment early in the career is found to generate ill-health. Low initial levels of Human Capital (intended as the stock of knowledge, habits, social and personality attributes that contributes to one's capacity to produce - [START_REF] Becker | Human capital: A theoretical and empirical analysis, with special reference to education[END_REF]), including health capital, impact all elements related to work and employment outcomes, ranging from increased exposures to certain types of detrimental working conditions (notably physical exposures among the lower-educated) to greater probabilities of being employed part-time or on temporary contracts and, overall, more fragmented careers. Because of that, the health status of these originally disadvantaged individuals is likely to deteriorate even further. It is also true that contemporary health determines current employment outcomes, causing particularly detrimental vicious circles and inducing reverse causality issues. During this professionally active part of one's life, other shocks may happen. Illnesses, the death of a close relative or partner, or marital separations, for instance, have a negative impact on health status [START_REF] Dalgard | Negative life events, social support and gender difference in depression: A multinational community survey with data from the ODIN study[END_REF][START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF].
Financial difficulties, whether current or past, are also often associated with the onset of common mental disorders [START_REF] Weich | Material Standard of Living, Social Class, and the Prevalence of the Common Mental Disorders in Great Britain[END_REF]. When these shocks are unobserved, disentangling the role of the career on health status from other shocks appears particularly tricky. When considering the last part of one's career, from the retirement decision onwards, the accumulation of all these circumstances throughout an individual's life cycle reinforces potential selection effects [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. The decision to retire, because it is partly based on health status and the nature of the professional career, can possibly be massively altered, as much as later levels of human capital. Retirees who faced difficult situations at work in terms of employment or working conditions are more likely to be in worse health than others [START_REF] Coe | Retirement effects on health in Europe[END_REF]. Hence, originally because of poor initial life conditions (in terms of health or socioeconomic status), individuals may face radically changed professional and health paths. Moreover, at any time, elements of health status, employment or working conditions can also positively or negatively influence the rest of the life cycle, bearing repercussions until its end.

Research questions

Health-Work causality: theoretical background

The theoretical relationships between work and health status can be analysed under the double expertise of health and labour economics. The initial model of [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF] proposes an extension of the Human Capital theory developed by [START_REF] Becker | Human capital: A theoretical and empirical analysis, with special reference to education[END_REF] by introducing the concept of Health Capital. Each individual possesses a certain level of health capital at birth. Health status, originally regarded as exogenous in the "demand for medical care" model by Newhouse and Phelps (1974), is supposed to be endogenous and can be both demanded (through demands for care) and produced by consumers (concept of investment in health). Individuals decide on the level of health that maximizes their utility and make trade-offs between time spent in good and poor health. In a later model of the demand for health, health capital is seen as an element allowing the production of healthy time [START_REF] Grossman | The Human Capital Model of the Demand for Health (No. w7078[END_REF]. This model offers a possibility for intertemporal analysis to study health both in terms of level and depreciation rate over the life cycle [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. While the depreciation rate of health capital mostly reflects a biological process, health care consumption, health investment and labour market characteristics also influence this rate. The time devoted to work can increase (in the case of demanding work) or decrease (in the case of a high-quality work life) the rate of depreciation of health capital.
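One stylised way to write the mechanism described above is the following (an illustrative formulation only, not the exact equations of Grossman, 1972, or of its extensions; the notation is introduced here for exposition):

$H_{t+1} = H_t (1 - \delta_t) + I_t, \qquad \delta_t = \delta(\text{age}_t, W_t), \qquad \partial \delta_t / \partial W_t > 0$ for strenuous work,

where $H_t$ is the stock of health capital at age $t$, $I_t$ is gross investment in health (care, prevention) and $W_t$ is a measure of exposure to demanding working conditions; a high-quality work life would, on the contrary, lower the depreciation rate $\delta_t$.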
Notably, in the case of an individual facing a very demanding job, the depreciation rate of his/her health capital over the life cycle is progressively rising, inducing an increasing price (or shadow price as it is hardly measurable) of health, just like for the ageing process. It is particularly the case in the less educated workers, who constitute a less efficient health-producing workforce [START_REF] Grossman | The Human Capital Model of the Demand for Health (No. w7078[END_REF]. Contradictory effects can then occur simultaneously as work can also be beneficial to health status (in comparison to non-employment), but the drudgery induced by certain working conditions can accelerate its deterioration [START_REF] Strauss | Health, Nutrition and Economic Development[END_REF]. In this context, exposure to past working conditions may partly explain the differential in measured health status. Notably, the differences in wages between equally productive individuals can be explained by differences in the difficulty of work-related tasks, meaning workers with poorer working conditions are paid more than others in a perfectly competitive environment [START_REF] Rosen | Hedonic Prices and Implicit Markets: Product Differentiation in Pure Competition[END_REF]. In this framework, it is possible to imagine that health capital and wealth stock are substitutable, hence workers using their health in exchange for income [START_REF] Muurinen | The economic analysis of inequalities in health[END_REF]. Individuals can therefore decide, depending on their utility function, to substitute part of their health capital in a more remunerative work, due to harmful exposures. However, despite the hypothesis retained by [START_REF] Muurinen | Demand for health[END_REF] in an extension of [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF], working conditions are probably not exogenous. Several selection effects may exist, both in entering the labour market and in the capacity to occupy and remain in strenuous jobs for longer periods, thereby discrediting the hypothesis of exogeneity. These effects refer to characteristics of both the labour supply and demand. First, it can be assumed that the initial human capital (initial health status and level of education) of future workers will determine, in part, their entry conditions into the labour market but also the ability to "choose" a supposedly more or less strenuous job. Then, employers can also be the source of selection effects, based on criteria related to employees' health and their adaptability to demanding positions. Part of the empirical literature relying notably on testing methods testify of the existence of discriminations towards disabled individuals, including discriminations in employment [START_REF] Bouvier | Les personnes ayant des problèmes de santé ou de handicap sont plus nombreuses que les autres à faire part de comportements stigmatisants[END_REF]. Thus, whether for health or for work, the hypothesis of exogeneity does not seem to be acceptable. Health-Work causality: empirical resolution If this exogeneity hypothesis does not seem trivial in a theoretical analysis, it is even more the case in an empirical framework. First, selection biases are very common in the study of Health-Work relationships. For instance, one's health status may be determined by his/her former levels of human capital or past exposures to strenuous careers. 
Another example would be that the choice of a job is also made according to several characteristics, including constitutive elements of the initial human capital. Individuals may choose their job according to their own preferences, but also based on their education, health condition or childhood background. Thus, when unaccounted for, this endogenous selection may result in biased estimates in empirical studies. In particular, because healthier individuals may tend to prefer (self-selection) or to be preferred (discrimination) for more demanding jobs [START_REF] Barnay | The Impact of a Disability on Labour Market Status: A Comparison of the Public and Private Sectors[END_REF], researchers could face an overrepresentation of healthy yet exposed workers in their samples. In this case, the estimations are likely to be biased downwards because of individuals being both healthier and exposed to demanding jobs being overrepresented in the sample (inducing a Healthy Worker Effect - [START_REF] Haan | Dynamics of health and labor market risks[END_REF]. On the other hand, workers with lesser levels of initial health capital may benefit from fewer opportunities on the labour market and thus be restricted to the toughest jobs, leading in that case to an overrepresentation of initially unhealthy and exposed individuals, resulting in an upward bias of the estimates. The Health-Work relationships are also more often than not plagued with reverse causality biases. The link between health status and employment is indeed bidirectional. When studying the role of a given health condition on one's capacity to be in employment for instance, it is quite easily conceivable that employment status is also able to partly determine current health status. A lot of empirical studies face this particular issue (see [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF] for an example on mental health). In particular, being unemployed may impair individuals' mental health [START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF]. On the other hand, studying the role of employment on health status also suffers from this very same bias. In the literature, the causal role of retirement on health status has long been plagued with reverse causality, inducing that individuals with poorer levels of health capital were the ones to retire earlier. Again, most recent empirical works acknowledged this possibility [START_REF] Coe | Retirement effects on health in Europe[END_REF]. The omission of variables leads to unobserved heterogeneity, which is also potentially a source of endogeneity when measuring such relationships. Some information is very rarely available on survey or administrative data, because of the difficulty to observe or quantify it. 
Among numerous others, family background or personality traits [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF], involvement and motivations [START_REF] Nelson | Survival of the Fittest: Impact of Mental Illness on Employment Duration[END_REF], risky health-related behaviours, subjective life expectancy, risk aversion preferences or disutility at work [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF] are mostly unobserved, thus omitted in most studies. Yet, these factors, remaining unobservable may therefore act as confounders, or as endogeneity sources when correlated with both the error term and observable characteristics. These unobserved individual or time-dependant heterogeneity sources may hence result in biased estimations [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF]. Finally, measurement errors or declarative biases can also be highlighted. When working on sometimes sensitive data like health-related matters or risky behaviours as well as some difficult work situations, individuals may be inclined to alter their declarative behaviours. For instance, individuals may alter their health status declarations in order to rationalize their choices on the labour market in front of the interviewer [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. Also, the nonparticipation to the labour market may be justified ex-post by the declaration of a worse health status. [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF] and [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF], showed that economic incentives are likely to distort health status declarations. There may also be declarative social heterogeneity in terms of health status, specifically related to sex and age [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. It is often argued that men have a tendency to under declare their health condition when it is the contrary for women. Older individuals tend to consider their own health status relatively to their age, hence often overestimating their health condition. Research questions and motivation Do common mental health impairments (depression and anxiety) impact workers' ability to remain in employment (Chapter 1)? -Studies on the impact of mental health impairments on employment outcomes are numerous in the empirical literature, at an international level. This literature is diverse in its measurement of mental health: when many studies focus on heavy mental disorders such as psychoses or schizophrenia [START_REF] Greve | Useful beautiful minds-An analysis of the relationship between schizophrenia and employment[END_REF], a growing part of this literature is based on more common, less disabling disorders such as stress, anxiety or depression. 
In more recent years, this empirical literature has focused on handling the inherent biases linked to the endogeneity of mental health indicators as well as declarative biases [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF][START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF] in the study of the capacity of individuals suffering from mental health problems to find a job or to sustain their productivity levels. In particular, the relationship between mental health and employment appears to be bidirectional [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF][START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF], and unobserved characteristics such as risk preferences, workers' involvement at work, personality traits, family background or risky behaviours are likely to induce biased estimates of the effect of mental health on employment [START_REF] Nelson | Survival of the Fittest: Impact of Mental Illness on Employment Duration[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. In the economics literature accounting for these biases, it is found that mental health impairments do impact individuals' capacity to find a job. [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF], [START_REF] Chang | Mental health and employment of the elderly in Taiwan: a simultaneous equation approach[END_REF] and [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF] all find that individuals suffering from common mental health disorders are less likely to be in employment than others. This effect is found to vary among different groups, according to age [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] and more importantly to sex, with mixed evidence: [START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF] and [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] find a stronger effect on men's employment outcomes, whereas [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF] find a stronger effect on women's. Yet this literature, while mostly focusing on one's capacity to find a job, provides little evidence on the role of mental health conditions in individuals already in employment, that is, on their capacity to keep their job. The specific role of physical health status is also unaccounted for in most studies, even though it may act as a confounding factor when analysing the specific effect of mental health on employment outcomes. Thus, the first research question of this Ph.D. Dissertation will be to understand the role of common mental impairments in the ability to remain in employment.

Do varying levels of exposure to detrimental physical and psychosocial working conditions differently impact health status (Chapter 2)?
-The role of working conditions on workers' health status has received considerable attention in the scientific literature, when it is not as much the case in the economic literature because of the biases it faces. First, the choice of a job by an individual is not made at random [START_REF] Cottini | Mental health and working conditions in Europe[END_REF], but the reasons and consequences of this selection bias are potentially contradictory. Healthier individuals may indeed prefer or be preferred for more arduous jobs, but it is also possible to imagine that individuals with a lesser initial health capital may be restricted to the toughest jobs. Then, unobserved characteristics (individual preferences, risk aversion behaviours, shocks, crises) may also induce biased estimates [START_REF] Bassanini | Is Work Bad for Health? The Role of Constraint versus Choice[END_REF]. Because of the lack of panel data linking both working conditions and health status indicators on longer periods, few papers actually dealt with these methodological difficulties. The economic literature generally finds strong links between exposures to detrimental working conditions and poorer health conditions. Specifically, physical strains like heavy loads, night work, repetitive work [START_REF] Case | Broken Down by Work and Sex: How Our Health Declines[END_REF][START_REF] Choo | Wearing Out -The Decline of Health[END_REF][START_REF] Debrand | Working Conditions and Health of European Older Workers[END_REF][START_REF] Ose | Working conditions, compensation and absenteeism[END_REF] as well as environmental exposures such as exposures to toxic or hazardous materials, extreme temperatures [START_REF] Datta Gupta | Work environment satisfaction and employee health: panel evidence from Denmark, France and Spain, 1994-2001[END_REF] and psychosocial risk factors like Job strain and social isolation do impact a variety of physical and mental health indicators [START_REF] Cohidon | Working conditions and depressive symptoms in the 2003 decennial health survey: the role of the occupational category[END_REF][START_REF] Cottini | Mental health and working conditions in Europe[END_REF][START_REF] De Jonge | Job strain, effort-reward imbalance and employee well-being: a large-scale cross-sectional study[END_REF]. This average instantaneous effect of exposures has been decomposed by [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF] in order to account for chronic exposures, and notably by the psychosocial literature in general to account for simultaneous exposures. More often than not, this literature is plagued with inherent issues coming from selection biases into employment and individual and temporal unobserved heterogeneity. On top of that, no study accounts for cumulative effects of strains due to both potentially simultaneous and chronic exposures, nor is the possibility of delayed effects on health status accounted for. The second research question is dedicated to the heterogeneous influence of varying levels of exposures (in terms of chronic or simultaneous exposures) to detrimental physical and psychosocial working conditions on health status. What is the effect of retirement on general and mental health status in France (Chapter 3)? -Much has been said about the role of retirement on health conditions at the international level [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. 
A big proportion of the studies in economics accounts for the endogeneity biases related to reverse causality (health status determines the decision to retire or not -García-Gómez, 2011, or the pace of this decision - [START_REF] Alavinia | Unemployment and retirement and ill-health: a crosssectional analysis across European countries[END_REF][START_REF] Jones | Sick of work or too sick to work? Evidence on selfreported health shocks and early retirement from the BHPS[END_REF], unobserved heterogeneity and the specific role of ageing. The overall effect of retirement on health status differs greatly, depending on the outcome chosen. When the decision to retire appears beneficial to one's self-assessed health status and mental health indicators such as anxiety and depression [START_REF] Blake | Collateral effects of a pension reform in France[END_REF][START_REF] Coe | Retirement effects on health in Europe[END_REF][START_REF] Grip | Shattered Dreams: The Effects of Changing the Pension System Late in the Game*: MENTAL HEALTH EFFECTS OF A PENSION REFORM[END_REF][START_REF] Insler | The Health Consequences of Retirement[END_REF][START_REF] Neuman | Quit Your Job and Get Healthier? The Effect of Retirement on Health[END_REF], it seems to be the contrary for other mental health conditions, such as cognitive abilities [START_REF] Behncke | Does retirement trigger ill health?[END_REF][START_REF] Bonsang | Does retirement affect cognitive functioning[END_REF][START_REF] Dave | The effects of retirement on physical and mental health outcomes[END_REF][START_REF] Rohwedder | Mental Retirement[END_REF]. The reasons of the beneficial health effects of retirement have been studied more recently, notably by [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF], showing that retirement had a positive effect on being a non-smoker, a range of social and physical activities. Yet, the literature faces difficulties to accurately account for the nature of past professional careers of retirees, when it appears as one of the most important determinant of both the decision to retire and health status [START_REF] Coe | Retirement effects on health in Europe[END_REF]. It is indeed very likely that individuals relieved from arduous jobs will face the greatest improvements when it comes to their health condition after retirement. Generally speaking, single studies also rarely assess both potential heterogeneity sources and mechanisms simultaneously. This is even more the case for the French situation, where the literature on retirement and its impact on health status is very scarce. The third research question hence refers to the heterogeneous effect of retirement on general and mental health status in France. Outline My Ph.D. Dissertation relies on the use of a French panel dataset: the French Health and Professional Path survey ("Santé et Itinéraire Professionnel" -Sip). This survey was designed jointly by the French Ministries in charge of Healthcare and Labour. The panel is composed of two waves (one in 2006 and another one in 2010). Two questionnaires are proposed: the first one is administered directly by an interviewer and investigates individual characteristics, health and employment statuses. The second one is self-administered and focuses on more sensitive information such as health-related risky behaviours (weight, alcohol and tobacco consumption). 
Overall, more than 13,000 individuals are interviewed in 2006 and 11,000 of them again in 2010, making this panel survey representative of the French population. The main strength of this survey, on top of the wealth of individual data, is that it also contains a lifegrid allowing the reconstruction of a biography of individuals' lives: childhood, education, health, career and working conditions as well as major life events, from the beginning of one's life to the date of the survey. This allows for a detailed description of health and professional paths, notably in terms of major work-related events. Chapter 1 aims to measure, in 4,100 French workers aged 30-55 in 2006, the causal impact of self-assessed mental health in 2006 (in the form of anxiety disorders and depressive episodes) on employment status in 2010. In order to control for endogeneity biases coming from mental health indicators, bivariate probit models, relying on childhood events and elements of social support as sources of exogeneity, are used to explain simultaneously employment and mental health outcomes. Specifications control for individual, employment, general health status, risky behaviours and professional characteristics. The results show that men suffering from at least one mental disorder (depression or anxiety) are up to 13 percentage points ( ) less likely to remain in employment. Such a relationship cannot be found in women after controlling for general health status. Anxiety disorders appear as the most impactful on men's capacity to remain in employment, as well as being exposed to both mental disorders at the same time ( ), in comparison to only one ( ). Chapter 2 estimates the causal impact of exposures to detrimental working conditions on self-declarations of chronic diseases. Using a rebuilt retrospective lifelong panel for 6,700 French individuals and defining indicators for physical and psychosocial strains, a mixed econometric strategy relying on difference-in-differences and matching methods, accounting for selection biases as well as unobserved heterogeneity, is implemented. For men and women, deleterious effects of both types of working conditions on the declaration of chronic diseases after exposure can be found, with varying patterns of impacts according to the strains' nature and magnitude. In physically exposed men (resp. women), exposures are found to explain around (resp. between to ) of the total number of chronic diseases. Psychosocial exposures account, in men (resp. women), for (resp. ) of the total number of chronic diseases. Chapter 3 assesses the role of retirement on physical and mental health outcomes in 4,600

The issue of job retention for people with mental disorders appears to be essential for several reasons. It is established that overwork deteriorates both physical and mental health [START_REF] Bell | Work Hours Constraints and Health[END_REF]. Moreover, the intensity of work (high pace and lack of autonomy) and job insecurity lead employees to face more arduous situations. In addition, part-time jobs, when not chosen, affect mental health [START_REF] Robone | Contractual conditions, working conditions and their impact on health and well-being[END_REF]. The relationship between mental health and employment has been widely documented in the literature, establishing a two-way causality between the two. A precarious job or exposure to detrimental working conditions can affect mental health.
Self-reported health indicators are also characterized by justification biases and measurement errors, as well as social heterogeneity in reporting [START_REF] Akashi-Ronquest | Measuring the biases in selfreported disability status: evidence from aggregate data[END_REF][START_REF] Etilé | Income-related reporting heterogeneity in self-assessed health: evidence from France[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. Subjective measures of mental health are specifically associated with a measurement bias, which calls for disentangling the links between physical and mental health. Just like for physical health status, selection effects are also at work, individuals with mental disorders being found less often in employment. Mental health measurements are also potentially subject to a specific selection bias linked to the psychological inability to answer questionnaires. Our goal is to establish the causal effect of mental health on job retention using French data. This study is inspired by [START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF] who measure the impact of physical health and risky behaviours on leaving employment four years later. While many studies focus on the role of mental health on employability, few of them acknowledge its impact on workers' capacity to remain in their jobs. We also expand on the literature by considering the endogeneity biases generated by reverse causality (effect of employment on mental health). Another addition is that we take into account the role of physical health status, which may very well act, when unaccounted for, as a confounding factor when analysing the specific effect of mental health on employment outcomes. To our knowledge, no French study has empirically measured the specific effect of mental health on job retention while addressing these biases. We articulate our article as follows. We first expose in a literature review the main empirical results linking mental health and employment status. We then present the database and empirical strategy. A final section presents the results and concludes.

The links between mental health and employment

Mental health measurements

The economic literature establishing the role of mental health on employment mainly retains two definitions of mental health. The first one focuses on heavy mental disorders, such as psychoses [START_REF] Bartel | Some Economic and Demographic Consequences of Mental Illness[END_REF]. Notably, many studies evaluate the ability to enter the labour market for individuals with schizophrenia [START_REF] Greve | Useful beautiful minds-An analysis of the relationship between schizophrenia and employment[END_REF]. The second one is based on more common but less disabling disorders such as stress or depression. Often used to assess mental health, these disorders are observed using standardized measures and are presented in the form of scores. Thus, the Kessler Psychological Distress Scale (K-10) uses 10 questions about the last 30 days to evaluate individuals' overall mental state [START_REF] Dahal | An econometric assessment of the effect of mental illness on household spending behavior[END_REF][START_REF] Kessler | The effects of chronic medical conditions on work loss and work cutback[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF].
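As an illustration of how such scores are typically built (a minimal sketch with hypothetical variable names; the exact item coding and cut-offs depend on the survey used), the K-10 total is usually obtained by summing the ten items, each scored from 1 (none of the time) to 5 (all of the time), yielding a score between 10 and 50, with higher values indicating greater psychological distress:

* Hypothetical K-10 items k10_item1 to k10_item10, each coded 1-5
egen k10_score = rowtotal(k10_item*)
* Flag high psychological distress, using a commonly used cut-off of 30
gen high_distress = (k10_score >= 30) if !missing(k10_score)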
Like in the K-10 questionnaire, the Short-Form General Health Survey (SF-36) evaluates mental health over the past four weeks with questions about how individuals feel (excitement, sadness, lack of energy, fatigue, ...) [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF]. Another quite similar score was built, this time focusing on senior workers (age 50-64): the Center for Epidemiologic Studies Depression Scale (CES-D), with more specific questions such as isolation and self-esteem [START_REF] Chang | Mental health and employment of the elderly in Taiwan: a simultaneous equation approach[END_REF]. However the simplification risk linked to the aggregate nature of these scores justified the setup of other indicators to better approximate the true mental health diagnosis. Indicators of generalized anxiety disorders and major depressive episodes were then used, allowing a further analysis of mental health [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF][START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF]. They allow to identify the population suffering from these disorders and their symptoms (see Appendix 1 and Appendix 2). Despite their specificity and without being perfect substitutes to a medical diagnosis, these indicators prove robust to detect common mental disorders. In addition, the subjective nature of the declaration of health in general and particularly of mental health, makes it difficult to make comparisons between two apparently similar declarations [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF], notably due to reporting biases [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF] try to assess the importance of reporting biases in mental health and unveil that a latent health condition greatly contributes to mental health: two individuals may declare different mental health conditions depending on their general and physical health status. A person with a poor general condition will indeed be more likely to report a more degraded mental health status than a person in good general health. [START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF] confirm these results and show a strong correlation between physical and mental health, particularly among women. 1.2. The influence of mental health on employment: a short literature review Methodological difficulties If the measurement of mental health from declarative data is not trivial, the relationship between mental health and employment is also tainted by endogeneity biases associated with reverse causality and omitted variables. 
From a structural point, we can quite easily conceive that if mental health and employment are observed simultaneously, the relationship will be bidirectional [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF][START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF]. In particular, being unemployed may impair individuals' mental health [START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF]. The omission of variables leads to unobserved heterogeneity, which is also potentially a source of endogeneity when measuring the impact of mental health on employment. Risk preferences [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF], workers' involvement at work and the ability to give satisfaction [START_REF] Nelson | Survival of the Fittest: Impact of Mental Illness on Employment Duration[END_REF], personality traits, family background [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF], risky behaviours (smoking, alcohol and overweight) are related to mental health as much as employment. These factors, remaining unobservable for some of them in household surveys, therefore act as confounders. [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] conclude, from Australian data (pooled data from the National Health Survey -NHS) and multivariate probit methods, that tobacco consumption in men and women as well as overweight in women increase the risk of reporting mental disorders. These behaviours are also shown to have a specific effect on the situation on the labour market [START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF]. Finally, it is possible to highlight some justification biases. Individuals may alter their health status declarations in order to rationalize their choices on the labour market in front of the interviewer [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. For example, the non-participation to the labour market can be justified ex-post by the declaration of a worse health status. [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF] showed on Dutch panel data using fixed effects models, that economic incentives are likely to distort health status declarations. This still seems to be the case on Irish panel data and after controlling for unobserved heterogeneity [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF]. Effects of mental health on employment To address these methodological issues, the empirical literature makes use of instrumental variables and panel data models allowing to take care of unobserved heterogeneity by including fixed effects and reverse causality by a time gap between exogenous variables and the outcome. 
Whatever the mental health indicators, the various studies appear to converge on a detrimental role of deteriorated mental health on employment outcomes. Thus, [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF] find, using bivariate Probit models and Two-Stage Least Squares (2SLS) performed on crosssectional data, that people suffering from mental disorders (major depressive episodes and generalized anxiety disorders) in the 12 last months are much less likely to be in employment than others at the time of the survey. They do not find a significant effect of these mental conditions on the number of weeks worked and days of sick-leaves in individuals in employment after controlling for socioeconomic characteristics, chronic diseases and the area of residence in the U.S. territory. [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF] show, on cross-sectional data using two-stage (2SLS and bivariate probit) and Altonji Elder and Taber modelling (AET - [START_REF] Altonji | Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools[END_REF] and taking into account unobserved heterogeneity, that these mental disorders appearing in the last 12 months reduce by an average of 15% the likelihood to be in employment at the time of the survey. An American study, resorting in instrumental variable methods, found that most people with mental disorders are in employment, but more pronounced symptoms reduce their participation to the labour market [START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF]. Finally, simultaneous modelling on Taiwanese pooled data confirms that a degraded mental health decreases the probability of working, while specifying that the prevalence of these disorders is lower among workers, thus inducing a protective effect of work on mental health [START_REF] Chang | Mental health and employment of the elderly in Taiwan: a simultaneous equation approach[END_REF]. [START_REF] Cottini | Mental health and working conditions in Europe[END_REF] also confirm reverse causality in the relationship, using instrumental variables in three waves of the European Working Conditions Survey (EWCS), stressing the negative effects of poor working conditions on mental health. These average effects are heterogeneous according to age and sex. [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] conducted stratified regressions on two age groups: the 18-49 years-old on the one hand and the 50-64 years-old on the other hand and find that mental health-related discriminations on the labour market are greater in middle-aged workers than for older workers. Sex effects are also important. The role of mental disorders on employment seems stronger in men [START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. However, there is no consensus on this fact in the literature. 
[START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF] show a stronger effect of mental health on women's employment, using Australian panel data (Household, Income and Labour Dynamics in Australia -HILDA) and several models, including bivariate Probit and fixed effects model. What instrument(s) for mental health? It is necessary to identify an instrument whose influence on mental health is established in the empirical literature (1.3.1) without being correlated with the error term (1.3.2). The determinants of mental health Determinants and other factors related to mental health are numerous in the literature and can be classified into three categories: social determinants, major life events and work-related factors. Social factors refer to the society role of the individual and to his/her social relationships. [START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF] identify three types of social roles being correlated with a better mental health condition: the roles of partner, parent and worker. Being in a relationship is associated with a stronger declaration of good mental health and a lower risk of depression and anxiety [START_REF] Kelly | Determinants of mental health and well-being within rural and remote communities[END_REF][START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF]. Endorsing the two roles of parent and partner seems linked to a better mental health. Professional activity can slow the depreciation rate of one's mental health capital, as shown by a study on panel data taking into account the endogenous nature of the relationship between health and employment [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. In contrast, [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF] show that unemployment is often correlated with worse mental health status among men and in women to a lesser extent. The combinations of these roles correspond to increased chances of reporting good mental health condition by 39% [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF][START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF]. Major life events also play a role in the determination of mental health. Unemployment and furthermore inactivity occurring during the beginning of professional life can induce the onset of depressive symptoms later on, as shown on U.S. panel data by [START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF]. Using a fixed effects framework on panel data, [START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF] establish that events such as illnesses or death of a close relative or partner impairs mental health. 
Moreover, marital separations and serious disputes within or outside the couple seem correlated with poorer mental health [START_REF] Dalgard | Negative life events, social support and gender difference in depression: A multinational community survey with data from the ODIN study[END_REF][START_REF] Kelly | Determinants of mental health and well-being within rural and remote communities[END_REF]. Past or present financial problems are also often associated with the occurrence of common mental disorders such as depression and anxiety [START_REF] Laaksonen | Explanations for gender differences in sickness absence: evidence from middle-aged municipal employees from Finland[END_REF][START_REF] Weich | Material Standard of Living, Social Class, and the Prevalence of the Common Mental Disorders in Great Britain[END_REF], as well as the deterioration of physical health (especially in women) [START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF]. A poor health status or the presence of disability during childhood also bears negative consequences on mental health at older ages and on the declaration of chronic diseases, regardless of the onset age [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. Work-related factors may also have an effect on mental health. Atypical labour contracts such as part-time jobs increase the occurrence of depressive symptoms in employees [START_REF] Santin | Depressive symptoms and atypical jobs in France, from the 2003 Decennial health survey[END_REF]. [START_REF] Bildt | Gender differences in the effects from working conditions on mental health: a 4-year follow-up[END_REF] show, using multivariate models, that exposure to detrimental working conditions can have a deleterious effect on mental health four years later, with sex-related differences. Men would be most affected by changes in tasks and a lack of recognition at work when in women, other specific conditions such as the role of the lack of training and lack of motivation and support at work are highlighted. Other factors linked to sex and associated with poorer mental health are found by [START_REF] Cohidon | Working conditions and depressive symptoms in the 2003 decennial health survey: the role of the occupational category[END_REF]: the preponderance of work, contacts with the public, repetitive tasks and the lack of cooperation at work in men and the early beginning of career and involuntary interruptions in women. Instruments in the literature and choices in our study In the diversity of explanatory factors for mental health, only some of them have been retained in the economic literature as valid and relevant instruments. [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF] used the death of a close friend intervened in the twelve months preceding the survey as an instrument for mental health. [START_REF] Hamilton | Better Health With More Friends: The Role of Social Capital in Producing Health: BETTER HEALTH WITH MORE FRIENDS[END_REF] used the stressful events in life, the regularity of sport and a lagged mental health indicator, the latter being also used by [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF]. 
The psychological status of parents [START_REF] Ettner | The Impact of Psychiatric Disorders on Labor Market Outcomes (No. w5989[END_REF][START_REF] Marcotte | The labor market effects of mental illness The case of affective disorders[END_REF], that of children [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF][START_REF] Ettner | The Impact of Psychiatric Disorders on Labor Market Outcomes (No. w5989[END_REF], and social support [START_REF] Alexandre | Labor Supply of Poor Residents in Metropolitan Miami, Florida: The Role of Depression and the Co-Morbid Effects of Substance Use[END_REF][START_REF] Hamilton | Better Health With More Friends: The Role of Social Capital in Producing Health: BETTER HEALTH WITH MORE FRIENDS[END_REF][START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF] were also frequently introduced. These factors were privileged because they are valid determinants of mental health while meeting the exogeneity assumption, either because of their temporal distance from the other factors explaining employment or because of their absence of direct effects on employment. We make use of this literature by choosing proxies of mental health during childhood (violence suffered during this period and having been raised by a single parent) and an indicator of psychological status and social support during adult life (marital breakdowns), with a different approach according to sex, as suggested by the literature. Doing so, we put some temporal distance between these events and employment status (events occurring during childhood are observed up to age 18 whereas our working sample includes only individuals aged 30 and older; marital ruptures occur before 2006), and there is a low probability of direct effects of these instruments on the employment status of 2010, since the professional route characteristics, employment at the time of the survey and risky behaviours are also controlled for. The description of the general sample is presented in Table 29 (Appendix 6). Women report more frequent physical and mental health problems: anxiety disorders (7%), depressive episodes (8%), poor perceived health status (22%) and chronic illness (28%) are more widely reported by women than by men (resp. 4%, 3%, 18% and 25%). These response behaviours are frequently raised in the literature and testify, at least for some of them, to the presence of reporting biases (rather downward for men, rather upward for women), as shown notably by [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF] or by Shmueli in 2003. Conversely, risky behaviours are substantially more prevalent in men. This is the case for daily smoking (28% in men vs. 24% in women) but it is even more acute for alcohol consumption (46% vs. 14%) and overweight (51% vs. 29%).

Empirical analysis

Figure II: Prevalence of health problems in the population in employment in 2006. Reading: 6% of men and 12% of women report having at least one mental disorder (GAD or MDE).

Health problems and job retention

82% of men in employment and suffering from at least one mental disorder in 2006 are still in employment in 2010, against 86% of women (1). Anxiety disorders have the biggest influence: 79% of men are employed (vs. 88% of women). General health status indicators show fairly similar results for men and women.
For risky behaviours, daily tobacco consumption showed no significant difference in employment rates between men and women, while alcohol (93% vs. 90%) and overweight (93% vs. 89%) are associated with comparatively lower employment rates for women than for men (Figure III).

(1) Given the weakness of some of the subsample sizes, one must be cautious about the conclusions suggested by these figures.

Mental health and general health status

A strong correlation between general and mental health status is observed in the sample. About 20% of men and women suffering from at least one mental disorder also reported activity limitations, against 10% in the entire sample with normal mental health condition (see Figure II). Nearly 50% of them report poor perceived health (vs. 20% overall). Chronic diseases (45% vs. 25%) and daily tobacco consumption (30% vs. 25%) are also more common among these individuals. 53% of men and 17% of women with mental disorders declare risky alcohol consumption, against 46% and 13% resp. in the overall sample. Finally, overweight is declared by 44% of men and 31% of women with mental disorders, against resp. 51% and 29% in the overall sample.

Econometric strategy

Univariate models

The econometric strategy is based on two steps to correct for individual heterogeneity and the possibility of reverse causality. In a first step, we estimate binomial univariate probit models to measure, among people in employment in 2006, the effect of mental health in 2006 on the likelihood to remain in employment in 2010 (in employment vs. unemployed - dependent variable $E_{i,2010}$). Several specifications are tested and we stratify by sex for each one of them due to strong gendered differences in mental health linked to social heterogeneity in declarations [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF][START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF]. We take a three-step strategy to gradually add relevant variable groups in the model and thus assess the robustness of the correlation between mental health in 2006 and employment in 2010 by gradually identifying confounders. The first baseline specification (1) explains job retention by mental health status, controlling for a set of standard socioeconomic variables:

$P(E_{i,2010} = 1) = \Phi(\alpha_0 + \alpha_1 MH_{i,2006} + X_i' \beta)$ (1)

where $\Phi$ is the standard normal cumulative distribution function. Mental health in 2006 ($MH_{i,2006}$) is represented by a binary variable taking the value 1 when individual $i$ is suffering from a generalized anxiety disorder or a major depressive episode, or both, and 0 otherwise. Socio-economic variables are represented by the vector $X_i$. They include age (in five-year increments from 30 to 55 years), marital status, presence of children, educational level, professional category, industry sector, type of employment (public, private, or independent) and part-time work. Age plays a major role in the employability of individuals and in the reporting of mental disorders [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF].
Current marital status and the presence of children in the household can also affect employability (especially in women) and reported mental health, since people in a relationship with children turn out to be in better health [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF][START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF]. Work characteristics are also integrated [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. An intermediate specification (2) is then performed with the addition of three variables from the European Mini-Module about individuals' general health status: their self-assessed health (taking the value 1 if it is good, and 0 for poor health), the fact that they suffer from chronic diseases or not and whether they are limited in their daily activities. These health status indicators are used in order to effectively isolate the specific effect of depression and anxiety on the position on the labour market (to disentangle it from that of the latent general health status [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF]). This model also includes three variables of risky behaviours: being a daily smoker, a drinker at risk or overweight. The objective of these variables is to determine to what extent the role of mental health does not partly operate through risky behaviours [START_REF] Butterworth | Poor mental health influences risk and duration of unemployment: a prospective study[END_REF][START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF][START_REF] Lim | Lost productivity among full-time workers with mental disorders[END_REF]. Such behaviours are indeed known to affect the reporting of activity limitations in general [START_REF] Arterburn | Relationship between obesity, depression, and disability in middle-aged women[END_REF], employability [START_REF] Paraponaris | Obesity, weight status and employability: Empirical evidence from a French national survey[END_REF], or the incidence of disease and premature mortality [START_REF] Teratani | Dose-response relationship between tobacco or alcohol consumption and the development of diabetes mellitus in Japanese male workers[END_REF] as well as work-related accidents [START_REF] Bourgkard | Association of physical job demands, smoking and alcohol abuse with subsequent premature mortality: a 9-year follow-up population-based study[END_REF][START_REF] Teratani | Dose-response relationship between tobacco or alcohol consumption and the development of diabetes mellitus in Japanese male workers[END_REF]. Finally, the last specification (3) adds two variables related to the professional route, reconstructed using retrospective information, which is likely to play a role on the individual characteristics in 2006 and employment transitions observed between 2006 and 2010. The objective is to control our results for potentially unstable careers (state dependence phenomenon), leading to a greater fragility on the labour market [START_REF] Kelly | Determinants of mental health and well-being within rural and remote communities[END_REF][START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF].
The professional route variables added in specification (3) include time spent in contracts of more than 5 years and the stability of the employment path, represented by the number of transitions made between jobs over 5 years, short periods of employment, periods of unemployment of more than one year and periods of inactivity.

(3)

General health status variables and risky behaviours in 2006 are presented in vector  and control variables on the professional route are included in the vector . Thus, the relationship between the employment status of 2010 and mental health status in 2006 is controlled for general health status, health-related risky behaviours and elements linked to the professional route. However, as widely explained in the literature, our mental health variable potentially suffers from endogeneity biases. Direct reverse causality is most likely ruled out since there is a time gap between our measure of mental health (2006) and that of employment (2010) and since the nature of the past professional career (and de facto the employment status in 2006) is taken into account. However, some individual characteristics (unobserved individual heterogeneity) linked not only to employment but also to mental health are not included in our model, and the measurement of mental health is likely to be biased. We are in the presence of an endogenous mental health variable, due to omitted variables.

Handling endogeneity biases

In order to take this endogeneity issue into account, we rely on a bivariate probit model. As suggested by the literature dealing with biases related to mental health variables [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF][START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF][START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF], this bivariate probit model is estimated by maximum likelihood. It is somewhat equivalent to the conventional linear two-stage approaches. The two simultaneous equations to estimate can be written as follows:

(4)

(5)

where  and  are the respective residuals for equations (4) and (5). Despite the inclusion of these control variables, it is likely that the residuals of these two equations are correlated, inducing . Several reasons can be stated. First, in the case of simultaneous observations of health status and employment outcomes, there is a high risk of reverse causality. In our case, to the extent that both are separated by several years, we limit this risk. However, it seems possible that there are unobserved factors that affect not only the mental health condition but also the capacity to remain employed, such as individual preferences or personality traits. Notably, an unstable employment path before 2006 is one of the explanatory factors of the mental health of 2006 as well as of the employment status of 2010 (state dependence). Thus, only estimating equation (4) would result in omitting part of the actual model. In such a case, a bivariate probit model is required in the presence of binary outcome and explanatory variables [START_REF] Lollivier | Économétrie avancée des variables qualitatives[END_REF].
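The log-likelihood that such a bivariate probit maximizes can be sketched as follows. This is a minimal, purely illustrative implementation on simulated data with hypothetical variable names (the chapter's own estimations rely on dedicated statistical routines); it simply makes explicit how the two binary equations are tied together through the correlation of their residuals. In this toy example, the employment equation contains the mental health dummy, and the mental health equation contains two excluded variables, in the spirit of the identifying variables introduced below.

```python
# Minimal, illustrative sketch of a bivariate probit log-likelihood with
# correlated errors (simulated data, hypothetical variable names).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_loglik(params, y_emp, y_mh, X_emp, X_mh):
    k1, k2 = X_emp.shape[1], X_mh.shape[1]
    b1, b2, arho = params[:k1], params[k1:k1 + k2], params[-1]
    rho = np.tanh(arho)                      # keeps the correlation inside (-1, 1)
    q1, q2 = 2 * y_emp - 1, 2 * y_mh - 1     # recode the 0/1 outcomes to -1/+1
    w1, w2 = X_emp @ b1, X_mh @ b2
    ll = np.empty(len(y_emp))
    for s1 in (-1.0, 1.0):                   # loop over the four outcome patterns
        for s2 in (-1.0, 1.0):
            m = (q1 == s1) & (q2 == s2)
            if m.any():
                cov = [[1.0, s1 * s2 * rho], [s1 * s2 * rho, 1.0]]
                pts = np.column_stack([s1 * w1[m], s2 * w2[m]])
                p = multivariate_normal.cdf(pts, mean=[0.0, 0.0], cov=cov)
                ll[m] = np.log(np.clip(p, 1e-300, None))
    return -np.sum(ll)

# Simulated example with correlated residuals between the two equations.
rng = np.random.default_rng(1)
n = 1500
z = rng.integers(0, 2, (n, 2)).astype(float)        # stand-ins for identifying variables
x = rng.normal(size=(n, 2))                         # shared controls
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 1.0]], n)
y_mh = (0.5 * z[:, 0] + 0.4 * z[:, 1] + x @ np.array([0.2, -0.1]) + u[:, 1] > 0).astype(float)
y_emp = (-0.6 * y_mh + x @ np.array([0.3, 0.2]) + u[:, 0] > 0).astype(float)

X_emp = np.column_stack([np.ones(n), y_mh, x])      # equation (4): mental health enters
X_mh = np.column_stack([np.ones(n), z, x])          # equation (5): excluded variables enter
start = np.zeros(X_emp.shape[1] + X_mh.shape[1] + 1)
res = minimize(neg_loglik, start, args=(y_emp, y_mh, X_emp, X_mh), method="BFGS")
print("estimated correlation of the residuals:", round(float(np.tanh(res.x[-1])), 3))
```

The tanh reparameterisation is only a convenient way of keeping the estimated correlation inside (-1, 1) during the optimisation.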
A new specification ( 6) is therefore implemented, taking the form of a bivariate probit model using specification (3) as the main model and simultaneously explaining mental health by three identifying variables (vector ): (6) We assume that the error terms follow a bivariate normal distribution: In theory it is possible to estimate such a model without resorting to identifying variables (exclusion condition). However it is generally preferred, in the empirical literature, to base estimates on the exclusion criterion and use identifying variables. The identifying variables used in this study are chosen in line with the literature on the determinants of mental health status and are taken from Sip's lifegrid: we use the fact of having been raised by a single parent, having suffered from violence during childhood from relatives or at school and finally having experienced many marital breakdowns. We differentiate our instruments by sex 2 : for men, we retain having suffered violence and marital breakdown; for women, having suffered violence and having been raised by a single parent. Using a binary endogenous variable of mental health, there is no real specialized test to assess the validity of our identifying variables. However, correlation tests have been conducted (presented in Table 32 andTable 33, Appendix 7) to determine if they are likely to meet the validity and relevance assumptions. According to these limited tests, our three identifying variables are not likely to breach these assumptions. This intuition also tends to be confirmed by the estimates for , the comparison of univariate and bivariate estimations for employment status (Table 1 andTable 2) and for mental health (Table 34, Appendix 7) (see section 3.2). On a more theoretical standpoint, because we only consider individuals aged 30 or more in 2006 (i.e. being in employment since some time in 2010) and because violence and the fact of being raised by a single parent relate to events occurring during childhood (before age 18), we are confident that these variables should not have a direct impact on employment status in 2010 (especially considering the stability of career paths are accounted for and because only individuals in employment are selected in our sample). On the other hand, marital breakdowns should not specifically be correlated with men's behaviour on the labour market 3 . Results A poor mental health condition decreases the likelihood to remain in employment We test three specifications of the probability of being employed in 2010 among people employed in 2006 in order to decompose the effect of mental health in 2006 but also to try taking into account for confounding factors. The baseline model presented in Table 1 for men and Table 2 for women (specification 1) shows that men and women suffering from GAD and/or MDE in 2006 are less likely to remain in employment in 2010, after controlling for the individual and employment characteristics of 2006. Men in employment and declaring suffering from at least one mental 2 Following the dedicated literature indicating strong sex-linked relationships in the determinants for mental health, we decided to differentiate our instruments for men and women. Initial estimations including all three instruments (available upon request) have still been conducted, indicating similar yet slightly less precise results. 3 The data management has been done using SAS 9.4. The econometric strategy is implemented in Stata 11 using respectively the "probit" and "biprobit" commands. 
disorder in 2006 are on average  percentage points (pp) less likely to remain in employment in 2010 ( less likely in women). The other determinants of employment, however, differ between men and women, in agreement with what other French studies have observed [START_REF] Barnay | Santé déclarée et cessation d'activité[END_REF]. In addition to mental health, among women, the predictors of unemployment are age (over 45), the presence of children and working in the agricultural or industrial sectors (vs. ). In specification 2, we include general health status (self-assessed health, chronic diseases and activity limitations) and risky behaviours (daily tobacco consumption, risky alcohol drinking and being overweight). This new specification allows the assessment of potential indirect effects of mental health, operating through the latent health status [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF]. In the male population, the coefficient associated with mental health declines slightly (the decline in the probability of remaining in employment falls from  to ) but remains very significant. Activity limitations ( ) and daily tobacco consumption ( ) also play a role in job loss regardless of the effect of mental health. Being observed simultaneously, general health, mental health and risky behaviours cannot be causally disentangled in this type of model, but the explicit inclusion of these variables tends to reduce social employment inequalities in our results. In the female population, the impact of health status on employment does not seem to go through mental health as we measure it, but rather through a poor general health status and activity limitations ( ). Risky behaviours, however, appear to have no impact on job retention in women. Past professional career information (in terms of security and stability of employment) is added in a third specification. It allows us to control for the nature of the professional career, which influences both mental health and employment. While stable job trajectories (marked by long-term, more secure jobs) favour continued employment between 2006 and 2010, the deleterious effect of a poor mental health condition on employment is robust to this third specification in men. In women, employment stability does not contribute to the transitions in employment between 2006 and 2010. In line with the empirical literature, we find the most conventional determinants of labour market outcomes in our data. Age, the presence of children and part-time work among women, and the level of education and professional category in men, are found to have a significant impact on the ability of individuals to remain in employment. Mental health is found to be very significant in men but not in women, which again appears to be in line with the literature [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF][START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF].
The study of [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF] however goes in the opposite direction, indicating a stronger effect in women which could possibly be explained by the lack of controls for general health status in this study, while the links between physical and mental health are strong in women [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF]. As an illustration, our regressions also find a significant effect of mental health in women when we do not take into account general health status (Table 2, specification 1). Being a daily smoker is shown to have important consequences on men's employment in 2010, in agreement with the literature [START_REF] Butterworth | Poor mental health influences risk and duration of unemployment: a prospective study[END_REF][START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF]. Alcohol and overweight do not play a significant role on employment in our regressions. Despite the decrease in the accuracy of the estimates for employment status, the use of identifying variables should enable the establishment of a causal relationship. The use of this type of models seems justified by the significance (for men) of the correlation coefficient ( ) between the residuals of the two simultaneous equations. In addition, evolutions in the results between univariate and bivariate employment and mental health models (Table 1, Table 2 and Robustness checks To assess the robustness of our results, we tested two other alternative specifications to better understand mental health (differentiating MDE and GAD and taking into account their cumulative effects), we considered other age groups5 and a shorter temporal distance between mental health and employment (it indeed may be questionable to measure the role of poor mental health on employment four years later). MDE versus GAD We first wanted to better understand the respective roles of MDE and GAD on job retention. Table 4 shows the results when considering MDE alone (specification 1), GAD alone (specification 2) and a counter of mental disorders (indicating if an individual faced one or both mental disorders at once). This decomposition of mental health disorders did not change the results in the female population: even when women report suffering from both MDE and GAD, mental health problems do not significantly affect their employment trajectory. In men, GAD marginally plays the major role on the inability to remain in employment compared to for MDE) and suffering from both mental disorders significantly deteriorates their labour market outcomes ( ). An employment indicator over the period 2007-2010 The measurement of the impact of mental health on employment outcomes is potentially subject to biases given the duration of the observation period. Career paths and mental health between 2006 and 2010 may have been significantly affected by the effects of economic conditions (notably the economic crisis of 2009) regardless of the mental health condition of 2006. 
To deal with this problem, we set up a more restrictive approach by considering individuals who were in employment for at least 3 years between 2007 and 2010 (and not only in employment in 2010 precisely). The results are presented in Table 5.

Discussion and conclusion

This study demonstrates that a degraded mental health condition directly reduces the ability of men to remain in employment four years later, after controlling for socioeconomic characteristics, employment, general health status, risky behaviours and the nature of past professional careers. Our study confirms the importance of mental health when considering work and employment. It appears appropriate to continue implementing public policies to support people with mental disorders from their entry into the labour market onwards, while extending them to common mental disorders such as depressive episodes and anxiety disorders, whose prevalence is high in France. We bring new elements with respect to sex differences in the impact of mental health, after controlling for general health status. In men, activity limitations and GAD play a specific and independent role in professional paths. However, in women, only general health indicators (perceived health and activity limitations) are capable of predicting future job situations. This differentiation between men and women is also confirmed in terms of mental health determinants, which is taken into account here by using different identifying variables according to sex. Consequently, accompanying measures for men at work could be helpful in keeping them on the labour market. Notably, the French 31). As a consequence, the differences we find could very well be explained, at least partly, by the fact that a man and a woman both declaring that they face anxiety disorders or depressive episodes may depict two different realities. Notably, it is acknowledged that men have a tendency to declare such issues when their troubles are already at a more advanced state (in terms of intensity of the symptoms) than women. Even though our indicators are relatively robust to false positives, this is less the case for false negatives (as explained in Appendix 5). It would also be interesting to determine the transmission channels of these differences. The distinction between GAD and MDE demonstrates the sensitivity of our results to the definition of mental health. As such, robustness checks using a mental health score to better capture the nature and intensity of mental health degradations would help to better assess its effect on employment. Yet, no such score is available in the survey.

Introduction

In a context of changing and increasing work pressures [START_REF] Askenazy | Innovative Work Practices, Information Technologies, and Working Conditions: Evidence for France: Working Conditions in France[END_REF], the question of working conditions has become even more acute. Notably, a law implemented in 2015 in France fits into this logic and either offers access to training programs in order to change jobs, or gives the most exposed workers an opportunity to retire earlier. The relationship between employment, work and health status has received considerable attention in the scientific community, especially in fields such as epidemiology, sociology, management, psychology and ergonomics.
From a theoretical standpoint in economics, the differences in wages between equally productive individuals can be explained by differences in the difficulty of work-related tasks, meaning workers with poorer working conditions are paid more than others in a perfectly competitive environment [START_REF] Rosen | Hedonic Prices and Implicit Markets: Product Differentiation in Pure Competition[END_REF]. In this framework, it is possible to imagine that health capital and wealth stock are substitutable, hence workers may use their health in exchange for income [START_REF] Muurinen | The economic analysis of inequalities in health[END_REF]. From an empirical point of view, the question of working conditions and their potential effects on health status becomes crucial in a general context of legal retirement age postponement being linked to increasing life expectancy and the need to maintain the financial equilibrium of the pension system. Prolonged exposures throughout one's whole career are indeed likely to prevent the most vulnerable from reaching further retirement ages, a fortiori in good health condition. However, this research area has received less attention because of important endogeneity problems such as reverse causality, endogenous selection and unobserved heterogeneity [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF] as well as the difficulty in fully embracing the diversity and magnitude of exposures. Nevertheless, a large majority of the studies agree that there is a deleterious effect on health status from detrimental working conditions. In this paper, I examine the role of physical and psychosocial working conditions as well as their interactions when declaring chronic diseases. I expand on the aforementioned literature by two means. First, I rely on a sample of around 6,700 French male and female workers who participated in the French Health and Professional Path survey (Santé et Itinéraire Professionnel -Sip), for whom it is possible to use retrospective panel data for reconstructing their entire career from their entry into the labour market to the date of the survey. This allows me to resolve the inherent endogeneity in the relationship caused by selection biases and unobserved heterogeneity using a difference-in-differences methodology combined with matching methods. My second contribution arises from being able to establish and analyze the role of progressive and differentiated types of exposures and account for potentially delayed effects on health status. I believe such a work does not exist in the literature and that it provides useful insights for policy-making, particularly in regard to the importance of considering potentially varying degrees of exposures as well as the physical and psychosocial risk factors in a career-long perspective. The paper first presents an overview of the economic literature (Section 1), the general framework of this study (Section 2), the data (Section 3) and empirical methodology (Section 4). Then, the results are presented, along with robustness checks and a discussion (Section 5, Section 6 and Section 7). 1. 
Literature Global effect of work strain on health status Unlike in fields such as epidemiology, working conditions and their impact on health status did not receive a lot of attention in the economic literature [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF][START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF]). Yet, this literature agrees on a deleterious average effect of work strain on workers' health capital. The numerous existing indicators used to assess this role usually classify the strains into two main categories: those related to physical or environmental burdens (expected to influence mostly physical health status) and psychosocial risk factors (supposed to have a major part to play in the deterioration of mental health). Having a physically demanding job is known to impact self-rated health [START_REF] Debrand | Working Conditions and Health of European Older Workers[END_REF]. Notably, [START_REF] Case | Broken Down by Work and Sex: How Our Health Declines[END_REF] use multiple cross-sectional data to find that manual work significantly deteriorates self-assessed health status. This result is robust to the inclusion of classical socio-demographic characteristics such as education and it varies according to the levels of pay and skills involved. This was later confirmed by [START_REF] Choo | Wearing Out -The Decline of Health[END_REF], who also used cross-sectional data, controlling for chronic diseases and risky health behaviours. Using panel data, [START_REF] Ose | Working conditions, compensation and absenteeism[END_REF] have an influence on workers' health status. In a study on U.S. workers, the impact of facing detrimental environmental working conditions (weather, extreme temperatures or moisture) is found to specifically impact young worker's self-rated health status [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF]. This result, obtained on panel data using random effects ordered probits, accounts for initial health status. Datta Gupta and Kristensen (2008) use longitudinal data and cross-country comparisons to show that a favourable work environment and high job security lead to better health conditions, after controlling for unobserved heterogeneity. Psychosocial risk factors have been studied more recently in the empirical literature [START_REF] Askenazy | Innovative Work Practices, Information Technologies, and Working Conditions: Evidence for France: Working Conditions in France[END_REF], even though their initial formulation in the psychological field is older [START_REF] Karasek | Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign[END_REF][START_REF] Theorell | Current issues relating to psychosocial job strain and cardiovascular disease research[END_REF]. Individuals in a situation of Job strain (i.e. exposed to high job demands and low decisional latitude) are found to suffer more frequently from coronary heart diseases [START_REF] Kuper | Job strain, job demands, decision latitude, and risk of coronary heart disease within the Whitehall II study[END_REF]. [START_REF] Johnson | Combined effects of job strain and social isolation on cardiovascular disease morbidity and mortality in a random sample of the Swedish male working population[END_REF] demonstrated that social isolation combined with Job strain correlates with cardiovascular diseases (Iso-strain situation). 
Mental health is also potentially impaired by such exposures. [START_REF] Laaksonen | Associations of psychosocial working conditions with self-rated general health and mental health among municipal employees[END_REF] show that stress at work, job demands, weak decision latitude, lack of fairness and support are related to poorer health status. [START_REF] Bildt | Gender differences in the effects from working conditions on mental health: a 4-year follow-up[END_REF] show that being exposed to various work stressors such as weak social support and lack of pride at work may be related to a worse mental health condition, while [START_REF] Cohidon | Working conditions and depressive symptoms in the 2003 decennial health survey: the role of the occupational category[END_REF] stress the role of being in contact with the public. Improving on this ground, part of the literature focuses on the role of rewards at work and how it might help in coping with demanding jobs [START_REF] Siegrist | Adverse health effects of high-effort/low-reward conditions[END_REF]. Notably, de Jonge et al. (2000) use a large-scale cross-sectional dataset to find effects of Job demands and Effort-Reward Imbalance on workers' well-being. [START_REF] Cottini | Mental health and working conditions in Europe[END_REF] use three waves of European data on 15 countries. They take into account the endogeneity of working conditions related to selection on the labour market based on initial health status and find that job quality (in particular job demands) affects mental health. The role of simultaneous and chronic exposures Even though the economic literature on the topic of exposure to detrimental working conditions is scarce in regard to both simultaneous exposures (multiple exposures at once) and cumulative exposures (length of exposure to given strains), other fields such as epidemiology have demonstrated their importance in terms of work strains and their impact on health status [START_REF] Michie | Reducing work related psychological ill health and sickness absence: a systematic literature review[END_REF]. By its very nature, the literature that focuses on Karasek's and Biases More often than not, the literature's assessment of the health-related consequences of exposures to working conditions is plagued with several methodological biases that can lead to potentially misleading results. First, the choice of a job is unlikely a random experience [START_REF] Cottini | Mental health and working conditions in Europe[END_REF], resulting in contradictory assumptions. In particular, healthier individuals may tend to prefer (self-selection) or to be preferred (discrimination) for more demanding jobs [START_REF] Barnay | The Impact of a Disability on Labour Market Status: A Comparison of the Public and Private Sectors[END_REF]. In this case, the estimations are likely to be biased downwards because of individuals being both healthier and exposed to demanding jobs, thus being overrepresented in the sample (inducing a Healthy Worker Effect - [START_REF] Haan | Dynamics of health and labor market risks[END_REF]. Second, it is also reasonable to assume that workers with lesser health capital may have fewer opportunities in the labour market and thus be restricted to the toughest jobs, in which case an upward bias may result. 
Therefore, unobserved individual and temporal heterogeneities that are unaccounted for may also result in biased estimations [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF]. Individual preferences and risk aversion behaviours, as well as shocks, crises or other time-related events, can cast doubt on the exogeneity hypothesis of working conditions [START_REF] Bassanini | Is Work Bad for Health? The Role of Constraint versus Choice[END_REF]. Due to a lack of panel data that includes detailed information on both work and health status over longer periods, few papers have actually succeeded in handling these biases. Notably, [START_REF] Cottini | Mental health and working conditions in Europe[END_REF] implemented an instrumental variable strategy on repeated cross-sectional data while relying on variations across countries in terms of workplace health and safety regulation, doing so in order to identify the causal effect of detrimental working conditions on mental health. In most cases, the difficulty in finding accurate and reliable instruments for working conditions means that selection biases and unobserved heterogeneity are either treated differently or avoided altogether when working on cross-sectional data.

General framework

The main objective of this study is to assess the role of varying levels of exposure to detrimental working conditions in declaring chronic diseases. To do so, I rely on a difference-in-differences framework which considers a chronic diseases baseline period, i.e., the initial number of chronic diseases before all possible exposures to work strains, and a follow-up period after a certain degree of exposure has been sustained (the latter being called the treatment). After labour market entry, employment and working conditions are observed and the treatment may take place. To allow for more homogeneity in terms of exposure and treatment dates, as well as to ensure that exposure periods cannot be too far apart from each other, I observe working conditions within a dedicated period (starting from the labour market entry year). In order to be treated, one must reach the treatment threshold within this observation period. Individuals not meeting this requirement are considered controls. Minimum durations of work are also introduced: because individuals who do not participate in the labour market are likely to be very specific in terms of labour market and health characteristics, they are at risk of not really being comparable to other workers [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF].

Indications: in years. Reading: For the seventh threshold ( ), an individual must reach 16 years of single exposure or 8 years of poly-exposure within the 24 years following labour market entry to be considered treated. Also, he/she must have worked at least 8 years within this period to be retained in the sample. His/her health status will be assessed by the mean number of yearly chronic diseases at baseline (the 2 years before labour market entry), and three more times (follow-up periods) after the end of the working conditions observation period. Source: Author.

Nine progressive exposure levels (denoted ) have been designed in order to assess potentially varying effects of increasing strains on declaring chronic diseases.
In order to take into account the cumulative effects between strains, two types of exposure are considered (see first half of Table 6): single exposure (when an individual faced only one strain at a time each year) and poly-exposure (when an individual faced two or more strains simultaneously each year). Then, the duration of exposure is accounted for by introducing varying minimum durations of exposure (thresholds). Empirically, this framework covers exposure thresholds ranging from 4 years of single exposure or 2 years of poly-exposure ( ) to, respectively, 20 and 10 years of exposure ( ), with a step of 2 years (resp. 1 year) from one threshold to the next for single (resp. poly-) exposures. However, changing the treatment thresholds will, as a consequence, lead to other necessary changes in the framework, notably to the duration of the working conditions observation period and to the minimum duration at work within it (see second half of Table 6). More details about the choices made for these parameters can be found in Appendix 8. Note that only thresholds  to  are presented in the rest of the paper (for simplification purposes), because previous thresholds reveal no significant effect on chronic diseases from exposure to detrimental working conditions. Let us take the example of two fictitious individuals,  and , in the seventh threshold sample to illustrate the framework. To be treated, individual  needs to be exposed to at least 16 years of single exposure or 8 years of poly-exposure during the first 24 years after labour market entry. He also needs to have worked at least 8 years within this period to be retained in the sample. Individual , in order to be in the control group, needs to have been exposed to fewer than 16 years of single exposure and fewer than 8 years of poly-exposure within the 24 years after labour market entry. This individual may or may not be exposed after the 24-year observation period but will in any case still be a member of the control group for the threshold level considered ( in this example). Individual  needs, just like , to have worked at least 8 years within his/her observation period to remain in the sample. All in all, the only element separating  from  is the fact that  reached the exposure threshold within the working conditions observation period, while  did not. In this study, I work with this reconstructed longitudinal retrospective dataset comprising more than 6,700 individuals, including their career and health-related data from childhood to the year of the survey. Thus, the final working sample is composed of around 3,500 men and 3,200 women, for whom complete information is available and who meet the specific inclusion criteria described in Section 2 (see also Appendix 8 for more details).

Data

Variables of interest

Working conditions: definition of a treatment

Ten working conditions indicators are used and grouped into three sets. The first one gathers the physical strains. The second one forms the psychosocial risk factors, which include full skill usage, working under pressure, tensions with the public, reward, reconciling work and family life and relationships with colleagues. The third one represents the global exposure to both physical and psychosocial strains (which includes all ten working conditions indicators). For each indicator, individuals must declare whether they "Always", "Often", "Sometimes" or "Never" faced it during this period: I consider one individual to be exposed if he/she "Always" or "Often" declared facing these strains.
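As an illustration, the assignment rule can be sketched as follows, using the T7 parameters quoted above (at least 16 years of single exposure or 8 years of poly-exposure within the 24 years following labour market entry, and at least 8 years worked within that window). The data layout, variable names and toy histories are hypothetical.

```python
# Illustrative sketch of the treatment assignment rule (hypothetical data layout).
import pandas as pd

def classify(history, single_years=16, poly_years=8, window=24, min_work=8):
    """history: one row per year worked since labour market entry, with the
    number of strains faced that year; years out of work simply have no row."""
    obs = history[history["year_since_entry"] < window]
    if len(obs) < min_work:                      # minimum duration at work in the window
        return "excluded"
    single = (obs["n_strains"] == 1).sum()       # years with one strain at a time
    poly = (obs["n_strains"] >= 2).sum()         # years with two or more strains at once
    return "treated" if (single >= single_years or poly >= poly_years) else "control"

# Toy example: individual i reaches the poly-exposure threshold, individual j does not.
hist = pd.DataFrame({
    "id": ["i"] * 24 + ["j"] * 24,
    "year_since_entry": list(range(24)) * 2,
    "n_strains": [2] * 9 + [0] * 15 + [2] * 3 + [0] * 21,
})
print(hist.groupby("id").apply(classify))
```

In this toy example, individual i is classified as treated (9 years of poly-exposure) while individual j remains a control, which mirrors the two fictitious cases discussed above.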
Chronic diseases

The indicator of health status is the annual number of chronic diseases 11 : a chronic disease is understood in the Sip survey to be an illness that lasts or will last for a long time, or an illness that returns regularly. Allergies such as hay fever or the flu are not considered chronic diseases. This definition is broader than the French administrative definition, and it is self-declared. This indicator is available from childhood to the date of the survey (2006). Available chronic diseases include cardiovascular diseases, cancers, pulmonary problems, ENT disorders, digestive, mouth and teeth, bones and joints, endocrine, metabolic and ocular problems, nervous and mental illnesses, neurological problems, skin diseases and addictions. Table 7 gives a description of the sample used in the 7th threshold described above. I chose this specific threshold because it should give an adequate representation of the average of the studied population (as it is the middle point between presented thresholds  to  and because it should not differ in non-treatment-related characteristics for the most part, due to the samples used for all thresholds being the same).

General descriptive statistics

The main conclusions of these descriptive statistics are, first, that the populations who are to be physically and globally treated in the future seem to be in a better initial health condition than their respective control groups. Such a difference cannot be found in the psychosocial sample. Second, no significant effect of the physical and global treatments is observed on subsequent numbers of chronic diseases. This is once again the opposite for the psychosocial subsample, which displays increasingly significant and negative differences in the number of chronic diseases between treated and control groups, thus revealing a potentially detrimental effect on health status from psychosocial exposures. However, because the structures of the treated and control groups are very heterogeneous in terms of observed characteristics, the differences in chronic diseases for each period between the two are likely to be unreliable. Yet, for at least the physically and globally demanding jobs, there seem to be signs of a sizeable selection effect indicating that healthier individuals prefer or are preferred for these types of occupations. In a similar fashion, Table 8 below gives more detailed information about the different components of the reconstructed indicators for working conditions and chronic diseases for the 7th threshold. The first half of the table gives the average number of years of exposure to the ten work strains used in this study. The second half of the table gives an overview of the 15 chronic disease families used and the average number of these faced by the sample. Note that these chronic disease statistics do not hold for a specific period of time, but rather account for the entire life of the sample up until the date of the survey. What can be learned from these descriptive statistics about working conditions is that the most common types of strains, in terms of mean number of years of exposure, are facing a high physical load ( years), exposure to hazardous materials ( years), repetitive work ( years), work under pressure ( years) and the lack of recognition ( years).
Important differences depending on the type of treatment can also be logically seen: while exposures to a high physical burden, to hazardous materials and to repetitive work are predominant among the physically treated (resp. ,  and  years in comparison to their control group), the lack of recognition and working under pressure are specific characteristics of the psychosocially exposed workers (resp.  and  years). As can be seen from the second half of the table, the individuals of the seventh threshold faced differentiated types of chronic diseases during their lives. While the average number of addictions is only , problems related to bones and joints are much more common ( ). Some expected differences between treated and control groups also appear (the physically treated declaring more bone/joint or pulmonary problems; the psychosocially treated more psychological issues). Yet some others are less intuitive (for instance, the physically treated group declares facing cancers less often than the control group). This is explained first by the fact that no specific period of time is targeted in these simple statistics: these cancers may consequently happen during childhood, or during the working life but before an individual could reach the treatment threshold, in which case facing such issues early (relative to the treatment onset) most likely reduces the probability of being treated, especially in physical jobs.

Empirical analysis

Econometric strategy

The general framework of the difference-in-differences methodology is given by Equation (1) [START_REF] Angrist | Mostly harmless econometrics: an empiricist's companion[END_REF]:

$E[Y_{i} \mid T_{i}=1] - E[Y_{i} \mid T_{i}=0] = \underbrace{E[Y^{1}_{i} - Y^{0}_{i} \mid T_{i}=1]}_{ATT} + \underbrace{E[Y^{0}_{i} \mid T_{i}=1] - E[Y^{0}_{i} \mid T_{i}=0]}_{\text{selection bias}}$ (1)

The left-hand side member gives the observed performance difference between the treated and control groups. The first right-hand side member is the Average Treatment Effect on the Treated (ATT), and the far right-hand side member is the selection bias. The latter equals 0 when the potential performance without treatment ($Y^{0}_{i}$) is the same whatever the group to which one belongs (independence assumption): $E[Y^{0}_{i} \mid T_{i}=1] = E[Y^{0}_{i} \mid T_{i}=0]$.

In practical terms, the estimation of the difference-in-differences for individual $i$ and times $t_{0}$ (baseline) and $t_{1}$ (follow-up) relies on the fixed-effects, heteroskedasticity-robust Within panel data estimator 12 for the estimation of Equation (2), which explains the mean number of chronic diseases ($CD_{it}$):

$CD_{it} = \beta_{0} + \beta_{1} post_{t} + \beta_{2} T_{i} + \beta_{3} (T_{i} \times post_{t}) + X_{it}'\beta_{4} + \mu_{i} + \lambda_{t} + \varepsilon_{it}$ (2)

12 It is also possible to estimate such a specification using the Ordinary Least Squares estimator and group-fixed unobserved heterogeneity terms. The results should be relatively close [START_REF] Givord | Méthodes économétriques pour l'évaluation de politiques publiques[END_REF], which has been tested and is the case in this study. Yet, panel data estimators appear to be the most stable because of the increased precision of the individual fixed effects in comparison to group-fixed effects, and thus have been preferred here.

$post_{t}$ is a dummy variable taking value 1 if the period considered is the follow-up period $t_{1}$; $T_{i}$ is a dummy variable for the treatment (taking value 1 when individual $i$ is part of the treated
In order to satisfy the independence assumption, i.e., to reduce the ex-ante differences between treated and control groups as much as possible and thus handle the selection bias existing in the sample, I perform a matching method prior to the difference-in-differences setup using pre-treatment characteristics ( ) related to health status and employment elements, so that . A Coarsened Exact Matching method is implemented (CEM - [START_REF] Blackwell | cem: Coarsened Exact Matching in Stata[END_REF]. The main objective of this methodology is to allow the reduction of both univariate and global imbalances between treated and control groups according to several pre-treatment covariates [START_REF] Iacus | Matching for Causal Inference Without Balance Checking[END_REF]. CEM divides continuous variables into different subgroups based on common empirical support and can also regroup categorical variables into fewer, empirically coherent items. It then creates strata based on individuals (treated or controls) achieving the same covariate values and match them accordingly by assigning them weights 13 (unmatched individuals are weighted ). This offers two main advantages compared to other matching methods. It helps in coping effectively with the curse of dimensionality by preserving sample sizes: coarsening variables in their areas of common empirical support ensures a decent number of possible counterfactuals for each treated observation in a given stratum, and therefore decreases the number of discarded observations due to the lack of matches. In addition, CEM reduces the model dependence of the results [START_REF] Iacus | Matching for Causal Inference Without Balance Checking[END_REF]). Yet, this matching method is still demanding in terms of sample size, and only pre-treatment variables (i.e. variables determined before the exposure to detrimental working conditions) must be chosen 14 . 4.2. Matching variables and controls 13 The weight value for matched individuals equals , with representing the sample size for respectively the treated ( ) and control ( ) groups in stratum and the total sample sizes for both groups. 14 The data management has been done using SAS 9.4. The econometric strategy is implemented in Stata 11 using respectively the Coarsened Exact Matching (CEM) package and the "xtreg" procedure. Some robustness checks have also been conducted using the Diff package and the "regress procedure". Matching pre-treatment variables are chosen so that they are relevant in terms of health status and status determination in the labour market, in addition to helping cope with the (self- )selection bias (individuals sustaining high levels of exposure are bound to be particularly resilient or, in contrast, particularly deprived from better opportunities in the labour market). 
In practice, individuals are matched according to their: year of entry into the labour market (in order to get rid of temporal heterogeneity related to generation/conjuncture effects); gender [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]; education level (four levels: no education, primary or secondary, equivalent to bachelor degree and superior); health status before labour market entry (heavy health problems and handicaps), to have a better assessment of their initial health status and to cope with endogenous sorting in the labour market; and important events during childhood, aggregated into two dummy variables (on the one hand, heavy health problems of relatives, death of a relative, separation from one or more parents; on the other hand, violence suffered from relatives and violence at school or in the neighbourhood), as such childhood events are clearly likely to impact early outcomes in terms of health status [START_REF] Case | The lasting impact of childhood health and circumstance[END_REF][START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF]. Matching the samples on such variables is bound to reduce the initial heterogeneity existing between the treated and control groups, as well as to limit the selection bias into employment and into different degrees of exposure, as part of the individuals' resilience to work strains is accounted for, notably by proxy variables for their initial health capital. After reaching the treatment threshold, workers can still be exposed to varying levels of working conditions. This possibility of post-treatment exposures is accounted for by a control variable in the difference-in-differences models (taking the value  at baseline and ,  or  depending on whether the individual has been exposed, respectively, hardly, a little or a lot to detrimental work strains during this post-treatment period). Health habits are also controlled for in the difference-in-differences models by adding a variable indicating whether individuals, at any given time, are daily smokers or not. The idea behind this is that health-related behaviours (such as tobacco and alcohol consumption, being overweight and other health habits) are bound to be correlated with each other as well as with exposures to work strains and with the declaration of chronic diseases, all of which induce biased estimates when unaccounted for. This variable takes the value  when an individual is not a daily smoker and the value  if he/she is, in either the baseline or follow-up periods.

Matched descriptive statistics

The naive results (descriptive statistics presented in Section 3.3 and the unmatched difference-in-differences results presented in Section 5.1) tend to confirm the possibility of a (self-)selection bias in the sample, suggesting that people are likely to choose their job while considering their own initial health status; in any case, the results justify an approach that takes this possibility into account. In order to minimize this selection process, a matching method is used prior to the difference-in-differences models. Table 9 gives a description of the same sample used in , which was presented earlier (for comparison purposes), after CEM matching.
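Once the matched sample and its weights are available, the two-period difference-in-differences of Equation (2) can be illustrated as below. With one baseline and one follow-up observation per individual, the within (fixed-effects) estimator of the interaction coefficient is numerically equivalent to a first-difference regression; weighting that regression by the CEM weights is one simple way of combining the matching and difference-in-differences steps. The sketch uses simulated data and hypothetical variable names, and is not necessarily the exact procedure implemented in the study (which relies on dedicated panel data procedures).

```python
# Illustrative matched difference-in-differences on simulated data (hypothetical
# names): a first-difference regression weighted by the CEM weights, which, with
# two periods, mirrors the within estimator of the treatment x follow-up term.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 3000
treated = rng.integers(0, 2, n)
cd_baseline = rng.poisson(0.3, n).astype(float)          # chronic diseases at baseline
cd_followup = cd_baseline + 0.1 + 0.25 * treated + rng.normal(0, 0.5, n)

d = pd.DataFrame({
    "delta_cd": cd_followup - cd_baseline,               # follow-up minus baseline
    "treated": treated,                                  # equals treated x post in differences
    "becomes_daily_smoker": rng.integers(0, 2, n),       # change in the smoking control
    "post_treatment_exposure": rng.integers(0, 3, n),    # exposure level after the threshold
    "cem_weight": np.where(treated == 1, 1.0, rng.uniform(0.2, 2.0, n)),  # stand-in weights
})
X = sm.add_constant(d[["treated", "becomes_daily_smoker", "post_treatment_exposure"]])
fit = sm.WLS(d["delta_cd"], X, weights=d["cem_weight"]).fit(cov_type="HC1")  # robust SEs
print("matched difference-in-differences estimate:", round(float(fit.params["treated"]), 3))
```

Because the outcome is differenced, all time-invariant individual characteristics (the individual unobserved heterogeneity) drop out, and only the treatment dummy and the changes in the time-varying controls remain on the right-hand side.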
The matching method succeeds in reducing the observed structural heterogeneity between the treated and control groups for every single pretreatment covariate. Residual heterogeneity still exists, namely for the year of entry into the labour market and age, but it is shown to be minor and, in any case, statistically nonsignificant (difference of less than a month in terms of labour market entry year and of approximately a quarter for age). It is also interesting to note that initial health status differences are also greatly reduced and that larger negative follow-up differences between treated and control groups can now be observed, making the hypothesis of a detrimental impact of working conditions on health status more credible. Results Naive results The results for unmatched difference-in-differences naive models for the five thresholds ( to ) are presented in rows in Table 35, Table 36 andTable 37 (Appendix 9), and can be interpreted as differences between groups and periods in the mean numbers of chronic diseases. Despite not taking into account for the possibility of endogenous selection in the sample nor differences in observable characteristics between the two groups' structures, these models do take care of unobserved, individual-fixed heterogeneity. As expected after considering the sample description given in Table 7, unmatched baseline differences (i.e. differences in chronic diseases between treated and control populations before labour market entry) display statistically significant negative differences between future physically treated and controls in men (Table 35). These differences cannot be witnessed in women or for the psychosocial treatment (Table 36). The possibility of endogenous sorting hence cannot be excluded. The positive follow-up differences (i.e. differences in the numbers of chronic conditions between treated and control populations after the treatment period and not accounting for initial health status) indicate that the treated population reported higher numbers of chronic diseases than the control group in average. Logically, these differences are growing in magnitude as the exposure degree itself becomes higher. Difference-in-differences results (i.e. the gap between treated and control populations, taking into account for differences in initial health status) suggest a consistent effect of detrimental work strains on the declaration of chronic conditions, which increases progressively as exposures intensify. While physical strains appear to play a role on the declaration of chronic diseases straight from in women and in men, effects after psychosocial strains seem to require higher levels of exposure to become statistically significant: in men, first significant differences appear from ( in women). For the global treatment (Table 37), first significant differences happen for in women and in men. These effects do not turn out to be short term only, as the differences tend to grow bigger when considering later periods of time. Main results The results for matched difference-in-differences models for the five thresholds are provided in Table 10, Table 11 and Table 12 below. These results, relying on matched samples, take care of the selection biases generated by endogenous sorting in the labour market and observed heterogeneity, as well as unobserved individual-fixed and time-varying heterogeneities as a result of using difference-in-differences frameworks. 
Interpretation: ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups, respectively, before and after the treatment. The diff.-in-diff. column shows the results for the second differences (i.e., the difference between follow-up and baseline differences). The mean chronic diseases column indicates the mean number of chronic diseases of the treated population in the health period considered. The N column gives the sample sizes for, respectively, the treated and total populations. The last column denotes the percentage of the initial sample that found a match for, respectively, the treated and control groups. Interpretation: ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups, respectively, before and after the treatment. The diff.-in-diff. column shows the results for the second differences (i.e., the difference between follow-up and baseline differences). The mean chronic diseases column indicates the mean number of chronic diseases of the treated population in the health period considered. The N column gives the sample sizes for, respectively, the treated and total populations. The last column denotes the percentage of the initial sample that found a match for, respectively, the treated and control groups. Interpretation: ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups, respectively, before and after the treatment. The diff.-in-diff. column shows the results for the second differences (i.e. the difference between follow-up and baseline differences). The mean chronic diseases column indicates the mean number of chronic diseases of the treated population in the health period considered. The N column gives the sample sizes for, respectively, the treated and total populations. The last column denotes the percentage of the initial sample that found a match for, respectively, the treated and control groups. It should be noted that around 90% of the initial sample is preserved after matching in physical and psychosocial samples, and that at least 80% of the sample is preserved for the global treatment (because of the higher number of treated). Matching the samples on pretreatment variables consistently succeeds in reducing initial health status gaps between treated and control groups, to the point where none of them are still present in the matched results. It appears that men are clearly much more exposed to detrimental working conditions than women, especially for physically demanding jobs (with an average of percentage points ( ) more in men than in women), but also to a lesser extent for psychosocial risk factors in men). In comparison to women, the gender gap regarding all working conditions (global treatment) is approximately in men. A clear impact of exposures to work strains on the declaration of chronic diseases can be observed in the difference-in-differences (columns 5 and 6). Treated workers indeed seem to suffer from a quicker degradation trend in their health status than their respective control groups. 
This trend exists between levels of exposure (thresholds), but it is also suggested by the evolution of the number of chronic diseases by health status observation period, even though these differences in means are unlikely to be statistically significant. This main result holds for all treatment types and for both genders, and it tends to demonstrate possible long-term effects of exposures rather than only short-term consequences. In the physical sample, the first significant consequences in terms of health status degradation can be seen in women, starting from (i.e., after 12 years of single exposure or 6 years of simultaneous exposures), while this is the case much later in men, at (resp. after at least 18 or 9 years of exposure). Between and , the differences between treated and control groups in the mean number of chronic diseases in women increase from to ; while in men the differences between and range from to . In order to have an idea of the meaning of these differences, it is possible to compare them to the mean number of chronic diseases in the treated population after the treatment occurred, given in column 7. In physically exposed women (resp. men), exposures to work strains may account for 20% to 25% of their chronic diseases (resp. a little more than 10%). Psychosocial strains have a more homogenous initial impact on the declaration of chronic diseases, with sizeable health status consequences happening at in men (resp. 14 or 7 years of exposure) and in women (resp. 16 or 8 years of exposure). The difference in women (resp. in men) goes from in ( in ) to in ( in ). Thus, in psychosocially exposed women (resp. men), approximately 21% of chronic diseases in the treated population can be explained by psychosocial strains (resp. 17%). For the global treatment, effects of exposures start at in women (resp. in men) and go from to (resp. to in men). According to the results for this global type of exposure, 20% (resp. 10% to 15%) of exposed women's (resp. men's) chronic diseases come from combined physical and psychosocial job strains. The effects of the global treatment appear weaker in terms of onset and intensity, which is most likely due to the fact that the exposure thresholds are easier to reach because of the greater number of working condition indicators considered. Nevertheless, even though women are less exposed than men to work strains, it seems that their health status is more impacted by them. Robustness checks Common trend assumption In order to ensure that the results obtained using a difference-in-differences framework are robust, one needs to assess whether the treated and control groups share a common trend in terms of the number of chronic diseases before all possible exposures to detrimental working conditions, i.e., before labour market entry. samples for . The first panel represents the baseline period and stops at the mean year of labour market entry for this sample. From all three graphs, one can see that both treated and control groups share the same trend in terms of a rise in chronic diseases. This is no longer the case after labour market entry. The common trend hypothesis seems to therefore be corroborated. It should be noted that the test results on unmatched samples (available upon request) are rather close, but they are not as convincing. Model dependency I also test whether the results obtained using matched difference-in-differences could be obtained more easily by relying only on a matching method. 
Yet, because CEM is not in itself an estimation method, I set up a simple, heteroskedasticity-robust specification estimated by Ordinary Least Squares on matched data with the same control variables (specification 3), followed by a comparison of the results with those obtained through difference-in-differences using specification 2 (Table 38, Appendix 11).
(3)
The results for all three samples on indicate that, in terms of statistical significance, the detrimental impact of exposure to work strains on the number of chronic diseases is confirmed. This is not very surprising, as CEM has the particularity of reducing the model dependence of the results [START_REF] Iacus | Matching for Causal Inference Without Balance Checking[END_REF]. Yet, the magnitude of the effect is generally somewhat larger in OLS. This could be explained by the fact that these simple OLS regressions neither account for initial differences in terms of health status, nor take into account individual and temporal unobserved heterogeneity, whereas these two phenomena go in opposite directions. As a consequence, difference-in-differences results are preferred here because of their increased stability and reliability. It should be noted that, logically, single exposures induce a weaker effect on the number of chronic diseases than poly-exposures. All the results still converge towards a positive and statistically significant effect of exposures on the declaration of chronic diseases. In addition, the differences in intensity that can be observed between individuals exposed to 16 years of single exposures and those exposed to 8 years of simultaneous exposures do not appear to be statistically significant.
Health habits
Even though a part of the role that health habits play in the relationship between working conditions and health (possibly generating endogeneity issues) is accounted for by controlling for the evolution of tobacco consumption in the difference-in-differences, other behaviours are not taken into account directly (because they cannot be reconstructed in a longitudinal fashion using Sip data), even if they are likely correlated with smoking habits. Table 40 (Appendix 13) presents an exploratory analysis of the differences in wages and risky health-related behaviours in 2006, on , between treated and control groups for all three treatments. In unmatched samples, important differences can be observed in terms of monthly wage, regular physical activity, alcohol and tobacco consumption and being overweight. The treated group on average earns less and does less sport but has more health-related risky behaviours than the control group. In matched samples, no statistically significant difference remains between the two groups in 2006 except for wages. This indicates that the treatment effects presented here should not pick up specific effects of health-related behaviours, except possibly those related to health investments (as the control groups are generally richer than the treated groups).
Gender gap
Important gender differences appear to exist in terms of the effects of a certain degree of exposure to detrimental working conditions. To try to explain these differences, an exploratory analysis specifically on year 2006 has been conducted in Appendix 14. First, men and women may be employed in different activity sectors, the latter being characterized by different types of exposures to working conditions (Table 41).
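To make the model-dependency check above concrete, a minimal Stata sketch of the coarsened exact matching step followed by the robust-OLS specification (3) and a difference-in-differences analogue could look as follows. The variable names (treated, post, chronic_dis, id and the controls) are purely illustrative rather than the actual Sip variables, and -cem- is the user-written command by Blackwell, Iacus, King and Porro, not an official Stata command.

    * Coarsened exact matching on pre-treatment characteristics (illustrative variables)
    cem sex educ entry_year baseline_health childhood_event, treatment(treated)
    * -cem- generates a matched-sample indicator (cem_matched) and matching weights (cem_weights)

    * Specification (3): heteroskedasticity-robust OLS on the matched sample
    regress chronic_dis treated age agesq educ sex smoker ///
        if cem_matched == 1 [iweight = cem_weights], vce(robust)

    * Difference-in-differences analogue on the same matched sample
    regress chronic_dis i.treated##i.post age agesq educ sex smoker ///
        if cem_matched == 1 [iweight = cem_weights], vce(cluster id)

Carrying the CEM weights into both regressions keeps the treated and control groups comparable on the matching variables, which is what allows the two specifications to be contrasted on the same footing.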
As expected, very large differences exist in the gender repartition as well as work strain types encountered within activity sectors. Thus, it is likely that men and women are not exposed to the same types of strains. Table 42 confirms this intuition and indicates that, for at least five out of ten working conditions indicators, a statistically significant difference exists between men and women in terms of repartition into strains. As a consequence, the explanation for this gender-gap in working conditions and health is most likely twofold. First, there might be declarative social heterogeneity between men and women. Both may not experience an objectively comparable job situation in the same way, just as they may not experience an objectively comparable health condition in the same way [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. However, what could also be true is that men and women may not be exposed to the exact same typology of working conditions within a certain treatment. Even though belonging to a specific treatment group ensures a quantitatively similar exposure (in terms of number of strains at a given time and in terms of lengths of exposures), it does not completely ensure that the types of strains are qualitatively equivalent, which in turn could explain part of the observed differences. Yet, this hypothesis should be partially relaxed by the use of two different treatment types (one handling physical demands and another for psychosocial risk factors). Discussion and conclusion In this study, I use French retrospective panel data to highlight links that physical and psychosocial working conditions -separately and combined -have with chronic diseases in exposed males and females. Workers facing gradually increasing strains in terms of duration or simultaneity of exposure are more frequently coping with rising numbers of chronic diseases. Using combined difference-in-differences and matching methods, the empirical strategy helps to handle both (self-)selection in the labour market based on health status and other observable characteristics as well as unobserved individual and temporal heterogeneity. Based on a career-long temporal horizon for exposures and health status observation periods, I find major differences in health conditions between treated and control groups, which are very likely the result of past exposures to work strains. To my knowledge, this is the first paper to work on both the simultaneous and cumulative effects of two distinct types of work strains and their combination with such a large temporal horizon, while acknowledging the inherent biases related to working conditions. However, the paper suffers from several limitations. First, working with retrospective panel data and long periods of time leads to estimates being at risk of suffering from declaration biases. The individuals are rather old at the date of the survey, and their own declarations in terms of working and health conditions are therefore likely to be less precise (recall biases) or even biased (a posteriori justification or different conceptions according to different generations). 
Even if it is impossible to deal completely with such a bias, matching on entry year into the labour market (i.e., their generation) and on education (one of the deciding factors when it comes to memory biases) should help in reducing recall heterogeneity. Also, simple occupational information notably tends to be recalled rather accurately, even over longer periods [START_REF] Berney | Collecting retrospective data: Accuracy of recall after 50 years judged against historical records[END_REF]. Yet, justification biases most likely remain (for instance, ill individuals may declare more readily that they faced detrimental working conditions because of their health condition), especially considering the declarative nature of the data. Second, potential biases remain in the estimations. I work on exposures happening during the first half of the professional career (i.e., to relatively young workers), at a time when individuals are more resilient to these strains. This means that the impact found in this study would most likely be higher for an equivalent exposure level if an older population were targeted. I am also unable to completely account for possible positive healthcare investments in the treated population, because if the most exposed are also better paid (hedonic price theory, [START_REF] Rosen | Hedonic Prices and Implicit Markets: Product Differentiation in Pure Competition[END_REF]), this wealth surplus could be used for relatively more health capital investments. Alternatively, the treated and control groups may have different health habits. Hence it is possible that the mean results I find are once again biased. Yet, even though wealth-type variables are endogenous, this hypothesis has been tested empirically with an alternative specification in the study by [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF] and such variables were found to be irrelevant. Also, health-related risky behaviours are at least partly accounted for by implementing a variable for tobacco consumption in the difference-in-differences model. Another important point about potentially remaining biases in the estimations is that time-varying individual unobserved heterogeneity is still unaccounted for. For instance, a specific unobserved shock impacting both exposures to work strains and chronic diseases, with heterogeneous effects depending on individuals, cannot be accounted for (one can think, for example, of an economic crisis, which usually degrades work quality on average and may also deteriorate individuals' health status - in this particular case, the estimations are at risk of being biased upwards). One must thus be careful concerning the causal interpretation of the results. Third, because of the method I use and the sample sizes I am working with, it is not possible to clearly analyse the potential heterogeneity in the effect of working conditions on health status across demographic and socio-economic categories, even though this mean effect is shown to vary [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF][START_REF] Muurinen | The economic analysis of inequalities in health[END_REF]. Fourth, part of the selection process into a certain level of exposure possibly remains.
Considering that the sample is matched on elements of human and health capital and because I consider only homogeneous individuals present in the survey for at least 38 years (who worked at least 10 years and for whom the post-treatment exposures are controlled for), I should have rather similar individuals in terms of resilience to detrimental working conditions, i.e., with similar initial abilities to sustain a certain level of severity of exposure. So, to some extent at least, the selection into a certain level of treatment is acknowledged. Yet, it is impossible to directly match the samples depending on whether or not they reached a certain level of treatment (because it is endogenous). Because of that, it is likely that some degree of selection remains (notably, only the "survivors" are caught in the data, which possibly induces downward-biased estimations). It should also be noted that part of the heterogeneity of the results between men and women can still be explained by declarative social heterogeneity regarding their working and health conditions as well as by qualitative differences in their exposures, both elements which cannot really be accounted for using such declarative data. Finally, I use a wide definition of chronic conditions as an indicator for health status. This indicator does not allow for direct comparisons with the literature (commonly used indicators, such as self-assessed health status or activity limitations, are not available on a yearly basis). Yet, I believe that it may represent a good proxy of general health status while at the same time being less subject to volatility in declarations compared to self-assessed health (i.e., more consistent). These results justify more preventive measures being enacted early in individuals' careers, as it appears that major health degradations (represented by the onset of chronic conditions) tend to follow exposures that occur as early as the first half of the career. These preventive measures may first focus on workers in physically demanding jobs while also targeting workers facing psychosocial risk factors, the latter still being uncommon in public policies. These targeted schemes may benefit both society in general (through higher levels of general well-being at work and reduced healthcare expenditures later in life) and firms (more productive workers and fewer sick leaves). It notably appears that postponing the legal age of retirement must be backed up by such preventive measures in order to avoid adverse health effects linked to workers being exposed longer, while also taking into account both types of working conditions (which is not the case in the 2015 French pension law). Today, the human and financial costs of exposures to detrimental working conditions seem undervalued in comparison to the expected implementation cost of these preventive measures.
Érudite) for their useful advice. Finally, I thank the two anonymous reviewers of the Health Economics journal.
Introduction
Traditional structural reforms for a pay-as-you-go pension system in deficit rely on lower pensions, higher contributions or increases in the retirement age. The latter was favoured by the indirect means of increases in the contribution period required to obtain a full-rate pension (Balladur 1993 and Fillon 2003 reforms) or by the direct increase in the legal age of retirement (Fillon 2010 reform), including a gradual transition from 60 to 62.
However, the issue of funding pensions obscures other features of the pension system that may play a role in health status and ultimately in the finances of the health insurance branch and the management of long-term care. Exposure to harsh working conditions and the impact of ill health on the employment of older workers, notably, are already well documented in France. The effect of transitioning into retirement has not received the same attention in the French economic literature (besides [START_REF] Blake | Collateral effects of a pension reform in France[END_REF]). Retirement in France mostly remains an absorbing state (relatively few employment situations of individuals combining retirement benefits and paid jobs). It can thus be seen in many cases as an irreversible shock. The sharp transition into retirement can often affect perceived health status, but the nature of the causal relationship between retirement and health can also be bidirectional due to retirement endogeneity. Before retirement, health status already appears as one of the most important non-monetary drivers in the trade-off between work and leisure in older workers [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF][START_REF] Lindeboom | Health and Work of Older Workers[END_REF]. Although the nature of the relationship between health and employment appears obvious, studying causal impacts is complex [START_REF] Strauss | Health, Nutrition and Economic Development[END_REF]. The retirement decision may free individuals from a job strain situation. Examining the relationship between work and health, the former can indeed be beneficial to the latter, but the arduous nature of certain working conditions may also deteriorate health. The retirement decision is indeed partly motivated by health status, healthier individuals tending to remain in employment. In contrast, a poor health condition reduces labour supply and causes early exit from the labour market. Many studies have highlighted the existence of a healthy worker effect, testifying to the selection of the most resilient workers on the labour market. A poor health status may speed up the retirement decision [START_REF] Alavinia | Unemployment and retirement and ill-health: a crosssectional analysis across European countries[END_REF][START_REF] Jones | Sick of work or too sick to work? Evidence on selfreported health shocks and early retirement from the BHPS[END_REF]: notably, [START_REF] Dwyer | Health problems as determinants of retirement: Are selfrated measures endogenous[END_REF] show that sick workers can bring forward their retirement plans by one to two years. Using ECHP (European Community Household Panel) data, [START_REF] García-Gómez | Institutions, health shocks and labour market outcomes across Europe[END_REF] studies the effect of a health shock on employment in nine European countries. The results obtained from a matching method suggest that health shocks have a negative causal effect on the probability of being employed. People with health problems are more likely to leave employment and to transition to disability. Moreover, it is difficult to isolate the health-related effects of retirement from those of the natural deterioration rate related to ageing, and many unobservable individual characteristics are also able to explain not only retirement decision behaviours, but also health status indicators (subjective life expectancy, risk aversion behaviours or the disutility of labour supply).
Finally, retirement, considered as non-employment, may cause a feeling of social utility loss, which can lead to declining cognitive functions and a loss of self-esteem. In this paper, we study the role of retirement on several physical and mental health status indicators. In order to deal with the inherent endogeneity biases, we set up an instrumental variable approach relying on discontinuities in the probability of retiring generated by legal incentives at certain ages as a source of exogeneity. Thanks to the Health and Professional Path survey (Santé et Itinéraire Professionnel - Sip) dataset, we are able to control for a variety of covariates, including exposures to detrimental working conditions throughout the whole career. We also acknowledge the likely heterogeneity of the effect of retirement and the possible mechanisms explaining its effects on health status. To our knowledge, no study evaluates the effect of the retirement decision on the physical and mental health conditions of retirees after taking into account the biases associated with this relationship as well as exposures to working conditions and the nature of the entire professional career. The paper is organized as follows. Section 1 is dedicated to an empirical literature review of the relationships between retirement and health status. Sections 2 and 3 then describe the database, Section 4 describes the empirical strategy, Section 5 presents the results and Section 6 concludes.
Background and literature
French retirees have a rather advantageous relative position compared with other similar countries. The retirement age is comparatively lower (62 years, while the standard is 65 in most other countries such as Japan, Sweden, the U.K., the U.S. or Germany). The share of public expenditure devoted to the pension system is 14%, with only Italy devoting a larger share of its wealth. The net replacement rate is 68%, which places France among the most generous countries, with Italy and Sweden. In contrast, the Anglo-Saxon countries relying on funded schemes have lower replacement rates, and the share of individual savings in retirement is much higher than in countries where pension systems are of the pay-as-you-go type. This favourable position also holds when considering life expectancy at 65 or poverty levels. The life expectancy of French people aged 65 or over is systematically higher than that observed in other countries (except for Japanese women, who can expect to live 24 years compared to 23.6 years in France). The poverty rate among the elderly is the lowest among all the countries mentioned here (3.8% in France compared to 12.6% on average for the OECD). Even though the issue of the links between health and work has many microeconomic and macroeconomic implications, the French economic literature is still relatively scarce compared to the number of international studies on the subject [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. The deterioration of health status first contributes to changing preferences for leisure and decreases individuals' work capacity or productivity. The [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF][START_REF] Grossman | The human capital model[END_REF] model indicates that each individual has a health capital that depreciates with age.
Any health event affects the career path via potential stock effects (an instant exogenous shock) and the depreciation rate of this health capital, but also, more generally, via future investments in human capital (primary or secondary prevention actions in health). Disease can lead individuals to reallocate the time they spend between work and leisure. An altered health condition therefore reduces labour supply. Conversely, poor working and employment conditions can affect health status and generate costs for the company (related to absenteeism). Stressful work situations can also generate an increase in healthcare consumption and in the number of sickness daily allowances. The specific relationship between non-employment and health has received very little attention in France, unlike in the rest of Europe [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. In general, job loss is associated with a deterioration of well-being. Persistent unemployment and recurrent forms of non-employment have a deleterious effect on health, for example on overweight and alcohol consumption [START_REF] Deb | The effect of job loss on overweight and drinking[END_REF]. Unemployment and inactivity happening early in the professional life can promote the onset of depressive symptoms thereafter, as shown by Mossakowski in 2009 on U.S. longitudinal data. Furthermore, job loss increases mortality [START_REF] Sullivan | Job Displacement and Mortality: An Analysis Using Administrative Data *[END_REF]. Finally, many studies agree on a negative effect of unemployment on health [START_REF] Böckerman | Unemployment and self-assessed health: evidence from panel data[END_REF][START_REF] Browning | Effect of job loss due to plant closure on mortality and hospitalization[END_REF]Eliason and Storrie, 2009a, 2009b;[START_REF] Kalwij | Health and labour force participation of older people in Europe: what do objective health indicators add to the analysis?[END_REF]. The effects of retirement on health status are not trivial. Two competing hypotheses can be advanced. Retirement can first free individuals from job strain situations and may improve their health condition in the short run. This virtuous circle will be sustainable provided that individuals have a capacity to invest in their health (income effect). Many international empirical studies show that retirement is beneficial to health status [START_REF] Blake | Collateral effects of a pension reform in France[END_REF][START_REF] Charles | Is Retirement Depressing?: Labor Force Inactivity and Psychological Well-Being in Later Life[END_REF][START_REF] Coe | Retirement effects on health in Europe[END_REF][START_REF] Grip | Shattered Dreams: The Effects of Changing the Pension System Late in the Game*: MENTAL HEALTH EFFECTS OF A PENSION REFORM[END_REF][START_REF] Insler | The Health Consequences of Retirement[END_REF][START_REF] Neuman | Quit Your Job and Get Healthier? The Effect of Retirement on Health[END_REF]. Coe and Zamarro (2011) measure the health effect of retirement and conclude that it decreases the likelihood of reporting poor perceived health (35%) after controlling for reverse causality. However, this effect is not observed with the two depression indicators. In the U.K., [START_REF] Bound | Estimating the Health Effects of Retirement[END_REF] found a positive but transitory health effect of retirement, only in men.
The retirement decision can also generate a loss of social role [START_REF] Kim | Retirement transitions, gender, and psychological well-being: a life-course, ecological model[END_REF], a reduction of social capital and therefore a deterioration in mental health, reinforced when living standards are negatively affected. Other studies also reach opposite results, including on mental health (cognitive abilities) [START_REF] Behncke | Does retirement trigger ill health?[END_REF][START_REF] Bonsang | Does retirement affect cognitive functioning[END_REF][START_REF] Dave | The effects of retirement on physical and mental health outcomes[END_REF][START_REF] Mazzonna | Aging, Cognitive Abilities and Retirement in Europe[END_REF][START_REF] Rohwedder | Mental Retirement[END_REF]. Overall, the positive effect of retirement on health status seems to prevail, except for cognitive abilities. To our knowledge, only very few studies have tried to work out the effect of transitioning into retirement on health in France, and they show that the retirement decision improves physical health for non-qualified workers.
Data
The individuals
In order to avoid too heterogeneous a sample, we select individuals aged 50-69 in 2010 for whom all the information needed in terms of pension and health status is available. Thus, we work on a sample of 4,610 individuals, 2,071 of whom are retired.
Descriptive statistics
The general descriptive statistics on the 50-69 year-old sample are available in Table 13. The first four columns give information about the whole sample, the fifth column ( ) gives the number of individuals belonging to the category in the row, and the last three columns give, respectively, the average in the retired and non-retired populations and the significance of the difference between the two. The most important element to notice in these simple descriptive statistics is that retirees apparently systematically self-report a worse general health condition and a better mental health status than non-retirees. Obviously, these raw statistics do not account for other characteristics, notably the 8-year difference in age between the two populations. Yet, 38% of the retired population declare poor levels of self-assessed health against 36% in the non-retired population, 50% a chronic disease (against 40%) and 26% being limited in daily activities (vs. 24%). These findings are not quite similar for mental health indicators, which indicate that the retired population suffers from fewer anxiety disorders (5%) and depressive episodes (6%) than the control group (resp. 8% and 9%). Exposure to harsh physical and psychosocial working conditions is much higher among retirees than among non-retirees, as it is likely that the last years of professional life are marked by greater exposures. Finally, retirees are more prone to having social activities such as associations, unions, religious or artistic activities (48% vs. 38%), have more physical activity (45% vs. 40%), are less often smokers (16% vs. 27%, most likely at least partly indicating a selection effect, the heaviest smokers having a shorter life expectancy) but are more overweight (60% vs. 52%) than the rest of the population. Each point represents the proportion of retirees in the sample at a given age (starting from less than of retirees at age 50 to at age 69). Each 5-year category from age 50 to 69 has been considered and fitted separately in order to identify eventual discontinuities in the growth of the proportion at specific ages.
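As a purely illustrative sketch of how such a figure can be built (the variable names retired and age are assumptions, not the actual Sip variables), the share of retirees at each age can be plotted with a separate linear fit in each 5-year band so that discontinuities at specific ages stand out:

    * Share of retirees at each age, fitted separately within each 5-year band
    preserve
    collapse (mean) p_retired = retired, by(age)
    gen band = 50 + 5*floor((age - 50)/5)      // 50-54, 55-59, 60-64, 65-69
    twoway (scatter p_retired age) ///
           (lfit p_retired age if band == 50) (lfit p_retired age if band == 55) ///
           (lfit p_retired age if band == 60) (lfit p_retired age if band == 65), ///
           ytitle("Proportion retired") xtitle("Age") legend(off)
    restore

Breaks between the fitted segments at specific ages are what motivates the choice of the identifying variables discussed below.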
As expected for the French case, three retirement ages seem to emerge as the most common, hence being the most effective cut points: ages 55 and 65, but mostly age 60, which corresponds to the legal threshold for the full-rate pension. Thus, while the proportion of pensioners is only about 45% of the sample at age 59, it amounts to more than 80% only a year later. Similar graphs specifically for men and women are available in Appendix 15 (Figure XI and Figure XII).
Empirical strategy
Biases
As evidenced in the literature, determining the effect of the retirement decision on retirees' health condition is not trivial. In fact, besides taking into account the natural deterioration rate of health capital related to ageing, estimates are subject to biases due to the endogeneity of the relationship between health status and retirement. Two major sources of endogeneity can be raised. The first is the existing two-way relationship between retirement and health status. In particular, the decision to retire taken by individuals depends on their initial health condition, leading to a health-related selection bias. The second is the unobserved factors influencing not only health status but also retirement. To the extent that individuals have different characteristics, notably in terms of subjective life expectancy, risk aversion preferences or disutility at work, the estimates are at risk of being biased.
Identifying variables approach
Advantages
To address these methodological difficulties, we set up an identifying variable method, the objective being to determine the causal effect of the retirement decision on retirees' health condition. The identification strategy relies on the use of legal norms following which individuals undergo a change (the decision to retire) or not, norms therefore regarded as sources of exogeneity [START_REF] Coe | Retirement effects on health in Europe[END_REF]. The general idea of this method lies in the exploitation of discontinuities in the allocation of a treatment (the retirement decision) related to laws granting incentives to retire at a certain age. To the extent that a full-rate legal retirement age exists in France (60 years old for this study, before the implementation of the Fillon reform in 2010), we use this indicator as the identifying variable for the retirement process. However, it is noteworthy that age, and more importantly reaching a certain age, is not the only element predicting the retirement decision. Using a minimum age as a source of exogeneity, the instrumental variable method is relatively close to a Regression Discontinuity Design (RDD) on panel data, the major difference between instrumental variables and RDD being that the latter makes it possible to establish different trends before and after reaching the threshold, which is not possible with a conventional instrumental variables method [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF]. Nevertheless, instrumental variables allow greater flexibility in estimations and do not focus exclusively on very short-term effects of retirement on health.
Hypotheses
The use of instrumental variable methods is based on two assumptions widely discussed in the literature. The first, the relevance assumption, requires that the identifying variable be correlated with the endogenous variable.
In our case, the identifying variable being the legal age of retirement at the full rate, it appears intrinsically relevant to explain the decision to retire. The second, the validity assumption, assumes that the identifying variable is not correlated with the error term. To the extent that the legal age of retirement is decided at the level of the state and is not conditioned by health status, this hypothesis, although not directly testable, does not appear particularly worrying, especially considering that this empirical strategy is very widely used in the literature. It is also to be noted that reaching a certain particular age (for instance age 60) should not in itself generate a discontinuity in the age-related health status degradation trend.
Identifying variables
We consider, in the French context, three possible significant ages of retirement suggested by the legislation and by the data itself: ages 55, 60 and 65. Age 55 is the first significant age inducing early retirements. Before the Fillon 2010 reform, age 60 is the legal age for a full pension and exhibits the greatest discontinuity in the number of retirees. Finally, we also test age 65 to account for late retirement decisions. As evidenced in Figure VII below, 37% of retirees retired precisely at age 60, 9% at 55 and 5% at age 65. Note that, for the rest of the paper, only the fact of being aged 60 or older will be used as an identifying variable, except in some specific robustness checks.
Estimation
We first consider a simple specification relying on a binomial probit model, explaining health status in 2010 (vector , for health indicator and individual ) by the self-declared retirement status ( ), controlling the model by a vector of other explanatory variables ( ):
(1)
However, for the reasons mentioned above, this specification (1) does not appear satisfying enough to determine a causal effect of retirement on health status. This relationship is characterised by endogeneity biases related to reverse causality and unobserved heterogeneity. Formally, our identification strategy is then based on the fact that, even if reaching or exceeding a certain age does not fully determine the retirement status, it causes a discontinuity in the probability of being retired at that age. Therefore, in order to exploit this discontinuity, we also estimate the following equation (2):
(2)
The dummy variable takes the value when individual is at least years old. Consequently, we estimate simultaneously a system of two equations (3):
(3)
Empirically, to estimate this simultaneous two-equation system, we set up a bivariate probit model, estimated by maximum likelihood. The use of such models is justified by the fact that both the explained and explanatory variables are binary indicators [START_REF] Lollivier | Économétrie avancée des variables qualitatives[END_REF]. This method is equivalent to conventional two-stage methods in the linear case.
(4)
We simultaneously explain the probability of being retired and health status. We introduce the vector representing the identifying variables allowing the model's identification (4). These variables take the form of dummies, taking value if individual is at least years old and otherwise, the threshold depending on the legal retirement age considered. Taking the example of the full-rate age of retirement (60), the corresponding identifying variable will take value if individual is aged 60 or over, and otherwise (the other thresholds, 55 and 65, are determined in the same manner).
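A minimal sketch of this joint estimation, using Stata's -biprobit- command and purely hypothetical variable names (retired, limitations, over60 and the controls described in the next paragraph), could look as follows; it illustrates the approach rather than reproducing the exact specification used here.

    * Identifying variable: dummy for being aged 60 or over
    gen over60 = (age >= 60)

    * Recursive bivariate probit: retirement equation and health equation
    * estimated jointly by maximum likelihood (illustrative variable names)
    biprobit (retired = over60 age agesq male educ_mid educ_high child ///
                        pub_sector self_emp longjob fragmented expo_phys expo_psy) ///
             (limitations = retired age agesq male educ_mid educ_high child ///
                        pub_sector self_emp longjob fragmented expo_phys expo_psy), ///
             vce(robust)
    * rho, the correlation between the two error terms, signals whether the
    * retirement dummy is endogenous to the health outcome

The same call would be repeated for each of the five health indicators, and with an over55 or over65 dummy replacing over60 in the corresponding robustness checks.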
Bivariate probit models also assume that the error terms of the two equations are correlated and jointly follow a bivariate normal distribution. Regarding our variable of interest, we use a question specifying the current occupation status at the time of the 2010 survey, and build a dummy variable equal to if the individual has reported being retired or pre-retired at this date and otherwise. We control all our results for sex, age, age squared (age plays an important role in determining health status, and this role is not necessarily linear throughout the entire life), educational level in three dummies (more educated individuals are generally better protected in terms of health status than the less educated), having had at least one child, and activity sector (public, private or self-employed, when applicable), as it is likely that some sectors are more protective than others. Relying on the retrospective part of the data, we include indicators for having spent the majority of the career in long-term jobs of more than 5 years and, finally, an indicator for career fragmentation (these are especially important because of their influence not only on health status but also on the age of retirement). We are also able to reconstruct, year by year, the professional path (including working conditions) of individuals from the end of their initial studies to the end of their career. Exposures to physical and psychosocial working conditions during the whole career (the fact of having been exposed 20 years to single strains or 10 years to multiple simultaneous strains of the same type) are thus accounted for. The hypothesis behind this is that individuals having faced such strains at work should be even more relieved by retirement, hence inducing heterogeneity in the effect of retirement on health status. (Footnote 15: The data management has been done using SAS 9.4. The econometric strategy is implemented in Stata 11 using the "probit" and "biprobit" commands for the main results, as well as the "ivreg2" package for the linear probability models used as robustness checks.) The potential mechanisms explaining the role of retirement on health status will be assessed through daily social activities (associations, volunteering, unions, political, religious or artistic activities), physical activity and health-related risky behaviours (tobacco, alcohol and BMI).
Results
Main results
Table 14 below presents the econometric results for the five health indicators, first displaying naive univariate probit models and then bivariate probit models accounting for endogeneity biases, using the legal age of retirement at the full rate (60) as a source of exogeneity. The models for the probability of being retired (first step) are available in Table 43 (Appendix 19). Naive univariate models indicate, whatever the health indicator considered, no effect of retirement on health status whatsoever. Yet, many expected results can be found: the deleterious effect of ageing (except for chronic diseases and anxiety disorders), a powerful protective effect of the level of education and of being self-employed. Having spent the majority of one's career in long-term jobs and having experienced a stable career path also play an important role. Exposures to detrimental working conditions during the whole career have an extremely strong influence on health, including higher impacts of physical constraints on perceived health status and activity limitations and larger amplitudes of psychosocial risk factors on anxiety disorders and depressive episodes.
Finally, being a man appears to be very protective when considering anxiety disorders and depressive episodes. probability of being retired can also be noted. However, being self-employed seems to greatly reduce the probability of being retired ( . Finally, having been exposed to physical strains at work also appears to accelerate the retirement process ( . Comparing the results of the bivariate probit models with their univariate equivalents (the latter assuming no correlation between residuals of the two models), there is a fairly high consistency of the results for all variables but the role of retirement in the determination of health status is changing dramatically between uni-and bivariate models. Heterogeneity This mean impact of retirement on health status is bound to be heterogeneous, notably according to sex (men and women have different types of career and declarative patterns), education levels (because of the protective role of education in terms of career and health outcomes) and more importantly past exposures to detrimental working conditions (retirement seen as a relief from possibly harmful jobs). We can therefore test these assumptions by seeking for heterogeneity in the effect by sex (Table 15 andTable 16), by education levels (Table 17 andTable 18) and possible past exposures to physically (Table 19 andTable 20) or psychosocially (Table 21 andTable 22) demanding jobs. The models have also been conducted on a subsample excluding civil servants (Appendix 20, Table 44 andTable 45). All the following models make use of the fact of being aged 60 or older as a source of exogeneity. Sex Because the determinants of men's and women's health status and career outcomes may differ and because health condition suffers from declarative social heterogeneity [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF][START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF], it is first interesting to assess the possible heterogeneity of the effect of retirement on health status according to sex. The results are hence stratified by sex (results for men are presented in Table 15 and for women in Table 16 below). In the male population, retirement reduces the probability to declare activity limitations, generalized anxiety disorders and major depressive episodes. No significant effect appears on self-assessed health and chronic diseases. Among women, retirement only seems favourable for GAD and MDE. In terms of magnitude, retirement decreases the probability of activity limitations and GAD by and of MDE by in men, when in women the decrease in GAD and MDE is of respectively and . Education We then stratify our sample according to the level of education: on the one hand, we consider individuals with a primary or secondary education level (Table 17) and on the other hand, the ones that reached a level at least equivalent to the French baccalaureat (Table 18). It is to be noted that the sample sizes of the two populations are fairly different (resp. 3,045 and 1,497 individuals for the lowly and highly educated). In the lower-educated population, retirement seems beneficial in terms of daily activity limitations ( on the probability to declare activity limitations), GAD ( and MDE ( . 
In the higher-educated sample, the role of retirement is noticeable on chronic diseases ( ) and even more important for mental health (resp. and for GAD and MDE). Other changes in the determinants of health status are noticeable between these two populations: having been in long-term jobs as well as physical and psychosocial working conditions during the career exhibit massive impacts on health status in 2010 in the lower-educated population, whereas this is not as much the case in the higher-educated sample.
Past work strains
The beneficial effects of retirement on health status are often explained by the fact that retirement, i.e. no longer working, is seen as a relief from jobs with hard working conditions. Here we test the hypothesis according to which retirement is even more beneficial to health if retirees were originally employed in harmful jobs. We stratify the sample according to high and low physical exposures (Table 19 and Table 20) and high and low psychosocial exposures (Table 21 and Table 22) during the whole career. Again, despite precision losses related to sample sizes, the individuals most psychosocially exposed during their career also experience massive improvements in all aspects of their health status (resp. , , , and for self-assessed health, chronic diseases, activity limitations, GAD and MDE). In the less exposed individuals, only GAD ( ) and MDE ( ) are affected. The massive impacts in the psychosocial subgroup, specifically on self-assessed health and mental health indicators, can be explained by the relief from a very stressful work life. The impact on chronic diseases most likely reflects, as a consequence, the role of retirement on long-term mental health deterioration.
Civil servants
Because civil servants (who are included in our sample) are likely to be specific in terms of retirement requirements, we test whether or not the results vary if we only consider individuals who are/were not civil servants (it is impossible to run the regressions on civil servants only, because of sample sizes). The results indicate no major changes, and the effect of retirement on health status is confirmed by these regressions (Appendix 20, Table 44 and Table 45).
Mechanisms
We investigate several possible reasons (mechanisms) as to why retirement appears to have such a positive impact on retirees' health. In section 5.3.1, we acknowledge the effects of retirement on daily activities and then, in section 5.3.2, on health-related risky behaviours. All the following models make use of the fact of being aged 60 or older as a source of exogeneity. Retirement has a positive role on the probability of having daily social activities as well as on the probability of having physical activities ( ), which is in line with the literature [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF] (Table 23). Even though it is not possible to say for sure that this causally explains why retirees have a better health condition, daily social activities and sport are bound to be correlated with better health status and well-being (Ho, 2016;[START_REF] Ku | Leisure-Time Physical Activity, Sedentary Behaviors and Subjective Well-Being in Older Adults: An Eight-Year Longitudinal Research[END_REF][START_REF] Sarma | The Effect of Leisure-Time Physical Activity on Obesity, Diabetes, High BP and Heart Disease Among Canadians: Evidence from 2000/2001 to 2005/2006: THE EFFECT OF LTPA ON OBESITY[END_REF]).
Health-related risky behaviours
Retiring is also found to decrease the probability of smoking ( ), which is also in line with a general health status improvement and makes sense given the relief retirement provides from the stress of working life, for instance. Yet, most likely because of the increase in spare time, and despite the fact that retirees do sport more often, they are also more numerous to have a risky alcohol consumption ( ) and to be overweight ( ) (Table 24). These results are congruent with the literature, which notably shows that quitting smoking involves higher BMI levels [START_REF] Courtemanche | The Effect of Smoking on Obesity: Evidence from a Randomized Trial[END_REF], just like the fact of retiring [START_REF] Godard | Gaining weight through retirement? Results from the SHARE survey[END_REF].
Robustness checks
We estimate bivariate probit models, this time including all three age thresholds (55, 60 and 65) in the retirement models. The main results are unchanged, and the auxiliary models show no effect of the 55-year threshold, while a strong effect can be found for the 60 and 65 thresholds, potentially rendering them useful as identifying variables (Appendix 21, Table 46 and Table 47). We then put our results to the test of linear probability models (LPM), estimated by the generalized method of moments (GMM) with heteroskedasticity-robust standard errors, in order to take advantage of the possibility of using our two relevant identifying variables (the 60 and 65 year-old thresholds) through different tests. This type of modelling allows for several tests, as well as for a better handling of unobserved heterogeneity [START_REF] Angrist | Mostly harmless econometrics: an empiricist's companion[END_REF]. It also allows relaxing the hypothesis that the residuals follow a bivariate normal distribution (which is the case for bivariate probits). The results of the models (Appendix 21, Table 48) are resilient to LPM modelling. The same holds for the results of the auxiliary retirement models, which are also stable (Appendix 21, Table 49). We performed Sargan-Hansen tests for over-identification, which show that the null hypothesis of correctly excluded instruments is never rejected in our case. Moreover, the Kleibergen-Paap test statistics are consistently well above the arbitrary critical value of 10, indicating that, unsurprisingly, our instruments seem relevant to explain the retirement decision. Finally, we test whether the results hold up when not controlling for several endogenous covariates related to the professional career. The results appear robust to this new specification, indicating that the effect of retirement was not driven by endogenous relationships with such variables (Appendix 21, Table 50 and Table 51).
Discussion
This study measures the causal effect of retirement on health status by mobilizing an econometric strategy that takes into account the endogenous nature of the retirement-health relationship (via instrumental variables) and retrospective panel data on individual careers. We find that retirement has an average positive effect on activity limitations, GAD and MDE after controlling for reverse causality and unobserved heterogeneity. No significant effect can be found on self-assessed health and chronic diseases. The same holds in the male population, whereas in women retirement benefits appear only on GAD and MDE and no effect is measured on physical health status.
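As a complement to the robustness checks above, a schematic -ivreg2- call using the two identifying variables as excluded instruments (variable names again purely illustrative) could be:

    * Linear probability model estimated by two-step GMM with robust SEs,
    * instrumenting retirement with the 60 and 65 year-old thresholds
    gen over60 = (age >= 60)
    gen over65 = (age >= 65)
    ivreg2 limitations age agesq male educ_mid educ_high child longjob ///
           fragmented expo_phys expo_psy (retired = over60 over65), gmm2s robust
    * With these options ivreg2 reports the Hansen J over-identification statistic
    * and the Kleibergen-Paap statistic used to gauge instrument strength

This is only a sketch under assumed variable names; the reported Hansen J and Kleibergen-Paap statistics correspond to the over-identification and weak-instrument diagnostics discussed above.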
These results are particularly strong in the less educated and in the most exposed individuals to physical and psychosocial working conditions during their career, while also partly holding for the rest of the population to a lesser extent. We also find that this positive effect on health status might be explained by a greater ability for retirees to have more social and physical daily activities and smaller tobacco consumption (even though we cannot be certain of the causal relationship between these mechanisms and health status in our study). Yet, retirees are also found to be significantly more at risk for alcohol consumption and overweight. To our knowledge, this is the first study to give insights on the average effect of retirement on the whole population in France and on the mechanisms which could explain its health effects as well as describing heterogeneous impacts according to sex, education levels and past exposures to two types of working conditions during the entire career, while addressing the endogeneity biases inherent to this type of study. Yet, several limitations can be noted. As we do not rely on panel data per se, we do not have the possibility to account systematically for individual unobserved heterogeneity. Even though this should not matter because of our instrumental variables framework, panel data would have enabled RDD methods allowing the implementation of differentiated trends left and right of the thresholds, at the cost of temporal distance and sample sizes. Also, in the case of unobserved characteristics correlated with both the probability to be retired and health status, an endogeneity issue cannot be excluded, which can render our identification strategy doubtful in that respect. Another main limit lies in the fact that we cannot determine if the mean effect of retirement on health status differs according to the distance with the retirement shock. We do not know, because of our data, if this effect is majorly led by short-, mid-or long-run consequences, neither can we determine if the impact on health status happens right after retirement or in a lagged fashion. There are also several missing variables, such as the professional status before retirement and standards of living as well as elements related to retirement reforms. It is also to be noted that comparisons between stratified samples are complicated because the results hold on different samples. Some perspectives also remain to be tested. An initial selection of the sample taking into account the fact that individuals have worked during their careers or even a selection of individuals who have worked after reaching 50 would probably grant a greater homogeneity in the sample. Finally, the potentiality of some individuals being impacted by pension reforms will be assessed and further robustness checks accounting for this possibility will be conducted if necessary. General conclusion 1. Main results Because of its temporal approach, the main findings of this Ph.D. Dissertation can be summed-up in terms of occupational and health cycles. Starting from the beginning of the work life, this Ph.D. Dissertation was able to find that exposures to detrimental working conditions early on are related to higher amounts of chronic diseases in exposed men and women (Chapter 2). Based on a career-long temporal horizon both for physical and psychosocial exposures and health status, major differences in terms of health condition between the most and least exposed workers related to job strains are indeed found. 
Workers facing gradually increasing strains in terms of duration or simultaneity of exposure are more frequently coping with rising numbers of chronic diseases, whether physical or mental conditions. Even though these workers, being exposed during the first part of their career, are supposedly more resilient to such strains, noticeable health status degradations are visible. Accounting for baseline characteristics including important childhood events, this result is robust to selection processes into a job and to unobserved heterogeneity. In physically exposed men, around of chronic diseases can be explained by gradually increasing levels of exposures. Exposures to psychosocial strains account for of them. In women, increasing physical (resp. psychosocial) exposures explain between and (resp. ) of their number of chronic diseases, after exposure. As a consequence, women, while not being the most exposed, are found to experience the most degrading effect of such exposures. In part, workers may experience health shocks during their career, which may deteriorate their capacity to remain in their job. Notably, mental health conditions such as depressive episodes or anxiety disorders appear as strong explanatory factors of this capacity (Chapter 1). After accounting for socioeconomic characteristics, employment, general health status, risky behaviours and, most importantly, the professional career, suffering from common mental disorders induces a decrease of up to in the probability of remaining in employment four years later for men at work in 2006. In the female population, no such effect can be found, as general health status remains predominant in explaining their trajectory on the labour market. This result is in line with the literature about the employability of individuals facing mental health conditions in the general population, but provides insights about the capacity for ill workers to remain in employment. Considering depressive episodes and anxiety disorders separately suggests that the disabling nature of mental health goes through both indicators. In addition, the accumulation of mental disorders increases the risk of leaving employment during the period ( for men facing both disorders compared to for those only facing one of the two). These findings imply that individuals facing such impairments are more likely to experience more fragmented careers. As a consequence, retirement's role on health status differs according to the nature of past circumstances, notably related to initial human capital and job characteristics. It is indeed found to be beneficial for individuals' physical and mental health status overall, with disparities depending notably on the nature of the career. Accounting for reverse causality and unobserved heterogeneity, retirement decreases the probability to declare activity limitations ( ), anxiety disorders ( ) and depressive episodes ( ), whereas no significant effect can be found on self-assessed health and chronic diseases in men. In women, retirement benefits appear only on mental health outcomes (resp. in anxiety and in depression). Heterogeneity in this global effect is found, indicating a particularly strong relationship in the less educated and in the individuals most exposed to physical and psychosocial working conditions during their career, while also partly holding for the rest of the population to a lesser extent.
As far as explanatory mechanisms go, a greater ability for retirees to have more social and physical daily activities ( ) and smaller tobacco consumption ( ) are likely to generate these positive health outcomes. Yet, retirees are also found to be significantly more at risk for alcohol consumption ( ) and overweight ( ). Limitations and research perspectives Every chapter of this dissertation relies on survey data. All chapters make use of the French panel data of the Santé et Itinéraire Professionnel survey (Sip). Moreover they all rely, at least partly, on retrospective information (i.e. information from the past gathered at the time of the survey, possibly much later). Thus, because of the nature of the data, biases in declarative behaviours and memory flaws cannot be excluded. It is indeed possible that, depending on some characteristics, individuals might answer a given question differently even if the objective answer would be the same [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. Apart from that, a posteriori justifications or rationalisations are also likely to generate misreporting and measurement errors [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF][START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF]. Also, the indicators used in this dissertation are more or less subjective measures for the most part. Health status indicators like self-reported self-assessed health, chronic diseases, activity limitations, generalized anxiety disorders and major depressive episodes are all, to a certain extent, subjective measurements for health conditions. Yet, it is to be noted that these indicators also appear to be reliable and valid to assess individuals' health status and are standard and widely used. Self-assessed health is notoriously correlated with life expectancy [START_REF] Idler | Self-Rated Health and Mortality: A Review of Twenty-Seven Community Studies[END_REF], anxiety disorder and depressive episodes are consolidated measures coming from the when more subjective indicators better succeed in embracing the whole picture of work strains. Also, when it is understandable that the legislator seeks for objectivity in a context of potential compensations, the subjective feelings beyond objective strains appear as much more relevant when trying to assess the role of these strains on health status. Some research perspectives for Chapter 1 are possible. Results suggest very different types of impact of mental health on job retention. It would be interesting to be able to disentangle the mechanisms behind these differences. They may partly be explained by differences in social norms related to the perception of mental disorders and employability, as well as by differences in the severity of diseases. As is, it is not possible to assess such social norms or the severity of the disease. A mental health score would most likely allow for it, as well as providing a more stable indicator for mental health (as it is apparent that the amplitude of the results depends a lot on the retained definition of mental health). 
The results are also conditioned by the fact that the 2006-2010 period is peculiar in terms of the economic conjuncture, which raises the question of the external validity of the results. Obviously, clarifying the exact role of the economic crisis in the relationship observed in this Chapter would allow for more detailed interpretations. Chapter 2 may also benefit from some extensions. It would first be interesting to test potentially heterogeneous effects of working conditions on health, depending on the timing of exposure. If there is already a noticeable effect of exposure early on, when individuals are more resilient to these strains, it is quite possible that exposure of older workers would imply even greater health disparities. Yet, this hypothesis needs to be tested empirically. Another interesting topic would be to disentangle what is induced by the exposures themselves from what is driven by health-related behaviours [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF]. Exposed workers may have specific behaviours in terms of tobacco or alcohol consumption, for instance, or specific patterns of healthcare use that would be correlated with their exposures and health status. Finally, detailed work on the sources of heterogeneity in the effect, in terms of demographic and socioeconomic characteristics, seems important. Research perspectives for Chapter 3 include specific work to determine whether the average effect of retirement on health status differs according to the distance from the retirement shock. Is the effect mainly driven by short-, mid- or long-run consequences? Is the impact on health status happening right after retirement or in a lagged fashion? Another question that will need to be answered is whether or not the effect of retirement on health status differs depending on the retirement profile, i.e. whether the individual retires early or late. It is indeed possible that the effect might be stronger in workers retiring early (because of more detrimental exposures to work strains during their career), or stronger in workers retiring late (notably because of longer exposures). This specific dilemma would be interesting to test.

Policy implications

Some recommendations can be suggested based on this work. First, because they are less incapacitating than more severe mental health disorders, depressive episodes and anxiety disorders have generally received less attention from policy makers. Yet, these disorders are more widespread (6% of men and 12% of women suffer from at least one of these conditions in France, according to our data), and their detrimental role on the capacity of workers to remain in employment seems verified, at least in the male population. Because of the onset of depression or anxiety, the probability that male workers remain in employment within a timespan of four years is significantly decreased. Hence, policies should account for such conditions and increase support for workers facing them in the workplace. Policies focusing on adapting the workplace to the needs of these ill workers and making it easier for them to find a job are most likely the two most relevant kinds of frameworks that could help in reducing the role of their disease on their career outcomes, together with changes in work organisation and hierarchical practices in order to promote mental health.
Then, because of the timing of exposure (usually starting early on during the career) and considering the long-lasting detrimental effects on health status (onset of chronic diseases), a greater emphasis may need to be put on preventive measures, such as health and safety promotion at work and the design of a more health-preserving workplace, rather than on curative frameworks. Overall, being able to better quantify the long-term health costs of strenuous jobs points to a necessary shift from the currently dominating curative schemes towards preventive measures starting from the very design of the workplace. The European Commission (1989) states that "work shall be adapted to individuals and not individuals to work", and has insisted since then on the concept of work sustainability (EU strategy 2007-2012 - European Commission 2007). In a context of a general push-back of legal retirement ages, due to deficits in pension systems induced, notably, by constant increases in life expectancy, the question of the role of retirement in the determination of health status is crucial. Chapter 3 demonstrates a clear positive impact of retirement on general and mental health, both for men and women, but with variations across sex, education and levels of exposure to detrimental working conditions. It appears that retirement bears even more beneficial effects for the less educated and for the workers most exposed during their career, especially to psychosocial strains. Postponing retirement then seems all the more risky since, as it stands, retirement appears as the main tool relieving workers from their potentially poor working conditions. In that sense, postponing legal retirement ages may not be successful in balancing pension systems, simply because these reforms have consequences in terms of health status at old ages, and also because exposed workers may not be able to reach these higher thresholds while remaining at work (a hypothesis quite possibly at least partly supported by the existing low levels of employability of senior workers). Extensions of the contribution period or the reversibility of the retiree's status (increasingly desired in Europe in recent years - [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]) should be accompanied by preventive measures against work strains during the career (which is in line with the conclusions of Chapters 2 and 3), or at least by differentiated retirement schemes depending on the nature and intensity of the entire work life of pensioners. Because retirement generally seems to promote healthier behaviours due to the increase in available free time, while also suggesting an increase in alcohol consumption and overweight, information campaigns and specific incentives targeted at retirees could be introduced.

Appendices

Appendix 1: Major Depressive Episodes (MDE)

The MDE are identified in two stages. First, two filter questions are asked (they are reproduced later in this appendix). An individual then suffers from MDE if:
- A positive response to the two filter questions and four symptoms are listed
- Two positive answers to the two filter questions and three symptoms are listed

List of figures
Figure I: Summary of Work-Health relationships in the Ph.D. Dissertation
Figure II: Prevalence of health problems in the population in employment in 2006
Figure III: Employment rates in 2010 according to self-reported health status in 2006
Figure IV: General health status of anxious and/or depressed individuals in 2006
Figure V: Configuration of working conditions and chronic diseases periods
Figure VI: Proportion of retirees in the sample according to age
Figure VII: Distribution of retirement ages
Figure VIII: Common trend assumption test - Physical sample ( )
Figure IX: Common trend assumption test - Psychosocial sample ( )
Figure X: Common trend assumption test - Global sample ( )
Figure XI: Proportion of retirees in the male sample, according to age
Figure XII: Proportion of retirees in the female sample, according to age

List of tables
Table 1: Estimated probability of employment in 2010, male population
Table 2: Estimated probability of employment in 2010, female population
Table 3: Estimation of mental health in 2006
Table 4: Impact of mental health in 2006 on employment in 2010 according to various measures, men and women
Table 5: Estimated probability of employment (binary variable 2007-2010)
Table 6: Thresholds description
Table 7: Base sample description ( )
Table 8: Working conditions and chronic diseases description ( )
Table 9: Matched sample description ( )
Table 10: Matched difference-in-differences results ( to ), physical treatment
Table 11: Matched difference-in-differences results ( to ), psychosocial treatment
Table 12: Matched difference-in-differences results ( to ), global treatment
Table 13: General descriptive statistics
Table 14: Retirement and health status
Table 15: Heterogeneity analysis - Male population
Table 16: Heterogeneity analysis - Female population
Table 28: Attrition analysis - panel population vs. attrition population according to mental health and employment status in 2006
Table 29: General descriptive statistics
Table 30: Employment status in 2006, according to mental health condition
Table 31: Mental health status in 2010 of individuals in employment and reporting mental health disorders in 2006
Table 32: Correlations of identifying variables (men)
Table 33: Correlations of identifying variables (women)
Table 34: Mental Health estimations in 2006
Table 35: Unmatched difference-in-differences results ( to ), physical treatment
Table 36: Unmatched difference-in-differences results ( to ), psychosocial treatment
Table 37: Unmatched difference-in-differences results ( to ), global treatment
Table 38: Specification test - Matched Diff.-in-Diff. vs. Matched Ordinary Least Squares - Physical, psychosocial and global treatments ( ) - Matched
Table 39: Thresholds tests - Normal treatment vs. Single exposures only vs. Poly-exposures only - Physical, psychosocial and global treatments ( ) - Matched
Table 40: Wage and risky behaviours in 2006 - Unmatched and matched samples
Table 41: Gender and working conditions typologies, per activity sector in 2006
Table 42: Working conditions typology, by gender in 2006
Table 43: Auxiliary models of the probability of being retired
Table 44: Retirement and health status - No civil servants
Table 45: Auxiliary models of the probability of being retired - No civil servants
Table 46: Tests with three instruments (age 55, 60 and 65)

Figure I: Summary of Work-Health relationships in the Ph.D. Dissertation

2.1. The Santé et Itinéraire Professionnel survey

The Santé et Itinéraire Professionnel (Sip) survey used in this study provides access to a particularly detailed individual description. Besides the usual socioeconomic variables (age, sex, activity sector, professional category, educational level, marital status), specific items are provided about physical and mental health. The survey was conducted jointly by the French Ministries in charge of Healthcare and Labour and includes two waves (2006 and 2010), conducted on the same sample of people aged 20-74 living in private households in metropolitan France. The 2010 wave was granted an extension to better assess psychosocial risk factors. Two questionnaires are available: the first one is administered by an interviewer and accurately documents the individual and job characteristics and the current health status of the respondents. It also contains a biographical life grid used to reconstruct individual careers and life events: childhood, education, health, career changes, working conditions and significant life events. The second one is a self-administered questionnaire targeting risky health behaviours (weight, cigarette and alcohol consumption). It documents current or past tobacco and alcohol consumption (frequency, duration, etc.). A total of 13,648 people were interviewed in 2006, and 11,016 of them again in 2010. In this study, we focus on people who responded to the survey both in 2006 and 2010, i.e. 11,016 people. We select individuals aged 30-55 years in employment in 2006 to avoid including students (see Appendix 3 and Appendix 4 for a discussion of the initial selection made on the sample in 2006 and a note on attrition between the two waves).
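As a purely illustrative sketch of this selection step, the filter could be written as follows with pandas; the file and column names (sip_2006.csv, in_wave_2010, age_2006, employed_2006) are hypothetical and do not correspond to the actual Sip variable names.

```python
import pandas as pd

# Hypothetical file and column names; the real Sip variables are coded differently.
sip = pd.read_csv("sip_2006.csv")

sample = sip[
    sip["in_wave_2010"].eq(1)            # re-interviewed in the 2010 wave
    & sip["age_2006"].between(30, 55)    # aged 30-55 in 2006
    & sip["employed_2006"].eq(1)         # in employment in 2006
].copy()

print(len(sample), "individuals retained in the working sample")
```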
The final sample thus consists of 4,133 individuals, including 2,004 men and 2,129 women.

2.2. Descriptive statistics

2.2.1. Health status of the employed population in 2006

To broadly capture mental health, we use major depressive episodes (MDE) and generalized anxiety disorder (GAD), from the Mini International Neuropsychiatric Interview (MINI), based on the Diagnostic and Statistical Manual of Mental disorders (DSM-IV). These indicators prove particularly robust in the Sip survey (see Appendix 5). Around 6% of men and 12% of women in employment in 2006 report having at least one mental disorder (Figure II). In women, the employment trajectory is significantly associated with the activity sector (services), belonging to the private or public sectors (vs. self-employed) and part-time work. It is interesting to note that, within this selected population (i.e. in employment in 2006), professional categories play no role in the employment trajectory between 2006 and 2010. In men, being 50 and over in 2006, the lack of education, celibacy and the professional category (blue collars are the most likely to leave the labour market) are all significant factors of poor labour market performance. The only common denominators between men and women appear to be the roles of mental health and age. After controlling for individual characteristics, employment, general health status and the professional career, a decrease of up to  in the probability of remaining in employment 4 years later can be observed for men at work in 2006. In the female population, general health status remains predominant in explaining their trajectory on the labour market. Our results, in line with those of the literature, provide original perspectives on French data about the capacity of mentally-impaired workers to keep their jobs. Considering MDE and GAD separately suggests that the disabling nature of mental health operates through both indicators. In addition, the accumulation of mental disorders (MDE and GAD) greatly increases the risk of leaving employment during the period ( for men facing both disorders compared to  for those facing only one of the two). These results are also supported by specific estimations on the 2007-2010 period, which partly allow us to deal with the events occurring between 2006 and 2010. The Psychiatry and Mental Health Plan 2011-2015 affirms the importance of job stress prevention and of measures enabling easier job retention and return to work for people with mental disorders. Following this first step, several extensions could be appropriate. First, an important weakness in our identification strategy remains possible. The identifying variables used may indeed be correlated with unobservable characteristics, such as instability or a lack of self-confidence, which are also related to outcomes on the labour market. This could render the hypothesis of exogeneity of the relationship doubtful. If such characteristics are components or consequences of our mental health indicators, then it should not be problematic, as their effect would transit completely through the latter. Yet, we cannot exclude that at least part of the variance induced by these unobservable characteristics is directly related to employment, regardless of our mental health indicators. Our results demonstrate a different impact of mental health on job retention. This difference may partly result from selection related to mental health and employment in 2006, differing by sex 6.
It can also be explained by differences in social norms related to the perception of mental disorders and employability, by differences in disease severity, and by differentiated paths during the 2006-2010 period (as suggested by the health status trajectories of individuals in employment and ill in 2006 - see Table 31).

Siegrist's models tend to study the results of combined exposures to several simultaneous work stressors (job strain and iso-strain). [START_REF] De Jonge | Job strain, effort-reward imbalance and employee well-being: a large-scale cross-sectional study[END_REF] show the independent and cumulative effects of both types of models. On the matter of cumulative exposures, [START_REF] Amick | Relationship of job strain and iso-strain to health status in a cohort of women in the United States[END_REF] demonstrate, based on longitudinal data, that chronic exposure to low job control is related to higher mortality in women. The study of [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF] uses panel data and analyses the role of cumulative physical and environmental exposures over five years (from 1993 to 1997) while controlling for initial health status and health-related selection. This study is very likely the closest paper in the literature to the present one. They aggregate several physical and environmental working conditions indicators and create composite scores, which they then sum over five years. They find clear impacts of these indicators, on both men and women, with variations depending on demographic subgroups. This work expands on that particular study notably by considering exposures to both physical and psychosocial risk factors, as well as by taking into account exposures occurring throughout the whole career (it is easily imaginable that larger health effects may occur in cases of longer exposures). I also include the possibility of accounting for simultaneous exposures.

Figure V: Configuration of working conditions and chronic diseases periods

3.1. The Santé et Itinéraire Professionnel (Sip) survey

I use data coming from the French Health and Professional Path survey (Santé et Itinéraire Professionnel - Sip). It has been designed jointly by the statistical departments of the two French ministries in charge of Health 7 and Labour 8. The panel is composed of two waves (2006 and 2010). Two questionnaires are proposed: the first one is administered directly by an interviewer and investigates individual characteristics, health and employment statuses. It also contains a life grid, which allows reconstructing the biographies of individuals' lives: childhood, education, health, career and working conditions, as well as major life events. The second one is self-administered and focuses on more sensitive information such as health-related risky behaviours (weight, alcohol and tobacco consumption). Overall, more than 13,000 individuals were interviewed in 2006 and 11,000 in 2010, making this panel survey representative of the French population 9. I make specific use of the biographic dimension of the 2006 survey by reconstructing workers' career and health events yearly 10. I am therefore able to know each individual's employment status, working conditions and chronic diseases every year from their childhood to the date of the survey (2006). As far as work strains are concerned, the survey provides information about ten indicators of exposure. The intensity of exposure to these work strains is also known.
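To make this construction concrete, here is a minimal sketch of how yearly strain declarations could be turned into the cumulative exposure treatment retained later in the chapter (at least 16 years of single exposures or 8 years of poly-exposures, one poly-exposure year counting as two single-exposure years). The data layout and column names are assumptions made for the example, not the actual Sip coding.

```python
import pandas as pd

# Hypothetical person-year data: one row per individual and year,
# with one binary flag per physical strain declared for that year.
person_years = pd.read_csv("person_years.csv")
strains = ["night_work", "repetitive_work", "physical_load", "toxic_materials"]

person_years["n_strains"] = person_years[strains].sum(axis=1)
# One possible reading of the single/poly distinction:
person_years["single_year"] = person_years["n_strains"].eq(1).astype(int)  # exactly one strain
person_years["poly_year"] = person_years["n_strains"].ge(2).astype(int)    # simultaneous strains

exposure = person_years.groupby("person_id")[["single_year", "poly_year"]].sum()

# Treatment: at least 16 years of single exposure or 8 years of poly-exposure,
# a poly-exposure year being counted as two single-exposure years.
exposure["treated"] = (
    exposure["single_year"] + 2 * exposure["poly_year"] >= 16
).astype(int)
```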
Individuals' health statuses are assessed by their declaration of chronic diseases, for which the onset and end dates are available. Ten individual annual indicators are used to assess the exposure to detrimental work strains, and I regroup them into three relevant categories. The first one represents the physical load of work and includes night work, repetitive work, physical load and exposure to toxic materials.

Figures VIII, IX and X (Appendix 10) respectively present the chronic disease trends for the treated and control groups in the matched physical, psychosocial and global samples (Field: population aged 42-74 in 2006 and present from  to ; matched (weighted) sample; Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006).

6.3. Single vs. simultaneous exposures

I tested the relevance of the differentiation made between single and multiple exposures in the three working condition treatments, i.e., the relevance of considering that a certain number of single exposures is equivalent to half that number of poly-exposures (inspired by the French legislation - Sirugue et al., 2015). Table 39 (Appendix 12) presents several results. The first two columns indicate, for , the results obtained with a treatment considering 16 years of single exposures or 8 years of poly-exposures (which are the main results presented in this paper). The next two columns indicate the results when considering a treatment accounting only for 16 years of single exposures. The last two columns present the results for a treatment considering only 8 years of poly-exposures.

The Santé et Itinéraire Professionnel survey (Sip) used in this study provides access to particularly detailed individual descriptions. Besides the usual socioeconomic variables (age, sex, activity sector, professional category, educational level, marital status), specific items are provided about physical and mental health. The survey was designed jointly by the French Ministries in charge of Healthcare and Labour and includes two waves (2006 and 2010), conducted on the same sample of people aged 20-74 years living in private households in metropolitan France. The 2010 wave was granted an extension to better assess psychosocial risk factors. Two questionnaires are available: the first one is administered by an interviewer and accurately documents the individual and job characteristics and the current health status of the respondents. It also contains a biographical life grid used to reconstruct individual careers and life events: childhood, education, health, career changes, working conditions and significant life events. The second one is a self-administered questionnaire targeting risky health behaviours (weight, cigarette and alcohol consumption). It notably documents current or past tobacco and alcohol consumption (frequency, duration, etc.). A total of 13,648 people were interviewed in 2006, and 11,016 of them again in 2010. We make use of the biographic dimension of the 2006 survey by reconstructing workers' careers yearly. We are therefore able to know, for each individual, his/her employment status and working conditions every year from their childhood to the date of the survey (2006).
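A small sketch of this yearly reconstruction is given below: spells recorded with start and end dates are expanded into one row per individual and year. The spell structure and column names are illustrative assumptions, not the actual Sip coding.

```python
import pandas as pd

# Hypothetical spell-level data: one row per job spell.
spells = pd.DataFrame({
    "person_id":  [1, 1, 2],
    "start_year": [1975, 1990, 1980],
    "end_year":   [1989, 2006, 2006],
    "night_work": [1, 0, 1],   # strain declared for that spell
})

rows = []
for spell in spells.itertuples(index=False):
    for year in range(spell.start_year, spell.end_year + 1):
        rows.append({"person_id": spell.person_id,
                     "year": year,
                     "night_work": spell.night_work})

person_years = pd.DataFrame(rows)  # one row per individual and year, up to 2006
```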
As far as work strains are concerned, the survey provides information about ten indicators of exposure: night work, repetitive work, physical load and exposure to toxic materials, full skill usage, work under pressure, tensions with the public, reward, conciliation between work and family life and relationships with colleagues. The intensity of exposure to these work strains is also known. In our sample, we only retain individuals present in both the 2006 and 2010 waves, i.e. 11,016 individuals.

Figure VI shows the evolution of the proportion of retirees in the sample, depending on age.

Figure VI: Proportion of retirees in the sample according to age

Figure VII: Distribution of retirement ages

The model allows for a correlation between the residuals $\varepsilon_1$ and $\varepsilon_2$, i.e. $\rho = \mathrm{corr}(\varepsilon_1, \varepsilon_2)$. In addition, the residuals of this model are expected to follow a bi-normal distribution 15:
$$(\varepsilon_1, \varepsilon_2) \sim \mathcal{N}\!\left(\begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}\right)$$

4.4. Variables

Five health status indicators are used in this study. In order to assess the effect of the retirement decision on general health conditions, we use three indicators coming from the Mini European Health Module (see Appendix 16): self-assessed health status (dichotomized to oppose very good and good perceived health on the one hand and fair, bad and very bad on the other hand), chronic illnesses (binary) and limitations in daily activities (binary). We also use two mental health indicators: suffering from Generalised Anxiety Disorders (GAD) in the six previous months or from Major Depressive Episodes (MDE) over the past two weeks (see Appendix 17 and Appendix 18). First, we test other retirement thresholds, as three different thresholds are potentially relevant in the French case: ages 55, 60 and 65 (see Figure VI as well as Figure XI and Figure XII in Appendix 15).

The two MDE filter questions (Appendix 1) are the following:
- Over the past two weeks, have you felt particularly sad or depressed, mostly during the day, and this almost every day? Yes/No
- Over the past two weeks, have you had, almost all the time, the feeling of having no interest in anything, of having lost interest or pleasure in the things that you usually like? Yes/No

Then, if one of the two filter questions receives a positive answer, a third question is asked in order to identify the specific symptoms: "Over the past two weeks, when you felt depressed and/or uninterested in most things, have you experienced any of the following situations?" (checked as soon as the answer is "yes"; several positive responses are possible).
-Your appetite has changed significantly, or you have gained or lost weight without having the intention to (variation in the month of +/-5%) -You had trouble sleeping nearly every night (sleep, night or early awakenings, sleep too much) -You were talking or you moved more slowly than usual, or on the contrary you feel agitated, and you have trouble staying in place, nearly every day -You felt almost tired all the time, without energy, almost every day -You feel worthless or guilty, almost every day -You had a hard time concentrating or making decisions, almost every day -You have had several dark thoughts (such as thinking it would be better be dead), or you thought about hurting yourself Using the responses, two algorithms are then implemented in accordance with the criteria of the Diagnostic and Statistical Manual (DSM-IV). An individual suffers from MDE if: Figure X : X Figure X: Common trend assumption test -Global sample ( ) Table 17 : 17 Heterogeneity analysis -Low education attainment ............................................. Table 18 : 18 Heterogeneity analysis -High education attainment ............................................. Table 19 : 19 Heterogeneity analysis -Highly physically demanding career ............................. Table 20 : 20 Heterogeneity analysis -Lowly physically demanding career .............................. Table 21 : 21 Heterogeneity analysis -Highly psychosocially demanding career ...................... Table 22 : 22 Heterogeneity analysis -Lowly psychosocially demanding career ...................... Table 23 : 23 Mechanisms -The effect of retirement on daily activities .................................... Table 24 : 24 Mechanisms -The effect of retirement on health-related risky behaviours .......... Table 25 : 25 Selection analysis -Population in employment vs. unemployed in 2006 ............. Table 26 : 26 Selection analysis -Main characteristics of individuals reporting at least one mental disorder in 2006, according to their employment status in 2006 ........................................... Table 27 : 27 Attrition analysis -panel population (interviewed in 2006 and 2010) vs. attrition population (interviewed in 2006 and not in 2010) ................................................................. Table 28 : 28 Attrition Analysis -panel population Table 47 : 47 Auxiliary models of the probability of being retired (age 55, 60 and 65) ............. Table 48 : 48 Estimation of linear probability models (LPM) using the generalized method of moments (GMM) with two instruments (60 and 65) ............................................................. Table 49 : 49 Auxiliary models of the probability of being retired -LPM (GMM).................... Table 50 : 50 Retirement and health status -No endogenous covariates .................................... Table 51 : 51 Auxiliary models of the probability of being retired -No endogenous covariates Employment rates in 2010 according to self-reported health status in 2006 Figure III: 50 40 30 20 10 0 GAD MDE At least one mental disorder Activity limitations Men Poor general health Women Chronic disease Daily smoking Risky alcohol consumption Overweight descriptive statistics on mental disorders. GAD are faced by 88 men and 195 women and MDE respectively by 91 and 236. 150 men and 335 women declare suffering from at least one mental disorder. Reading: 82% of men in employment and suffering from at least one mental disorder (GAD or MDE) in 2006 are still in employment in 2010, against 86% of women. 
Field: individuals age 30-55 in employment in 2006. Source: Sip (2006), weighted and calibrated statistics. Figure IV: General health status of anxious and/or depressed individuals in 2006 95 90 85 80 75 70 GAD MDE At least one mental disorder Activity limitations Poor general health Chronic disease Daily smoking Risky alcohol consumption Overweight Overall population in employment in 2006 Employment rate in 2010 (M) Employment rate in 2010 (W) 29%. It is interesting to note that men with at least one mental disorder are less likely to report being overweight (Figure IV). Reading: 53% of men reporting mental disorders in 2006 also have risky alcohol consumption in 2006, against 17% of women. Field: individuals age 30-55 in employment in 2006 who reported having at least one mental health disorder. Source: Sip (2006), weighted and calibrated statistics. Table 1 : Estimated probability of employment in 2010, male population 1 Univar. Probit (M1) Univar. Probit (M2) Univar. Probit (M3) Bivariate Probit (IV) Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Mental health in 2006 At least one mental disorder -.09*** .02 -.07*** .02 -.07*** .02 Mental health (instr.) in 2006 At least one mental disorder -.13** .05 Ind. characteristics in 2006 Age (ref.: 30-35 years-old) -35-39 .02 .02 .01 .03 .01 .03 -.01 .03 -40-44 -.01 .02 -.03 .02 -.04 .03 -.03 .03 -45-49 -.02 .02 -.01 .03 -.03 .03 -.03 .03 -50-55 -.14*** .02 -.15*** .02 -.16*** .02 -.16*** .03 In a relationship (ref.: Single) .03** .01 .03** .01 .03** .01 .02 .02 Children (ref.: None) -.02 .02 -.01 .02 -.01 .02 -.02 .02 Education (ref.: French bac.) -No diploma -.06** .02 -.05** .02 -.05* .03 -.06** .03 -Primary -.03 .02 -.01 .02 -.01 .02 -.01 .02 -Superior -.00 .02 -.00 .02 -.00 .02 .01 .02 Employment in 2006 Act. sector (ref.: Industrial) -Agricultural -.03 .02 -.02 .03 -.02 .03 -.03 .03 -Services -.00 .01 .00 .01 .00 .01 .01 .02 Activity status (ref.: Private) -Public sector .03* .02 .02 .02 .02 .02 .01 .02 -Self-employed .04 .03 .04 .03 .03 .03 .03 .04 Prof. cat. (ref.: Blue collar) -Farmers .15*** .05 .12** .05 .12** .05 .12** .06 -Artisans .07** .04 .06* .04 .06* .04 .10** .04 -Managers .05** .02 .04** .02 .04** .02 .04* .02 -Intermediate .03* .02 .02 .02 .02 .02 .02 .02 -Employees .01 .02 .00 .02 -.00 .02 -.01 .02 Part time (ref.: Full-time) -.05 .03 -.04 .02 -.03 .03 -.01 .04 General health status in 2006 Poor perceived health status -.02 .02 -.02 .02 -.00 .02 Chronic diseases .00 .01 .00 .01 .00 .01 Activity limitations -.03* .02 -.03* .02 -.04** .02 Risky behaviours in 2006 Daily smoker -.04*** .01 -.04*** .01 -.05*** .01 Risky alcohol consumption -.00 .01 .00 .01 .01 .01 Overweight .01 .01 .01 .01 .01 .01 Professional route Maj. of empl. in long jobs .03* .02 .02 .01 Stable career path .01 .01 .00 .01 Rho Hausman test 4 .22** 1,71 .12 N 2004 2004 2004 1860 Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey, men aged 30-55 in employment in 2006. Table 2 : Estimated probability of employment in 2010, female population 2 The last column of Table1and Table2presents the results of the bivariate probit models, respectively for men and women. The results for the bivariate mental health models are summarized in Table3(complete results of univariate and bivariate probit models for mental health are available in Table34, Appendix 7). For these results, note that as explained in Univar. 
Probit (M1) Univar. Probit (M2) Univar. Probit (M3) Bivariate Probit (IV) Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Mental health in 2006 At least one mental disorder -.05*** .01 -.02 .02 -.02 .02 Mental health (instr.) in 2006 At least one mental disorder -.02 .09 Ind. characteristics in 2006 Age (ref.: 30-35 years-old) -35-39 .01 .02 .01 .02 .00 .02 .00 .02 -40-44 .01 .02 .01 .02 .00 .02 .00 .02 -45-49 -.04** .02 -.03 .02 -.04 .02 -.04 .02 -50-55 .10*** .02 -.10*** .02 -.10*** .02 -.10*** .02 In a relationship (ref.: Single) .00 .01 .01 .01 .01 .01 .01 .01 Children (ref.: None) -.08*** .02 -.07*** .02 -.07*** .02 -.07*** .02 Education (ref.: French bac.) -No diploma -.03 .03 -.04 .03 -.04 .03 -.04 .03 -Primary -.02 .02 -.01 .02 -.01 .02 -.01 .02 -Superior .00 .02 -.00 .02 -.01 .02 -.01 .02 Employment in 2006 Act. sector (ref.: Industrial) -Agricultural .04 .04 .04 .04 -04 .04 -.04 .04 -Services .05*** .02 .06*** .02 .06*** .02 .06*** .02 Activity status (ref.: Private) -Public sector .01 .01 .02* .01 .02 .01 .02 .01 -Self-employed .07** .04 .06* .04 .06* .04 .06* .04 Prof. cat. (ref.: Blue collar) -Farmers .02 .07 .01 .07 -.00 .07 -.00 .07 -Artisans -.02 .04 -.03 .05 -.03 .05 -.03 .05 -Managers .00 .03 -.01 .03 -.02 .03 -.02 .03 -Intermediate -.00 .02 -.01 .02 -.01 .02 -.01 .02 -Employees .01 .02 .00 .02 .00 .02 -.00 .02 Part time (ref.: Full-time) -.03** .01 -.03** .01 -.02* .01 -.02* .01 General health status in 2006 Poor perceived health status -.04** .02 -.03** .02 -.03* .02 Chronic diseases .00 .01 -.00 .01 -.00 .01 Activity limitations -.04** .02 -.04** .02 -.04* .02 Risky behaviours in 2006 Daily smoker -.01 .01 -.00 .01 -.00 .01 Risky alcohol consumption -.01 .02 -.01 .02 -.01 .02 Overweight -.02 .01 -.01 .01 -.01 .01 Professional route Maj. of empl. in long jobs .02 .01 .02 .01 Stable career path .01 .01 .01 .01 Rho .02 .36 Hausman test .00 N 2129 2129 2129 1982 Table 3 : Estimation of mental health in 2006 3 Men Women Coeff. Std. err. Coeff. Std. err. Identifying variables Raised by a single parent - - .07*** .02 Suffered from violence during childhood .09** .05 .08*** .02 Experienced many marital breakdowns .03** .01 - - After controlling for individual characteristics, employment, general health status and professional career. Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in employment in 2006. Table 34 34 in Appendix 7) reinforce this hypothesis. In men, the causal effect of mental health in 2006 on employment in 2010 seems corroborated by the bivariate analysis, indicating a drop of in the probability of remaining at work. It is also possible to reaffirm the direct role of smoking on the likelihood of job loss. Mental health remains non-discriminative on women's employment. Ultimately, our main results are confirmed by the bivariate analysis, and fall in line with the literature using the same methodologies. It is to be noted that the results for Hausman tests are all rendered non-significant, indicating that the indentifying variable frameworks might not be very different from naive models, hence not mandatory. Table 4 : Impact of mental health in 2006 on employment in 2010 according to various measures, men and women Men Women Coeff. Std. err. Coeff. Std. err. 
Instrumented mental health 4 Suffers from MDE -.08*** .02 -.01 .01 Suffers from GAD -.10*** .02 -.02 .02 Disorders counter -One disorder -.05* .02 -.02 .02 -Two simultaneous disorders -.14*** .04 -.02 .03 After controlling for individual characteristics, employment, general health status and professional career. Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey, men and women aged 30-55 in employment in 2006. Table 5 : Estimated probability of employment (binary variable 2007-2010) 5 Men Women Coeff. Std. err. Coeff. Std. err. Mental health in 2006 At least one mental disorder -.05*** .02 -.00 .02 After controlling for individual characteristics, employment, general health status and professional career. Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in employment in 2006. Table 6 : Thresholds description Threshold Parameter Treatment thresholds 6 Single exposure threshold 4 6 8 10 12 14 16 18 Poly-exposure threshold 2 3 4 5 6 7 8 9 Periods definition Working conditions observation period 6 9 12 15 18 21 24 27 Minimum duration at work 2 3 4 5 6 7 8 9 Table 7 : Base sample description ( ) 7 Population aged 42-74 in 2006 and present from to . 7 th threshold. Unmatched sample. Variable Mean Std. error Min Max Physical sample Treated Control Diff. Treated Control Diff. Treated Control Diff. Psychosocial sample Global sample Treatment Physical treatment .47 .50 0 1 - - - - - - - - - Psychosocial treatment .44 .50 0 1 - - - - - - - - - Global treatment .68 .47 0 1 - - - - - - - - - Health status Initial chronic diseases .12 .36 0 4.67 .10 .13 -.04*** .12 .11 .01 .11 .14 -.03** First health period .63 .93 0 9.50 .65 .62 .03 .70 .58 .12*** .64 .61 .03 Second health period .72 .99 0 9.00 .73 .70 .03 .80 .65 .15*** .73 .69 .04 Third health period .82 1.07 0 9.00 .83 .82 .02 .91 .76 .15*** .83 .81 .03 Demography Entry year at work 1963 8.65 1941 1977 1962 1965 -2.7*** 1963 1963 -0.37 1963 1965 -2.3*** Men .51 .50 0 1 .63 .41 .21*** .54 .49 .05*** .57 .39 .19*** Women .49 .50 0 1 .37 .59 -.21*** .46 .51 -.05*** .43 .61 -.19*** Age 59.67 7.67 42 74 60.20 59.20 .99*** 59.94 59.47 .47* 60.09 58.78 1.31*** No diploma .13 .33 0 1 .18 .08 .09*** .14 .11 .03** .15 .08 .07*** Inf. education .62 .48 0 1 .69 .57 .12*** .61 .64 -.03* .64 .58 .06*** Bachelor .12 .32 0 1 .07 .16 -.09*** .11 .12 -.01 .09 .17 -.07*** Sup. education .12 .32 0 1 .05 .18 -.13*** .12 .12 -.00 .10 .16 -.07*** Childhood Problems with relatives .44 .50 0 1 .47 .40 .07*** .48 .41 .07*** .46 .39 .07*** Violence .09 .29 0 1 .10 .08 .02** .12 .07 .05*** .10 .06 .04*** Severe health problems .13 .33 0 1 .13 .12 .01 .14 .12 .02* .13 .12 .02 Physical post-exposure None .57 .49 0 1 .26 .85 -.59*** .48 .65 -.17*** .43 .88 -.46*** Low .20 .40 0 1 .30 .11 .20*** .22 .18 .04*** .26 .07 .18*** High .23 .42 0 1 .44 .04 .39*** .30 .17 .13*** .32 .04 .28*** Psycho. 
post-exposure None .57 .49 0 1 .48 .66 -.18*** .27 .81 -.53*** .44 .85 -.41*** Low .21 .43 0 1 .25 .18 .07*** .31 .14 .18*** .26 .09 .17*** High .22 .41 0 1 .27 .17 .11*** .41 .06 .35*** .29 .05 .24*** Global post-exposure None .43 .50 0 1 .22 .62 -.39*** .23 .59 -.35*** .26 .80 -.55*** Low .18 .38 0 1 .19 .17 .03* .19 .17 .01 .22 .10 .12*** High .39 .49 0 1 .58 .21 .37*** .58 .24 .34*** .53 .10 .43*** Tobacco consumption During initial health period During 1 st health period During 2 nd health period During 3 rd health period .09 .23 .22 .21 .29 .42 .42 .41 0 0 0 0 1 1 1 1 .08 .24 .23 .22 .10 .22 .21 .20 -.03*** .03** .02 .02 .10 .23 .22 .21 .08 .23 .22 .21 .02 .01 -.00 -.00 .09 .24 .23 .21 .10 .21 .20 .19 -.01 .03** .03* .02 Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference significant at the 10% level. Standard errors in italics. The average number of chronic diseases in the whole sample before labour market entry is . In the future physically treated population, this number is (which is significantly lower than for the future control group, i.e., at the 1% level). Such a difference at baseline in health statuses between future treated and control groups does not exist in the psychosocial sample. Field: Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006. Table 8 : Working conditions and chronic diseases description ( ) 8 The physically treated population faced nearly years of physical burden when their control group only faced one and a half. This difference of years is significant at the 1% level. For chronic diseases, the sample faced an average of cancer from the beginning of their lives to the date of the survey. Field:Population aged 42-74 in 2006 and present from to . 7 th iteration. Unmatched sample. Variable Mean Std. error Min Max Physical sample Treated Control Diff. Psychosocial sample Treated Control Diff. Global sample Treated Control Diff. 
Working conditions Night work 1.34 5.41 0 32 2.58 .23 2.35*** 1.88 .92 .95*** 1.87 .19 1.68*** Repetitive work 4.35 9.07 0 40 7.88 1.20 6.68*** 5.72 3.31 2.40*** 5.86 1.17 4.68*** High physical load 8.31 12.85 0 46 15.80 1.59 14.20*** 10.94 6.27 4.67*** 11.60 1.31 10.28*** Hazardous materials 4.60 9.99 0 41 8.87 .76 8.11*** 5.77 3.69 2.08*** 6.43 .69 5.74*** Lack of skill usage 1.50 4.80 0 25 1.86 1.17 .69*** 2.71 .56 2.15*** 1.92 .61 1.31*** Work under pressure 3.76 8.51 0 38 5.45 2.24 3.20*** 7.18 1.11 6.07*** 5.15 0.80 4.35*** Tension with public 1.24 5.01 0 29 1.52 .98 .53*** 2.46 .29 2.17*** 1.71 .22 1.49*** Lack of recognition 3.72 8.45 0 40 5.41 2.21 3.20*** 7.22 1.01 6.21*** 5.11 .77 4.34*** Work/private life imbalance 1.43 5.41 0 31 1.90 1.01 .89*** 2.82 .35 2.47*** 1.98 .26 1.72*** Tensions with colleagues 1.34 5.41 0 32 .37 .31 .06 .59 .14 .45*** .42 .16 .26*** Chronic diseases Cardiovascular .38 .54 0 3 .38 .38 .01 .39 .37 .02 .38 .38 .01 Cancer .09 .35 0 3 .06 .11 -.04*** .08 .09 -.01 .07 .11 -.04** Pulmonary .16 .43 0 4 .19 .13 .07*** .16 .16 .01 .17 .13 .05** ENT .12 .41 0 3 .13 .11 .02 .13 .12 .01 .13 .12 .01 Digestive .16 .46 0 4 .17 .15 .02 .17 .15 .02 .17 .15 .02 Mouth/teeth .01 .08 0 2 .01 .01 .00 .01 .01 .00 .01 .00 .00 Bones/joints .42 .67 0 3 .49 .36 .12*** .44 .40 .04 .44 .39 .05* Genital .08 .32 0 2 .08 .08 -.00 .08 .08 -.00 .08 .08 .00 Endocrine/metabolic .26 .52 0 2 .26 .27 -.00 .23 .28 -.04* .25 .29 -.04 Ocular .09 .33 0 3 .08 .10 -.02 .09 .09 .01 .08 .10 -.02 Psychological .19 .51 0 4 .18 .20 -.02 .24 .16 .08*** .20 .17 .03* Neurological .07 .31 0 2 .07 .07 .01 .08 .07 .00 .07 .08 -.00 Skin .05 .26 0 1 .05 .05 -.00 .05 .05 .01 .05 .05 .00 Addiction .02 .17 0 2 .02 .02 -.00 .02 .02 .00 .02 .02 -.00 Other .14 .44 0 4 .14 .14 -.00 .12 .15 -.03* .14 .13 .01 Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference significant at the 10% level. Standard errors in italics. The individuals present in the sample faced an average of years of exposure to a high physical load at work. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006. Table 9 : Matched sample description ( ) 9 Variable Physical sample Treated Control Diff. Psychosocial sample Treated Control Diff. Global sample Treated Control Diff. Health status Initial chronic diseases .08 .10 -.02 .10 .10 .00 .09 .12 -.02 First health period .63 .55 .07** .68 .54 .13*** .63 .56 .07** Second health period .72 .63 .09*** .78 .62 .16*** .72 .63 .08** Third health period .82 .72 .10*** .89 .72 .17*** .83 .74 .09** Demography Entry year at work 1962 1962 -.08 1963 1963 .01 1963 1963 -.04 Men .63 .63 0 .54 .54 0 .56 .56 0 Women .37 .37 0 .46 .46 0 .44 .44 0 Age 60.02 60.31 -.28 59.82 59.61 .21 59.59 59.64 -.05 No diploma .15 .15 0 .13 .13 0 .11 .11 0 Inf. education .72 .72 0 .65 .65 0 .70 .70 0 Bachelor .06 .06 0 .10 .10 0 .09 .09 0 Sup. education .05 .05 0 .11 .11 0 .10 .10 0 Childhood Problems with relatives .45 .45 0 .46 .46 0 .41 .41 0 Violence .07 .07 0 .07 .07 0 .04 .04 0 Severe health problems .10 .10 0 .10 .10 0 .09 .09 0 Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference significant at the 10% level. After matching, there is no significant difference between the future treated and control groups in terms of initial mean number of chronic diseases for physical, psychosocial and global samples. Field: Population aged 42-74 in 2006 and present from to . 7 th threshold. 
Matched (weighted) sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006. Table 10 : Matched difference-in-differences results ( to ), physical treatment 10 Treatment Baseline Diff. Follow-up Diff. Diff.-in-Diff. Mean chronic N % matched Sex Coeff. Std. Err. Coeff. Std. Err. Coeff. Std. Err. diseases in treat. (treat./tot.) (treat./contr.) : being exposed to at least 12 years of single exposures or 6 years of multiple exposures Men First health period .012 .069 .036 .065 .488 Second health period -.024 .020 .012 .050 .036 .068 .500 1908/3212 Third health period Women .024 .066 .048 .047 .562 90% / 88% First health period .086 .056 .100* .052 .439 Second health period -.014 .019 .087 .058 .101** .043 .496 1226/3044 Third health period .097* .051 .111** .048 .522 : being exposed to at least 14 years of single exposures or 7 years of multiple exposures Men First health period .016 .072 .038 .070 .497 Second health period -.022 .019 .017 .074 .039 .073 .561 1890/3196 Third health period Women .024 .076 .046 .072 .620 90% / 88% First health period .134*** .055 .148** .058 .597 Second health period -.014 .020 .142** .060 .156*** .053 .653 1162/3036 Third health period .155** .067 .169** .066 .762 : being exposed to at least 16 years of single exposures or 8 years of multiple exposures Men First health period .024 .075 .047 .074 .607 Second health period -.023 .017 .032 .076 .055 .075 .681 1890/3226 Third health period Women .066 .078 .089 .077 .815 91% / 88% First health period .178*** .068 .185*** .064 .769 Second health period -.007 .018 .192*** .073 .199*** .069 .862 1128/3042 Third health period .196** .081 .203*** .076 .959 : being exposed to at least 18 years of single exposures or 9 years of multiple exposures Men First health period .063 .069 .076* .052 .736 Second health period -.013 .017 .84 .070 .097** .054 .833 1820/3224 Third health period Women .87 .076 .100** .055 .946 92% / 87% First health period .193*** .072 .193** .079 .904 Second health period -.000 .019 .210*** .078 .210*** .074 .970 1064/3022 Third health period .221** .083 .221*** .068 1.044 : being exposed to at least 20 years of single exposures or 10 years of multiple exposures Men First health period .80 .064 .087** .051 .764 Second health period -.007 .016 .110* .066 .117** .060 .871 1694/3232 Third health period Women .113* .070 .120*** .060 .986 92% / 86% First health period .225*** .075 .228*** .082 .909 Second health period -.003 .019 .229*** .086 .232*** .077 .961 970/2976 Third health period .246*** .081 .249*** .070 1.045 Table 11 : Matched difference-in-differences results ( to ), psychosocial treatment 11 Treatment Baseline Diff. Follow-up Diff. Diff.-in-Diff. Mean chronic N % matched Sex Coeff. Std. Err. Coeff. Std. Err. Coeff. Std. Err. diseases in treat. (treat./tot.) (treat./contr.) 
: being exposed to at least 12 years of single exposures or 6 years of multiple exposures Men First health period .018 .039 .016 .035 .357 Second health period .014 .016 .046 .041 .032 .037 .408 1560/3318 Third health period Women .045 .045 .031 .042 .432 89% / 93% First health period .037 .053 .040 .048 .380 Second health period -.003 .024 .053 .054 .056 .046 .419 1354/3068 Third health period .064 .056 .067 .044 .445 : being exposed to at least 14 years of single exposures or 7 years of multiple exposures Men First health period .089* .043 .080** .040 .464 Second health period .009 .016 .090* .046 .081** .040 .521 1534/3288 Third health period Women .139*** .047 .130*** .045 .632 90% / 91% First health period .035 .053 .047 .051 .516 Second health period -.012 .024 .053 .058 .065 .045 .569 1310/3072 Third health period .055 .062 .067 .056 .660 : being exposed to at least 16 years of single exposures or 8 years of multiple exposures Men First health period .117** .049 .112** .046 .613 Second health period .005 .016 .118** .056 .113** .056 .664 1496/3320 Third health period Women .139** .066 .134** .067 .734 90% / 93% First health period .151*** .059 .156*** .055 .743 Second health period -.005 .023 .155*** .065 .160*** .063 .867 1272/3142 Third health period .157** .072 .172*** .061 .969 : being exposed to at least 18 years of single exposures or 9 years of multiple exposures Men First health period .123** .050 .111** .047 .671 Second health period .012 .017 .131** .067 .119** .048 .696 1410/3290 Third health period Women .161** .069 .149** .069 .830 91% / 92% First health period .179*** .065 .181** .079 .881 Second health period -.002 .023 .204*** .072 .206*** .068 .963 1192/3106 Third health period .218*** .081 .220*** .061 1.058 : being exposed to at least 20 years of single exposures or 10 years of multiple exposures Men First health period .127*** .053 .116** .052 .714 Second health period .011 .017 .133** .073 .122** .050 .730 1274/3272 Third health period Women .154*** .074 .143*** .053 .861 91% / 91% First health period .206*** .066 .209*** .078 .917 Second health period -.003 .023 .222*** .072 .225*** .067 1.015 1110/3098 Third health period .230*** .081 .233*** .061 1.125 Table 12 : Matched difference-in-differences results ( to ), global treatment 12 Treatment Baseline Diff. Follow-up Diff. Diff.-in-Diff. Mean chronic N % matched Sex Coeff. Std. Err. Coeff. Std. Err. Coeff. Std. Err. diseases in treat. (treat./tot.) (treat./contr.) 
: being exposed to at least 12 years of single exposures or 6 years of multiple exposures Men First health period -.003 .067 .023 .066 .391 Second health period -.026 .022 -.003 .070 .023 .069 .401 2256/3002 Third health period Women .017 .053 .043 .049 .434 82% / 94% First health period .024 .056 .025 .051 .386 Second health period -.001 .023 .032 .054 .033 .047 .438 1850/3018 Third health period .034 .056 .035 .049 .473 : being exposed to at least 14 years of single exposures or 7 years of multiple exposures Men First health period -.019 .073 .013 .073 .431 Second health period -.032 .021 -.010 .074 .022 .075 .491 2192/2962 Third health period Women .025 .076 .057 .076 .589 80% / 94% First health period .067 .057 .076 .054 .527 Second health period -.009 .021 .078 .054 .087 .050 .586 1734/2978 Third health period .089 .063 .098* .056 .688 : being exposed to at least 16 years of single exposures or 8 years of multiple exposures Men First health period .018 .038 .049 .067 .588 Second health period -.031 .020 .038 .070 .069 .069 .671 2160/2978 Third health period Women .049 .074 .80 .073 .804 81% / 94% First health period .143*** .071 .148*** .067 .740 Second health period -.005 .020 .157*** .058 .162*** .054 .859 1710/3010 Third health period .167*** .063 .173*** .059 .972 : being exposed to at least 18 years of single exposures or 9 years of multiple exposures Men First health period .058 .066 .080 .064 .703 Second health period -.022 .019 .065 .071 .087 .069 .772 2126/3024 Third health period Women .114 .074 .136* .073 .934 82% / 94% First health period .138* .083 .139* .081 .840 Second health period -.001 .019 .170** .071 .171** .068 .936 1652/3034 Third health period .180*** .064 .181*** .061 1.044 : being exposed to at least 20 years of single exposures or 10 years of multiple exposures Men First health period .097 .063 .100* .055 .724 Second health period -.003 .017 .099 .067 .102* .056 .777 2146/3172 Third health period Women .113* .071 .116* .068 .925 86% / 93% First health period .191** .077 .190** .075 .885 Second health period .001 .019 .206*** .061 .205*** .058 .992 1586/3072 Third health period .210*** .067 .209*** .064 1.095 Table 13 : General descriptive statistics 13 Variable Mean Std. error Min. Max. N Mean Retirees Mean retirees non- Diff. Retirement Retired .42 .49 0 1 2071 - - - Aged 55 or more .74 .44 0 1 3629 .98 .55 -.44*** Aged 60 or more .45 .50 0 1 2235 .90 .13 -.77*** Aged 65 or more .18 .38 0 1 876 .40 .01 -.39*** Health status Poor perceived health .37 .48 0 1 1802 .38 .36 -.02* Chronic diseases .45 .50 0 1 2200 .50 .40 -.10*** Activity limitations .25 .43 0 1 1219 .26 .24 -.02* Anxiety disorder .07 .25 0 1 321 .05 .08 .02*** Depressive episode .08 .27 0 1 380 .06 .09 .03*** Demographics Men .46 .50 0 1 2254 .51 .42 -.08*** Age 58.79 .40 50 69 4932 63.47 55.40 -8.06*** No education .09 .28 0 1 421 .08 .09 .01 Primary/secondary .56 .50 0 1 2782 .62 .52 -.09*** Equivalent to French BAC .14 .34 0 1 679 .12 .15 .04*** Superior .19 .40 0 1 957 .17 .21 .04*** One or more children .91 .29 0 1 4466 .91 .90 -.01 Employment Public sector .18 .39 0 1 898 .12 .23 .11*** Private sector .36 .48 0 1 1772 .20 .47 .26*** Self-employed .07 .26 0 1 348 .04 .10 .06*** Career in long-term jobs .79 .41 0 1 3881 .84 .75 -.10*** Stable career .59 .49 0 1 2887 .53 .62 .10*** Poor physical working cond. .22 .41 0 1 1010 .29 .17 -.12*** Poor psychosocial working cond. 
.16 .37 0 1 731 .20 .13 -.07*** Mechanisms Daily social activities .42 .49 0 1 2088 .48 .38 -.10*** Sport .42 .49 0 1 2063 .45 .40 -.05*** Tobacco consumption .22 .42 0 1 1034 .16 .27 .11*** Risky alcohol consumption .24 .42 0 1 1085 .25 .23 -.02 Overweight .56 .50 0 1 2540 .60 .52 -.09*** Note: ***: significant at 1%, **: significant at 5%, *: significant at 10%. Reading: Retirees are 38% to report poor perceived health and 36% of non-retirees are in good perceived health. This difference of -2 percentage points is significant at the 10% level. Field: Santé et Itinéraire Professionnel survey, individuals aged 50-69 in 2010. Table 14 : Retirement and health status 14 When taking into account the endogenous nature of the retirement decision (i.e. reverse causality between health conditions and retirement as well as omitted variables related to these two dimensions), the results are thereby radically changed. Retirement indeed appears to have a fairly strong negative effect on the probability of reporting activity limitations ( Variable Poor SAH Probit Biprobit Probit Biprobit Probit Chronic diseases Activity limitations Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired .00 .02 -.07 .05 .04 .02 -.02 .05 .00 .02 -.09** .04 -.02 .01 -.11*** .03 -.01 .01 -.10*** .03 Demographics Men .00 .00 -.00 -.00 .02 .02 -.04*** -.04*** -.03*** -.03*** (ref.: women) .01 .01 .02 .02 .01 .01 .01 .01 .01 .01 Age .06** .03 .06** .03 .03 .03 .02 .03 .07*** .03 .07*** .03 .03 .02 .03* .02 .03** .02 .04*** .02 Age² -.01** .00 -.01* .00 -.00 .00 -.00 .00 -.01** .00 -.01** .00 -.01* .00 -.00 .00 -.01** .00 -.01* .01 Children -.03 -.03 -.03 -.03 .01 .01 .03* .03* .03* .03* (ref.: none) .02 .02 .03 .03 .02 .02 .01 .02 .02 .02 Education < BAC -.11*** -.11*** -.03 -.03 -.04* -.04* -.02 -.02 -.04*** -.04*** (ref.: no dipl.) .02 .02 .03 .03 .02 .02 .01 .01 .01 .01 = BAC -.14*** -.14*** -.03 -.03 -.04 -.04 -.01 -.00 -.04** -.03** (ref.: no dipl.) .02 .03 .03 .03 .03 .03 .01 .02 .01 .02 > BAC -.26*** -.26*** -.08*** -.08** -.09*** -.09*** -.03** -.04** -.07*** -.07*** (ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02 Employment Public sector -.02 -.02 -.01 -.01 -.05** -.05** .01 .01 .01 .01 (ref.: private) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Self-employed -.07** -.08*** -.04 -.05 -.05* -.06** -.02 -.04** -.04* -.05** (ref.: private) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02 Long-term jobs -.12*** -.11*** -.08*** -.08*** -.10*** -.09*** -.02** -.01 -.04*** -.03*** (ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Stable career -.02 -.02 -.01 -.01 -.02* -.02* .00 .01 -.01* -.01 (ref.: unstable) .01 .01 .02 .02 .01 .01 .01 .01 .01 .01 Physical strains .11*** .02 .12*** .02 .07*** .02 .07*** .02 .09*** .02 .10*** .02 .02*** .01 .03*** .01 .02* .01 .02** .01 Psycho. strains .07*** .02 .07*** .02 .06*** .02 .06*** .02 .04** .02 .04** .02 .03*** .01 .04*** .01 .04*** .01 .04*** .01 Rho .14 .09 .10 .08 .21** .08 .47*** .10 .41*** .12 Hausman test 16 2.33 1.71 6.75*** 10.13*** 10.13*** N 4610 Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010. 
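As a reading aid for Table 14 (and the heterogeneity tables that follow), the "Biprobit" columns correspond to a recursive bivariate probit of the following schematic form; the notation is assumed for illustration, with X the covariates, Z the age-based eligibility instrument and ρ the reported "Rho".

```latex
% Schematic recursive bivariate probit (illustrative notation)
\begin{align*}
H_i^{*} &= X_i\beta + \delta R_i + \varepsilon_{1i}, & H_i &= \mathbf{1}\{H_i^{*} > 0\} \quad \text{(health outcome)}\\
R_i^{*} &= X_i\gamma + \lambda Z_i + \varepsilon_{2i}, & R_i &= \mathbf{1}\{R_i^{*} > 0\} \quad \text{(retirement)}\\
(\varepsilon_{1i}, \varepsilon_{2i}) &\sim \mathcal{N}\!\left(0, \begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}\right)
\end{align*}
```

Under this reading, a significant ρ signals that retirement and the health outcome share unobserved determinants, which is what motivates preferring the bivariate estimates over the simple probit when the Hausman-type test rejects exogeneity.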
Table 15 : Heterogeneity analysis -Male population 15 Variable Poor SAH Probit Biprobit Probit Biprobit Probit Chronic diseases Activity limitations Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired -.06 .03 -.08 .07 .01 .03 -.04 .07 -.04 .03 -.11* .06 -.02 .01 -.11*** .04 -.02 .02 -.13*** .05 Demographics Age .13*** .04 .14*** .04 .08* .05 .08* .05 .09** .04 .10** .04 -.00 .02 .02 .03 .02 .02 .06* .03 Age² -.01*** -.01*** .00 .00 -.00 .00 -.00 .00 -.01** .00 -.01** .00 .00 .00 -.00 .00 -.00 .00 -.01* .00 Children -.00 -.00 -.05 -.05 .03 .03 .02 .03 .01 .01 (ref.: none) .03 .03 .04 .03 .03 .03 .02 .02 .02 .02 Education < BAC -.10*** -.09*** .02 .02 -.03 -.03 -.01 -.01 -.04*** -.04*** (ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .01 .01 .01 = BAC -.22*** -.22*** -.01 -.01 -.08** -.08** -.03 -.04* -.05*** -.06*** (ref.: no dipl.) .04 .04 .04 .04 .04 .04 .02 .02 .02 .02 > BAC -.27*** -.27*** -.04 -.04 -.13*** -.14*** -.02 -.03 -.06*** -.07*** (ref.: no dipl.) .04 .04 .04 .04 .04 .04 .02 .02 .02 .02 Employment Public sector -.07** -.07** -.05 -.05 -.06*** -.10*** -.01 -.01 -.00 -.01 (ref.: private) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02 Self-employed -.11*** -.11*** -.07* -.08* -.06* -.07** .01 -.01 -.02 -.04 (ref.: private) .04 .04 .04 .04 .03 .04 .02 .02 .02 .02 Long-term jobs -.15*** -.15*** -.12*** -.12*** -.10*** -.09*** -.02* -.02 -.05*** -.04** (ref.: short term) .04 .04 .04 .04 .03 .03 .01 .02 .01 .02 Stable career -.03 -.03 -.02 -.02 -0.4** -.04** -.00 .00 -.01 -.00 (ref.: unstable) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Physical strains .09*** .02 .09*** .02 -.04 .03 .04 .03 -.07*** .02 .07*** .02 .02** .01 .03** .01 .01 .01 .02 .01 Psycho. strains .07*** .03 .07*** .03 .07** .03 .08** .03 -.04 .02 -.04 .02 .02* .01 .02* .01 .04*** .01 .04*** .01 Rho .05 .13 .09 .11 .17 .12 .60*** .15 .61*** .17 Hausman test .10 .63 1.81 5.40*** 5.76*** N 2140 Table 16 : Heterogeneity analysis -Female population 16 Variable Poor SAH Probit Biprobit Probit Biprobit Probit Chronic diseases Activity limitations Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired .04 .03 -.08 .07 .04 .03 -.03 .07 -.04 .03 -.06 .06 -.01 .02 -.06** .04 -.01 .02 -.09** .04 Demographics Age .01 .04 -.00 .04 -.01 .04 -.02 .04 .06* .04 .05 .04 .05** .02 .04* .03 .05* .03 .04 .03 Age² -.00 .00 .00 .00 .00 .00 .00 .00 -.01* .00 -.00 .00 -.01** .00 -.00 .00 -.01* .00 -.00 .00 Children -.04 -.04 .01 .01 .01 -.01 .03 .04 .06** .06** (ref.: none) .03 .03 .04 .04 .03 .03 .02 .02 .03 .03 Education < BAC -.13*** -.13*** -.09** -.09** -.05 -.04 -.03 -.02 -.03* -.03 (ref.: no dipl.) .03 .03 .04 .04 .03 .03 .02 .02 .02 .02 = BAC -.09** -.09** -.06 -.06 -.01 -.01 .01 .01 -.02 -.01 (ref.: no dipl.) .04 .04 .04 .04 .05 .04 .02 .02 .02 .02 > BAC -.27*** -.27*** -.12*** -.12*** -.07** -.07* -.04* -.04* -.06*** -.06*** (ref.: no dipl.) .04 .04 .04 .04 .03 .03 .02 .02 .02 .02 Employment Public sector .01 .01 .02 .01 -.02 -.02 .02 .02 .01 .01 (ref.: private) .03 .03 .03 .03 .02 .02 .01 .02 .02 .02 Self-employed -.01 -.03 -.00 -.01 -.04 -.05 -.09** -.10** -.06* -.07** (ref.: private) .05 .05 .05 .05 .04 .04 .04 .04 .04 .04 Long-term jobs -.12*** -.10*** -.07*** -.06*** -.11*** -.10*** -.02* -.01 -.04*** -.03** (ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Stable career -.02 -.02 -.01 -.00 -.01 -.01 .01 .01 -.02 -.01 (ref.: unstable) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Physical strains .13*** .03 .14*** .03 -.10*** ..03 .10*** .03 .11*** .02 -11*** .02 .02 .02 .03 .02 .02 02 .03 .02 Psycho. 
strains .07** .03 .07** .03 .05* .03 .05* .03 -.03 .02 -.04 .02 .05*** .02 .05*** .02 .04** .02 .04** .02 Rho .22** .12 .13 .11 .20* .12 .34** .14 .30* .15 Hausman test 3.60 1.13 .15 2.08 5.33*** N 2470 Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Women aged 50-69 in 2010. Table 17 : Heterogeneity analysis -Low education attainment 17 Variable Poor SAH Probit Biprobit Probit Biprobit Probit Chronic diseases Activity limitations Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired -.01 .03 -.08 .06 .03 .03 .02 .06 -.01 .03 -.13** .05 -.02 .01 -.08** .03 -.01 .02 -.07** .04 Demographics Men .04** .05** .02 .02 .05*** .05*** -.03*** -.03*** -.03** -.02** (ref.: women) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Age .05 .04 .05 .04 .02 .04 .03 .04 .07** .03 .08** .03 .02 .02 .02 .02 .04* .02 .04* .02 Age² -.00 .00 -.00 .00 -.00 .00 -.00 .00 -.01** .00 -.01** .00 -.00 .00 -.00 .00 -.01* .00 -.01 .01 Children -.01 -.01 -.04 -.04 .03 .03 .03 .03* .03 .04* (ref.: none) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02 Employment Public sector -.02 -.02 .01 .01 -.07*** -.07*** .01 .01 .01 .01 (ref.: private) .03 .03 .03 .03 .03 .03 .01 01 .01 .02 Self-employed -.08** -.09** -.03 -.03 -.03 -.04 -.01 -.02 -.02 -.03 (ref.: private) .04 .04 .04 .04 .04 .04 .02 .02 .02 .03 Long-term jobs -.15*** -.15*** -.12*** -.12*** -.13*** -.12*** -.03*** -.03** -.06*** -.06*** (ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Stable career -.03* -.03* -.01 -.01 -.02 -.01 .00 .01 -.02 -.01 (ref.: unstable) .02 .02 .02 .02 .02 .02 .00 .01 .01 .01 Physical strains .13*** .02 .13*** .02 .05** .02 .05** .02 .10*** .02 .10*** .02 .03*** .01 .03*** .01 .03*** .01 .03*** .01 Psycho. strains .08*** .02 .08*** .02 .07*** .03 .07*** .03 .02 .02 .02 .02 .03** .01 .03** .01 .04*** .01 .04*** .01 Rho .12 .10 .03 .09 .25** .10 .32** .14 .31** .13 Hausman test 1.81 .04 9.00*** 4.50*** 3.00 N 3045 Table 18 : Heterogeneity analysis -High education attainment 18 Variable Poor SAH Probit Biprobit Probit Biprobit Probit Chronic diseases Activity limitations Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired .02 .03 -.03 .09 .01 .04 -.15* .09 .04 .03 .03 .08 -.01 .02 -.14*** .05 -.03 .02 -.22*** .06 Demographics Men -.06** -.06** -.03 -.03 -.04* -.04* -.07*** -.07*** -.04*** -.05*** (ref.: women) .02 .02 .03 .03 .02 0.2 .02 .02 .01 .02 Age .06 .05 .05 .05 .01 .05 -.01 .06 .05 .04 .05 .05 .04 .03 .04 .03 .02 .03 .01 .03 Age² -.00 .00 -.00 .00 -.00 .00 .00 .00 -.00 .00 -.00 .00 -.01* .00 -.00 .00 -.00 .00 .00 .00 Children -.04 -.04 -.01 -.01 .00 .00 .03 .03 .02 .03 (ref.: none) .04 .04 .04 .04 .03 .03 .02 .03 .02 .03 Employment Public sector -.05* -.05* -.05* -.05* -.03 -.03 -.01 -.01 -.01 -.01 (ref.: private) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02 Self-employed -.05 -.06 -.07 -.10* -.04 -.06 -.03 -.07** -.06** -.12*** (ref.: private) .04 .05 .05 .05 .03 .04 .03 .03 .03 .04 Long-term jobs -.09*** -.09*** -.00 .02 -.05** -.04 -.00 .02 -.02 .00 (ref.: short term) .03 .03 .04 .04 .02 .03 .02 .02 .02 .02 Stable career -.01 -.01 -.03 -.03 -.05** -.05** -.00 -.00 .01 -.01 (ref.: unstable) .02 .02 .03 .03 .02 .02 .01 .01 .01 .02 Physical strains .07* .04 .08* .04 .15*** .05 .17*** .05 .06* .04 .07* .04 -.01 .02 .01 .03 -.03 .03 -.01 .03 Psycho. 
Strains .06* .03 .06* .03 .05 .04 .05 .04 .08** .03 .08** .03 .06*** .02 .06*** .02 .04** .02 .05** .02 Rho .10 .17 .28* .15 .02 .17 .57*** .15 .77*** .14 Hausman test .35 3.94** .02 8.05*** 11.28*** N 1565 Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field : Santé et Itinéraire Professionnel survey. High-educated individuals aged 50-69 in 2010. Table 19 : Heterogeneity analysis -Highly physically demanding career 19 Variable Poor SAH Probit Biprobit Probit Biprobit Chronic diseases Activity limitations Probit Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired -.08 .05 -.08 .05 -.10* .05 -.13* .08 -.09* .05 -.15* .09 -.08*** .03 -.17** .08 -.04 .03 -.11** .06 Demographics Men -.02 -.02 -.02 -.02 -.01 -.01 -.04** -.04** -.03* -.03* (ref.: women) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02 Age .12* .07 .09 .07 .13* .07 .14** .07 .11* .06 .12 .07 -.04 .04 -.02 .04 .01 .04 .02 .04 Age² -.01* .00 -.00 .00 -.01 .00 -.01 .00 -.01* .00 -.01* .01 .00 .00 .00 .00 -.00 .00 -.00 .00 Children -.01 -.00 -.02 -.02 .06 .06 .01 .01 .01 .01 (ref.: none) .06 .06 .06 .06 .05 .05 .03 .03 .03 .04 Education < BAC -.07 -.07 .02 .02 -.00 .00 -.02 -.02 -.03 -.03 (ref.: no dipl.) .04 .04 .04 .04 .04 .04 .02 .02 .02 .02 = BAC -.17** -.18** .03 .13* -.01 -.01 -.05 -.04 -.10** -.09** (ref.: no dipl.) .07 .07 .08 .07 .07 .07 .04 .04 .05 .05 > BAC -.30*** -.30*** .03 .03 -.12 -.12 -.05 -.05 -.13** -.13** (ref.: no dipl.) .08 .08 .08 .08 .08 .08 .05 .05 .06 .06 Employment Public sector .03 .03 .01 .01 -.13** -.13** .03 .03 .05 .05 (ref.: private) .06 .06 .06 .06 .06 .06 .03 .03 .03 .03 Self-employed -.05 -.04 -.16* -.16* -.02 -.03 -.01 -.02 .02 .01 (ref.: private) .08 .08 .08 .08 .08 .08 .05 .05 .05 .05 Long-term jobs -.10** -.11** -.10** -.10** -.12*** -.11*** -.04 -.03 -.06** -.05** (ref.: short term) .05 .05 .05 .05 .04 .04 .02 .02 .02 .02 Stable career -.01 -.02 -.05 -.04 .01 -.01 -.03 -.04* -.00 .00 (ref.: unstable) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02 Rho -.20 .16 .06 .17 .13 .17 .41* .25 .31* .17 Hausman test .00 .23 .64 1.47 1.81 N 1010 Table 20 : Heterogeneity analysis -Lowly physically demanding career 20 Variable Poor SAH Probit Biprobit Probit Biprobit Chronic diseases Activity limitations Probit Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired .02 .02 -.07 .06 .07*** .03 .02 .06 .03 .02 -.07 .05 .00 .01 -.08*** .03 -.00 .01 -.09** .04 Demographics Men .00 .01 .00 .00 .02 .02 -.05*** -.05*** -.03*** -.03*** (ref.: women) .02 .02 .02 .02 .01 .01 .01 .01 .01 .01 Age .05 .03 .04 .03 .00 .03 .00 .04 .06** .03 .06* .03 .05*** .02 .05** .02 .05** .02 .04** .02 Age² -.00 .00 -.00 .00 .00 .00 .00 .00 -.01** .00 -.01* .00 -.01*** .00 -.01** .01 -.01** .00 -.01** .00 Children -.03 -.03 -.03 -.03 .00 .00 .03* .03* .03* .04* (ref.: none) .03 .03 .03 .03 .02 .02 .01 .02 .02 .02 Education < BAC -.14*** -.13*** -.06* -.05* -.06** -.05** -.00 -.01 -.04*** -.04*** (ref.: no dipl.) .03 .03 .03 .03 .02 .02 .02 .01 .01 .01 = BAC -.15*** -.14*** -.07* -.07* -.05* -.05* .00 .00 -.03* -.03* (ref.: no dipl.) .03 .03 .04 .04 .03 .03 .02 .02 .02 .02 > BAC -.27*** -.27*** -.10*** -.10*** -.10*** -.10*** -.03* -.03* -.06*** -.06*** (ref.: no dipl.) 
.03 .03 .03 .03 .03 .03 .02 .02 .02 .01 Employment Public sector -.03 -.03 -.02 -.02 -.04* -.04* .00 .00 -.00 -.00 (ref.: private) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Self-employed -.07** -.09*** -.03 -.04 -.05* -.06** -.03 -.04** -.05** -.06*** (ref.: private) .03 .03 .03 .03 .03 .02 .02 .02 .02 .0 Long-term jobs -.12*** -.10*** -.08*** -.07*** -.09*** -.08*** -.01 -.01 -.03*** -.02** (ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Stable career -.03* -.03 -.00 -.00 -.03** -.03** -.01 -.00 -.02* -.02* (ref.: unstable) .02 .02 .02 .02 .01 .01 .01 .01 .01 .01 Rho .26** .10 .08 .09 .23** .10 .43*** .12 .39** .15 Hausman test 2.53 .93 4.76*** 8.00*** 5.40*** N 3600 Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals who faced a lowly physically demanding career, aged 50-69 in 2010. Despite the loss in accuracy of the estimations due to a significantly lower sample size, individuals having faced a physically strenuous career clearly experience the most positive effects of retiring on their health condition, as every indicators but self-assessed health status are impacted (resp. , , and decreases in the probability of declaring chronic diseases, activity limitations, GAD and MDE). When it comes to individuals with lower levels of physical exposures, only mental health is improved ( and for GAD and MDE). Table 21 : Heterogeneity analysis -Highly psychosocially demanding career 21 Variable Poor SAH Probit Biprobit Probit Biprobit Chronic diseases Activity limitations Probit Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired -.12** .05 -.21** .11 -.15*** .05 -.35*** .12 -.11** .05 -.19* .12 -.04 .03 -.34*** .10 -.02 .04 -.23** .09 Demographics Men .01 .01 .02 .02 .02 .02 -.06*** -.06** -.03 -.03 (ref.: women) .04 .04 .04 .04 .03 .03 .02 .02 .03 .02 Age .24*** .08 .26*** .08 .23*** .08 .25*** .08 .24*** .07 .25*** .08 .01 .05 .11 .08 .10* .06 .16** .07 Age² -.01*** -.01*** -.01*** .00 .00 .00 -.01*** .00 -.01*** .00 -.01*** .00 -.00 .00 -.00 .00 -.01* .00 -.01** .00 Children -.01 -.01 .03 .03 -.02 -.02 .05 .04 .02 -.02 (ref.: none) .06 .06 .07 .06 .06 .06 .05 .05 .05 .05 Education < BAC -.09 -.08 -.02 .01 -.00 .01 .02 .07 .01 .02 (ref.: no dipl.) .06 .06 .06 .06 .05 .06 .04 .05 .04 .04 = BAC -.19*** -.18** .01 .03 -.00 .01 .05 .08* -.01 .01 (ref.: no dipl.) .09 .07 .07 .07 .07 .07 .04 .05 .05 .05 > BAC -.32*** -.31*** -.09 -.08 -.06 -.05 -.00 .00 -.06 -.05 (ref.: no dipl.) .07 .07 .07 .07 .07 .07 .04 .05 .05 .05 Employment Public sector -.05 -.06 -.11* -.14** -.20*** -.21*** .00 -.04 -.03 -.07 (ref.: private) .06 .06 .06 .06 .06 .06 .04 .04 .04 .04 Self-employed -.09 -.07 -.14 -.14 -.01 -.01 .02 .03 .01 -.00 (ref.: private) .10 .10 .10 .10 .09 .09 .06 .06 .03 .03 Long-term jobs -.11** -.10* -.09 -.06 -.10** -.09* -.03 -.01 -.06* -.04 (ref.: short term) .05 .05 .05 .05 .05 .05 .03 .03 .03 .03 Stable career .01 .02 -.04 -.03 .03 -.03 -.05 -.07*** .01 .02 (ref.: unstable) .04 .04 .04 .04 .04 .04 .02 .02 .03 .03 Rho .16 .21 .38* .21 .16 .23 .93*** .20 .70** .23 Hausman test .84 3.36 .54 9.89*** 6.78*** N 731 Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals who faced a highly psychosocially demanding career, aged 50-69 in 2010. 
Table 22 : Heterogeneity analysis -Lowly psychosocially demanding career 22 Variable Poor SAH Probit Biprobit Probit Biprobit Chronic diseases Activity limitations Probit Biprobit Probit Biprobit Probit Biprobit GAD MDE Retired .03 .02 -.08 .05 .07*** .02 .03 .06 .03 .02 -.09* .05 -.01 .01 -.08*** .03 -.01 .01 -.09*** .03 Demographics Men .01 .02 -.00 .00 -.03* .03** -.04*** -.04*** -.03*** -.03*** (ref.: women) .02 .02 .02 .02 .01 .01 .01 .01 .01 .01 Age .03 .03 .03 .03 .00 .00 -.00 .03 .05* .03 .04 .03 .03* .02 .03 .02 .02 .02 .02 .02 Age² -.00 .00 -.00 .00 .00 .00 -.00 .00 -.00 .00 -.00 .00 -.01* .00 -.00 .00 -.00 .00 -.00 .00 Children -.03 -.03 -.03 -.03 .02 .02 .02 .02 .03* .03** (ref.: none) .03 .03 .03 .03 .02 .02 .01 .02 .02 .02 Education < BAC -.13*** -.12*** -.04 -.04 -.05** -.05** -.03** -.03** -.05*** -.05*** (ref.: no dipl.) .03 .03 .03 .03 .02 .02 .01 .01 .01 .01 = BAC -.16*** -.16*** -.05* -.05 -.07** -.07** -.02 -.02 -.05*** -.02*** (ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02 > BAC -.29*** -.29*** -.09*** -.09*** -.13*** -.13*** -.05*** -.05*** -.07*** -.08*** (ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02 Employment Public sector -.03 -.02 -.01 -.01 -.03* -.03* .01 .01 .01 .01 (ref.: private) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01 Self-employed -.07** -.09*** -.03 -.04 -.05* -.07** -.03 -.04** -.02 -.03* (ref.: private) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02 Long-term jobs -.11*** -.10*** -.09*** -.08*** -.10*** -.08*** -.02** -.01 -.04*** -.03*** (ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .011 Stable career -.03** -.03* -.00 -.01 -.04** -.03** -.01 -.00 -.02** -.02* (ref.: unstable) .02 .02 .02 .01 .01 .01 .01 .01 .01 .01 Rho .20** .09 .07 .09 .26*** .09 .39*** .12 .36** .14 Hausman test 5.76*** .50 6.86*** 6.13*** 8.00*** N 3879 Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals who faced a lowly psychosocially demanding career, aged 50-69 in 2010. Table 24 : Mechanisms -The effect of retirement on health-related risky behaviours 24 Variable Probit Tobacco Biprobit Probit Alcohol Biprobit Overweight Probit Biprobit Retired -.04** .02 -.08** .04 .04** .02 .08** .04 .05** .02 .12** .05 Demographics Men .08*** .09*** .26*** .26*** .19*** .19*** (ref.: women) .01 .01 .01 .01 .01 .01 Age .01 .03 .00 .03 .05** .03 .05** .03 .05* .03 .06* .03 Age² -.00 .00 -.00 .00 -.01** .00 -.01** .00 -.00 .00 -.01* .00 Children -.01 -.01 .00 .00 .01 .01 (ref.: none) .02 .02 .02 .02 .03 .03 Education < BAC -.03 -.03 .04 .04 -.01 -.01 (ref.: no dipl.) .02 .02 .02 .02 .03 .03 = BAC -.02 -.02 .04 .03 -.07** -.07** (ref.: no dipl.) .03 .03 .03 .03 .03 .03 > BAC -.06** -.06** .04 .03 -.15*** -.15*** (ref.: no dipl.) .03 .03 .03 .03 .03 .03 Employment Public sector .00 .00 -.01 -.01 -.04* -.04* (ref.: private) .02 .02 .02 .02 .02 .02 Self-employed -.01 -.01 .02 .03 -.02 -.01 (ref.: private) .03 .03 .02 .02 .03 .03 Long-term jobs -.05*** -.05** -.03* -.04** -.02 -.03 (ref.: short term) .02 .02 .02 .02 .02 .02 Stable career -.02 -.01 .01 .01 .01 .01 (ref.: unstable) .01 .01 .01 .01 .02 .02 Physical strains .03** .02 .04** .02 -.00 .02 -.00 .02 .07*** .02 .07*** .02 Psycho. strains .02 .02 .02 .02 .00 .02 .00 .02 -.02 .02 -.02 .02 Rho .07 .10 -.09 .09 -.13 .08 Hausman test 1.33 1.33 2.33 N 4610 Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. 
Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010.
(which is partly suggested by the Plan Psychiatrie et santé mentale 2011-2015 in France). In the long run, positive results can be expected from these frameworks, with both increased productivity in the workplace, greater career stability and an improved health condition for workers, likely to result in decreased healthcare expenditures at the state level (mental health-related expenditures currently represent around 3 to 4% of GDP because of decreased productivity, increased sick leaves and unemployment, according to the International Labour Organisation). Current work intensification and increased pressure on employees are both likely to make this problem even more topical in the coming years. At the European level, a European Pact for Mental Health and Well-being was established in 2008; it promotes mental health and well-being at work, as well as the need to help people suffering from mental health disorders to return to the labour market. Chapter 2 suggests that long exposures to detrimental physical and psychosocial working conditions can have a long-term impact on health status, through increased numbers of chronic diseases. The first significant increase being found after less than 10 years of exposure implies that work strains are relevant to health degradation from the very beginning of workers' careers. The results also suggest that psychosocial risk factors are very important in the determination of workers' health. While the Compte Pénibilité in France is a step in the right direction, by allowing workers exposed to objectively measured physical strains to follow training, to work part-time or to retire early, this study advocates that workers' feelings about their working conditions have effects on health of nearly equivalent magnitude, and thus that psychosocial strains should not be excluded from public policies even if they are intrinsically harder to quantify. At the European level, the European Pact for Mental Health and Well-being also focuses on improving work
Reading: 24.2% of workers declaring at least one mental disorder in 2006 report suffering from activity limitations against 51.5% in the unemployed population in 2006. Field: Santé et Itinéraire Professionnel survey, individuals reporting at least one mental disorder and aged 30-55 in 2006. Weighted and calibrated statistics.
Unemployed population in 2006 Mental Health, 2006 At least one mental disorder 5,9 22,2 11,6 21,0 No mental disorder 94,1 77,8 88,4 79,0 MDE 3,4 16,7 8,3 16,4 No MDE 96,6 83,3 91,7 83,6 GAD 3,5 13,2 6,6 13,1 No GAD 96,5 86,8 93,4 86,9 Individual characteristics, 2006 30-34 17,3 11,6 16,0 15,9 35-39 21,7 10,9 20,2 15,1 40-44 20,2 16,4 19,9 16,4 45-49 20,1 19,6 21,4 18,5 50-55 20,8 41,5 22,5 34,1 In a relationship 82,1 55,0 77,6 71,5 Single 17,9 45,0 22,4 28,5 At least one child 12,2 5,1 8,3 6,1 No child 87,8 94,9 91,7 93,9 No diploma 8,0 15,1 6,7 15,3 Primary 45,8 53,6 39,1 45,8 Equivalent to French baccalaureat 18,2 14,2 19,1 17,2 Superior 26,3 16,1 33,3 18,5 Job characteristics, 2006 Agricultural sector Industrial sector Services sector Private sector Public sector Self-employed Poor perceived health No chronic disease Chronic disease No activity limitation Activity limitations 9,0 21,0 70,0 66,7 19,1 10,9 47,2 52,8 56,6 43,4 75,8 24,2 3,1 9,1 87,7 58,9 29,1 6,6 27,1 72,9 39,1 60,9 48,5 51,5 Farmer Risky behaviours, 2006 4,7 1,2 Artisans Daily smoker 7,0 31,7 4,3 42,9 Manager Not a daily smoker 16,4 68,3 11,1 57,1 Intermediate Drinker at risk 24,1 29,2 22,2 29,6 Employee Not a drinker at risk 12,7 70,8 45,1 70,4 Blue collar Overweight 29,8 34,8 9,2 48,3 Part-time job Normal weight or underweight 3,0 65,2 30,7 51,7 Full time job Professional route 97,0 69,3 Majority of employment in long jobs General Health, 2006 Good perceived health Most of the professional route out of job 82,1 48,9 73,9 26,1 77,8 29,0 71,0 61,2 Poor perceived health Stable career path 17,9 51,1 66,7 22,2 44,0 38,8 No chronic disease Unstable career path 75,3 56,6 33,3 71,9 56,0 60,3 Chronic disease 24,7 43,4 28,1 39,7 No activity limitation 90,7 59,8 88,5 75,1 Activity limitations 9,3 40,2 11,5 24,9 Risky behaviours, 2006 Daily smoker 27,5 47,8 23,6 24,5 Not a daily smoker 72,5 52,2 76,4 75,5 Drinker at risk 46,2 42,2 13,6 13,1 Not a drinker at risk 53,8 57,8 86,4 86,9 Overweight 51,3 46,7 28,5 41,6 Normal weight or underweight 48,7 53,3 71,5 58,4 Professional route Table 27 : Attrition analysis -panel population (interviewed in 2006 and 2010) vs. attrition population (interviewed in 2006 and not in 2010) 27 Santé et Itinéraire Professionnel survey, employed individuals aged 30-55 in 2006. Weighted and calibrated statistics. Men (%) Women (%) Panel pop. Attrition pop. Panel pop. Attrition pop. Mental Health, 2006 At least one mental disorder 5,9 5,9 11,6 13,5 No mental disorder 94,1 94,1 88,4 86,5 MDE 3,4 4,4 8,3 9,0 No MDE 96,6 95,2 91,7 91,0 GAD 3,5 3,7 6,6 6,9 No GAD 96,5 96,3 93,4 93,1 Individual characteristics, 2006 30-34 17,3 18,9 16,0 15,3 35-39 21,7 21,5 20,2 23,5 40-44 20,2 21,3 19,9 21,6 45-49 20,1 17,8 21,4 18,6 50-55 20,8 20,5 22,5 21,0 In a relationship 82,1 71,7 77,6 61,8 Single 17,9 28,3 22,4 38,2 At least one child 12,2 23,8 8,3 18,4 No child 87,8 86,2 91,7 81,6 No diploma 8,0 8,0 6,7 7,8 Primary 45,8 46,7 39,1 40,4 Equivalent to French bac. 
18,2 14,8 19,1 21,0 Superior 26,3 29,1 33,3 29,4 Job characteristics, 2006 Agricultural sector 9,0 4,8 3,1 3,5 Industrial sector 21,0 16,6 9,1 8,2 Services sector 70,0 78,6 87,7 88,3 Private sector 66,7 65,2 58,9 60,2 Public sector 19,1 20,7 29,1 28,4 Self-employed 10,9 10,0 6,6 5,9 Farmer 4,7 1,4 1,2 1,2 Artisans 7,0 9,6 4,3 4,3 Manager 16,4 16,8 11,1 12,0 Intermediate 24,1 20,7 22,2 22,9 Employee 12,7 12,9 45,1 44,7 Blue collar 29,8 32,4 9,2 8,0 Part-time job 3,0 4,1 30,7 25,1 Full time job 97,0 95,9 69,3 75,0 General Health, 2006 Good perceived health 82,1 79,7 77,8 74,7 Poor perceived health 17,9 20,3 22,2 25,3 No chronic disease 75,3 79,0 71,9 73,5 Chronic disease 24,7 21,1 28,1 26,5 No activity limitation 9,3 88,5 88,5 88,2 Activity limitations 90,7 11,5 11,5 11,8 Risky behaviours, 2006 Daily smoker 27,5 34,9 23,6 30,1 Not a daily smoker 72,5 65,1 76,4 69,9 Drinker at risk 46,2 44,0 13,6 14,1 Not a drinker at risk 53,8 36,0 86,4 85,9 Overweight 51,3 48,6 28,5 21,3 Normal weight or underweight 48,7 51,4 71,5 78,7 Professional route Maj. of empl. in long jobs 83,5 69,9 71,7 69,4 Most of the prof. route out of job 16,5 30,1 28,3 30,6 Stable career path 74,3 76,0 68,9 67,6 Unstable career path 25,7 24,0 31,1 32,5 Field: Table 28 : Attrition Analysis -panel population vs. attrition population according to mental health and employment status in 2006 28 Among individuals declaring in 2006 having at least one mental disorder, 18.6% were not re-interviewed in 2010, and 81.4% were. In individuals not reporting any mental disorders in 2006, 16.9% were not re-interviewed. Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in 2006. Weighted and calibrated statistics. Attrition (%) Panel (%) Interpretation: Table 29 : General descriptive statistics Men (%) Women (%) 29 Employment Employment Prevalence probability Prevalence probability (2010) (2010) Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in 2006. Weighted and calibrated statistics. Table 30 : Employment status in 2006, according to mental health condition 30 Reading: 68.6% of men with at least one mental disorder in 2006 are employed at the same date, against 64.5% of women in the same situation. Field:Santé et Itinéraire Professionnel survey, individuals aged 30-55 in 2006. Weighted and calibrated statistics. Men (%) Women (%) Employed Unemployed Employed Unemployed Mental Health, 2006 At least one mental disorder 68,6 31,4 64,5 35,5 No mental disorder 90,9 9,1 77,0 23,0 Table 34 : Mental Health estimations in 2006 34 Uniprobit (Men) Biprobit(Men) Uniprobit (Women) Biprobit (Women) Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Ident. variables (men) Violence during childhood .08** .04 .09** .05 Many marital breakdowns .02** .01 .03** .01 Ident. variables (women) Violence during childhood .08*** .03 .07*** .02 Raised by a single parent .07*** .02 .08*** .02 Ind. characteristics, 2006 Age (ref.: 30-35 years-old) -35-39 .05** .02 .05** .02 -.03 .03 -.03 .03 -40-44 .01 .02 .01 .02 .02 .02 .02 .02 -45-49 .02 .02 .02 .02 .00 .03 .00 .03 -50-55 .02 .02 .02 .02 .01 .03 .01 .03 In a relationship (ref.: Single) -.05*** .01 -.05*** .01 -.03** .01 -.03** .01 Children (ref: None) .02 .02 .03 .02 .01 .03 .02 .03 Education (ref.: French bac.) -No diploma -.02 .03 -.02 .03 -.03 .04 -.03 .04 -Primary .00 .02 -.00 .01 .01 .02 .01 .02 -Superior -.00 .02 -.01 .02 .00 .02 .00 .02 Employment in 2006 Act. 
sector (ref.: Industrial) -Agricultural .01 .03 .01 .02 -.03 .05 -.02 .05 -Services .02 .01 .02 .01 -.03 .02 -.03 .02 Activity status (ref.: Private) -Public sector -.00 .01 -.01 .01 -.04** .02 -.03** .02 -Self-employed .05** .02 .04* .02 -.04 .04 -.04 .04 Prof. cat. (ref.: Blue collar) -Farmers -.08* .05 -.08* .05 .05 .07 .05 .07 -Artisans -.02 .03 -.02 .03 .07 .05 .07 .05 -Managers .02 .02 .02 .02 .01 .03 .00 .03 -Intermediate -.00 .01 -.00 .01 -.01 .03 -.01 .03 -Employees -.03 .02 -.03 .02 .01 .02 .01 .02 Part time (ref.: Full-time) -.03 .03 -.03 .03 .02* .01 .02 .01 General health status in 2006 Poor perceived health status .09*** .01 .09*** .01 .14*** .02 .14*** .02 Chronic diseases .00 .01 .00 .01 .02 .02 .02 .02 Activity limitations .01 .02 .01 .02 .03* .02 .03 .02 Risky behaviours in 2006 Daily smoker .00 .01 .01 .01 .02 .02 .03 .02 Risky alcohol consumption .01 .01 .01 .01 .03 .02 .03 .02 Overweight -.01 .02 -.01 .01 -.02 .02 .02 .02 Professional route Maj. of empl. in long jobs -.00 .02 .00 .02 -.01 .02 -.00 .02 Stable career path -.01 .01 -.01 .01 .01 .02 .01 .02 N 1876 1860 2143 1982 Table 36 : Unmatched difference-in-differences results ( to ), psychosocial treatment 36 Treatment Baseline Diff. Follow-up Diff. Diff.-in-Diff. Mean chronic N Sex Coeff. Std. Err. Coeff. Std. Err. Coeff. Std. Err. diseases in treat. (treat./tot.) : being exposed to at least 12 years of single exposures or 6 years of multiple exposures Men First health period .018 .035 . 004 .031 .316 Second health period .014 .015 .034 .037 .020 .033 .371 1734/3586 Third health period .035 .040 .021 .037 .396 Women First health period .90* .048 .058 .043 .445 Second health period .032 .020 .098** .049 .066 .040 .497 1554/3426 Third health period .102** .052 .070 .044 .522 : being exposed to at least 14 years of single exposures or 7 years of multiple exposures Men First health period .086* .039 .080** .037 .442 Second health period .006 .015 .094** .041 .088** .039 .513 1690/3586 Third health period .141*** .045 .135*** .043 .641 Women First health period .091* .050 .066 .044 .567 Second health period .025 .020 .102* .053 .077 .031 .600 1480/3426 Third health period .105** .057 .080 .048 .674 : being exposed to at least 16 years of single exposures or 8 years of multiple exposures Men First health period .101** .045 .097** .043 .613 Second health period .004 .015 .132*** .047 .128*** .045 .713 1644/3586 Third health period .154*** .050 .150*** .048 .806 Women First health period .134** .063 .107* .061 .769 Second health period .027 .020 .147** .069 .120** .050 .876 1410/3426 Third health period .160*** .057 .133** .055 .974 : being exposed to at least 18 years of single exposures or 9 years of multiple exposures Men First health period .126*** .049 .116** .046 .700 Second health period .010 .016 .154*** .050 .144*** .048 .785 1574/3586 Third health period .186*** .054 .176*** .052 .918 Women First health period .165*** .060 .145** .066 .928 Second health period .020 .020 .194*** .065 .174*** .059 1.021 1318/3426 Third health period .209*** .071 .189*** .054 1.115 : being exposed to at least 20 years of single exposures or 10 years of multiple exposures Men Interpretation: ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups respectively before and after the treatment. The diff.-in-diff. 
column shows the results for the second differences (i.e. the difference between follow-up and baseline differences). Field:Population aged 42-74 in 2006 and present from to . Unmatched sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006. First health period .122*** .050 .111** .047 .704 Second health period .011 .016 .154*** .052 .143*** .049 .796 1412/3586 Third health period .181*** .056 .170*** .053 .923 Women First health period .196*** .062 .182*** .068 .944 Second health period .014 .020 .219*** .066 .205*** .061 1.049 1208/3426 Third health period .224*** .073 .210*** .056 1.148 Table 37 : Unmatched difference-in-differences results ( to ), global treatment 37 Treatment Baseline Diff. Follow-up Diff. Diff.-in-Diff. Mean chronic N Sex Coeff. Std. Err. Coeff. Std. Err. Coeff. Std. Err. diseases in treat. (treat./tot.) : being exposed to at least 12 years of single exposures or 6 years of multiple exposures Men First health period -.012 .050 .027 .047 .390 Second health period -.039** .02 -.007 .035 .032 .041 .434 2796/3586 Third health period -.006 .047 .033 .044 .464 Women First health period .045 .051 .036 .044 .427 Second health period .007 .02 .051 .046 .044 .038 .481 2190/3426 Third health period .052 .048 .045 .041 .517 : being exposed to at least 14 years of single exposures or 7 years of multiple exposures Men First health period .000 .048 .041 .045 .470 Second health period -.041** .019 .017 .050 .058 .048 .538 2770/3586 Third health period .031 .053 .072 .051 .643 Women First health period .075 .051 .073* .043 .569 Second health period .002 .020 .082* .047 .080** .039 .614 2100/3426 Third health period .091* .055 .089** .041 .705 : being exposed to at least 16 years of single exposures or 8 years of multiple exposures Men First health period .035 .053 .078 .050 .644 Second health period -.043** .019 .058 .055 .101* .053 .729 2720/3586 Third health period .088 .057 .131** .056 .849 Women First health period .101* .064 .100* .058 .764 Second health period .001 .020 .120** .053 .121** .047 .862 2046/3426 Third health period .125** .058 .124** .052 .971 : being exposed to at least 18 years of single exposures or 9 years of multiple exposures Men First health period .085 .056 .122** .053 .749 Second health period -.037** .018 .094* .057 .131** .055 .823 2638/3586 Third health period .132** .061 .169*** .059 .977 Women First health period .106* .067 .109* .062 .869 Second health period -.003 .020 .125** .061 .128** .055 .972 1960/3426 Third health period .133*** .055 .136*** .050 1.063 : being exposed to at least 20 years of single exposures or 10 years of multiple exposures Men Interpretation: ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups respectively before and after the treatment. The diff.-in-diff. column shows the results for the second differences (i.e. the difference between follow-up and baseline differences). Field:Population aged 42-74 in 2006 and present from to . Unmatched sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006. 
First health period .071 .054 .096* .052 .746
Second health period -.025 .017 .076 .056 .101* .054 .817 2502/3586
Third health period .103* .060 .128** .058 .965
Women
First health period .140** .067 .146** .063 .897
Second health period -.006 .020 .157*** .060 .163*** .056 1.007 1826/3426
Third health period .157*** .055 .163*** .050 1.101
Table 42: Working conditions typology, by gender in 2006
Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference significant at the 10% level. 70% of night workers are men and 30% are women. The difference in proportions is significant at the 1% level. Field: General Santé et Itinéraire Professionnel survey sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006.
Variable, Men (%), Women (%), Difference Men/Women (Chi² test)
Working conditions
Night work 70.36 29.64 ***
Repetitive work 49.90 50.10
Heavy load 51.10 48.90 ***
Hazardous materials 61.85 38.15 ***
Cannot use skills 46.29 53.71
Work under pressure 52.25 47.75 ***
Tensions with public 44.02 55.98 ***
Lack of recognition 47.16 52.84
Cannot conciliate private and work lives 49.21 50.79
Bad relationships with colleagues 47.83 52.17
Résumé - Parcours Professionnel et de Santé (Professional Path and Health)
The objective of this thesis is to disentangle some of the many interrelations between work, employment and health status, mostly from a longitudinal perspective. Establishing causal relationships between these three dynamics is no easy task, insofar as numerous statistical biases generally taint the estimations, notably selection biases as well as the three classical sources of endogeneity. In a first chapter, this thesis studies the effect of mental health on workers' ability to remain in employment. The second chapter explores possible sources of heterogeneity in the role of working conditions on health, by looking at the effects on chronic diseases of exposures varying in intensity and nature at the beginning of the career. Finally, the third chapter deals with the end of the career and the retirement decision. The French panel data from the Santé et itinéraire professionnel (Sip) survey, counting more than 13,000 individuals, are used in this thesis. Several methodologies are implemented in this work in order to take endogeneity biases into account, notably instrumental variable methods as well as public policy evaluation methods (matching and difference-in-differences). The results confirm that employment, health and work are closely intertwined, with, respectively, proven consequences of health shocks on the professional trajectory and, conversely, a prominent role of work on health.
Keywords: work; employment; working conditions; retirement; general health; mental health; depression; anxiety; chronic diseases; childhood; endogeneity; instrumental variables; matching; panel data methods; difference-in-differences; France.
The Hausman statistic has been calculated as follows: , followed by a Chi² test. Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey, women aged 30-55 in employment in 2006. Sensitivity tests were performed by estimating the models on the 25-50, 30-50 and 25-55 year-old groups.
These tests, not presented here, confirm our results in all cases. In the male population suffering from at least one mental disorder in 2006, 68.6% are employed against 90.9% in the non-affected population. Among women, the proportions were 64.5% and 77.0% respectively (Table 30).
Directorate for Research, Studies, Assessment and Statistics (Drees) - Ministry of Health. Directorate for Research, Studies and Statistics (Dares) - Ministry of Labour. For a technical note on attrition management and data calibration in the Sip survey, see De Riccardis (2012). The Hausman test has been calculated as follows: , followed by a Chi² test.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Men aged 50-69 in 2010.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Low-educated individuals aged 50-69 in 2010.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals who faced a highly physically demanding career, aged 50-69 in 2010.
12,2 19,7 20,6 22,3 25,2 72,3 27,7 12,2 87,8 5,2 49,3 18,1 26,3 19,8 16,5 15,2 15,6 32,9 59,1 40,9 8,1 91,9 18,2 47,9 13,7 14,6 General Health, 2006 Good perceived health
Reading: ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in employment in 2006.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010.
Reading: Coefficients. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010.
Acknowledgements (mostly in French)
Acknowledgements
The authors wish to thank, for their comments on an earlier version of this article, Thibault Brodaty (Érudite, Upec), Laetitia Challe (Érudite, Upem), Roméo Fontaine (Leg, University of Burgundy), Yannick L'Horty (Érudite, Upem), Aurélia Tison (Aix Marseille University) and Yann Videau (Érudite, Upec). The authors also wish to thank Caroline Berchet (Ined), Marc Collet (Drees), Lucie Gonzalez (Haut Conseil de la Famille), Sandrine Juin (Érudite, Upec and Ined) and Nicolas de Riccardis (Drees) for responding to the requests on an earlier version. The authors obviously remain solely responsible for any inaccuracies or limitations of their work.
Acknowledgements
The author would like to especially thank Thomas Barnay (Upec Érudite) for his constant help and advice about this study. I thank Pierre Blanchard (Upec Érudite), Emmanuelle Cambois (Ined), Eve Caroli (LEDa-LEGOS, Paris-Dauphine University) and Emmanuel Duguet (Upec Érudite) for their technical help. I am thankful to Søren Rud Kristensen (Manchester Centre for Health Economics), Renaud Legal (Drees), Maarten Lindeboom (VU University Amsterdam), Luke Munford (Manchester Centre for Health Economics), Catherine Pollak (Drees) and Matthew Sutton (Manchester Centre for Health Economics) for reviewing earlier versions of the paper.
I also wish to thank Patrick Domingues (Upec Érudite), Sandrine Juin (Ined, Upec Érudite), François Legendre, Dorian Verboux and Yann Videau (Upec Érudite).
Acknowledgements
The authors would like to thank Pierre Blanchard (Upec Érudite), Eve Caroli (LEDa-LEGOS, Paris-Dauphine University), Emmanuel Duguet (Upec Érudite), Sandrine Juin (Ined, Upec Érudite), François Legendre and Yann Videau (Upec Érudite) for their useful advice. They also thank Annaig-Charlotte Pédrant (IREGE, Savoie Mont Blanc University) and Pierre-Jean Messe (GAINS, Le Mans University) for discussing the paper during a conference. The French version of this chapter has been published as: Barnay T. and Defebvre É. (2016): « L'influence de la santé mentale sur le maintien en emploi », Économie et Statistique,
Chapter II: Work strains and chronic diseases
HARDER, BETTER, FASTER... YET STRONGER? WORKING CONDITIONS AND SELF-DECLARATION OF CHRONIC DISEASES
Chapter III: Health status after retirement
RETIRED, AT LAST? THE ROLE OF RETIREMENT ON HEALTH STATUS IN FRANCE
This chapter is co-written with Thomas BARNAY (Paris-Est University)
Appendix 2: Generalized Anxiety Disorder (GAD)
GAD are identified using a similar filter-question system. Three questions are asked: -Over the past six months, have you felt like you were too much concerned about this and that, have you felt overly concerned, worried, anxious about life's everyday problems, at work/at school, at home or about your relatives? Yes/No In case of a positive answer: - For a person to suffer from generalized anxiety disorder, he/she must respond positively to the three filter questions, then to three out of the six symptoms described later. This protocol is consistent with that used by the DSM-IV.
Appendix 3: Initial selection of the sample in 2006
This study does not claim to measure the impact of mental health on employment but tries to establish the causal effect of mental health on job retention. The unemployed population in 2006 is therefore discarded, even though its reported prevalence of anxiety disorders and depressive episodes is far higher than that of the employed population (22% vs. 6% in men and 21% vs. 12% in women; see Table 25 and Table 26, Appendix 6). In addition, such a study working on the whole sample (including the unemployed) would suffer from significant methodological biases (reverse causality and direct simultaneity).
Appendix 4: Attrition between the two waves
Attrition between the 2006 and 2010 waves can induce the selection of a population with specific characteristics. There are no significant differences in the demographic, socioeconomic and health characteristics of our sample between respondents and non-respondents to the 2010 survey on the basis of their first-wave characteristics (see Table 27 and Table 28, Appendix 6). However, differences in the response rate to the 2010 survey exist according to perceived health status, activity limitations, the declaration of major depressive episodes and the declaration of motion or sleep disorders [START_REF] De Riccardis | Traitements de la non-réponse et calages pour l'enquête santé et itinéraire professionnel de[END_REF]
It must not lead to a medical diagnosis [START_REF] Bahu | Le choix d'indicateurs de santé : l'exemple de l'enquête SIP[END_REF]. However, it appears that according to the results of a qualitative post-survey interview about some indicators used in the Sip survey including health indicators [START_REF] Guiho-Bailly | Rapport subjectif au travail : sens des trajets professionnels et construction de la santé -Rapport final[END_REF], the over-reporting phenomenon (false positives) of mental disorders in the survey is not widespread, while in contrast under-reporting (false negative) may occur more often. In the study of the impact of mental health on job retention, this would lead to an underestimation of the effect of mental health. Appendix 6: Descriptive statistics Appendix 8: Detailed description of the parameters The nine thresholds are designed according to increasing levels of exposures to detrimental working conditions: a 2-year step for single exposures from one threshold to another. Polyexposure durations are half that of single ones, based on the requirements of the 2015 French law requiring that past professional exposures to detrimental working conditions be taken into account in pension calculations (in which simultaneous strains count twice as much as single exposures - [START_REF] Sirugue | Compte personnel de prévention de la pénibilité : propositions pour un dispositif plus simple, plus sécurisé et mieux articulé avec la prévention[END_REF]. The durations of the observation periods for working conditions are set arbitrarily to allow some time for reaching the treatment thresholds: It represents three halves of the maximum duration of exposure needed to be treated, i.e., three halves of the single exposure threshold). This way, individuals are able to reach the treatment even though their exposure years are not necessarily a continuum. The minimum duration at work during the observation period is set as the minimum exposure threshold to be treated, i.e., it equals the poly-exposure threshold. As individuals not meeting this minimum requirement are not in capacity to reach the treatment (because the bare minimum to do so is to work and be exposed enough to reach the poly exposure threshold), they are dropped from the analysis for comparability purposes. The length of observation periods for chronic diseases is set to two years in order to avoid choosing overly specific singletons (some specific isolated years may not perfectly reflect individuals' health condition) while preserving sample sizes (because the longer the intervals, the greater the sample size losses). The estimations are performed on these nine thresholds using the same sample of individuals: I keep only individuals existing in all nine of them for comparison purposes. The sample is thus based on the most demanding threshold, . This means that, in this setup, individuals must be observed for a minimal duration of 38 years (2 years before labour market entry for baseline health status, plus 30 years of observation and 6 years of follow-up health status periods, as well as a minimum of 10 years in the labour market -see Figure V). In other words, with the date of the survey being 2006, this means that the retained individuals (6,700) are those who entered the labour market before 1970 (and existing in the dataset before 1968), inducing heavily reduced sample sizes in comparison to the 13,000 starting individuals. 
Appendix 9: Naive unmatched difference-in-differences models
Appendix 11: Specification test
Appendix 13: Exploratory analysis on health habits
Appendix 14: Exploratory analysis on gender-gaps
Appendix 16: The Mini European Health Module
The Mini European Health Module is intended to give a uniform measure of health status in European countries by asking a series of three questions apprehending perceived health, the existence of chronic diseases and activity limitations. It is based on Blaxter's model (1989), which identifies three semantic approaches to health: -The subjective model, based on the overall perception of the individual: "How is your overall health? Very good/Good/Average/Bad/Very bad"; -The medical model, based on disease reporting: "Do you currently have one or more chronic disease(s)? Yes/No"; -The functional model, which identifies difficulties in performing frequent activities: "Are you limited, for six months because of a health problem, in activities people usually do? Yes/No".
Appendix 17: Major Depressive Episodes (MDE)
The MDE are identified in two stages. First, two questions making use of filters are asked: -Over the past two weeks, have you felt particularly sad, depressed, mostly during the day, and this almost every day? Yes/No -Over the past two weeks, have you had, almost all the time, the feeling of having no interest in anything, of having lost interest or pleasure in things that you usually like? Yes/No Then, if one of the two filter questions receives a positive response, a third question is asked in order to identify the specific symptoms: Over the past two weeks, when you felt depressed and/or uninterested in most things, have you experienced any of the following situations? Check as soon as the answer is "yes"; several positive responses are possible. For a person to suffer from a major depressive episode, he/she must respond positively to at least one of the two filter questions and then report a sufficient number of the symptoms listed in this third question. This protocol is consistent with that used by the DSM-IV.
Appendix 20: Civil servants
Appendix 19: Main auxiliary models
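To make the two-stage screening described in Appendices 2 and 17 concrete, the minimal Python sketch below encodes the filter-question logic. The field names and the MDE symptom threshold are our own illustrative assumptions (the survey instrument fixes the exact threshold), not part of the survey documentation.

```python
def has_gad(filter_answers, symptom_answers):
    """GAD screening (Appendix 2): all three filter questions positive,
    plus at least 3 of the 6 listed symptoms."""
    return all(filter_answers) and sum(symptom_answers) >= 3

def has_mde(filter_answers, symptom_answers, min_symptoms=3):
    """MDE screening (Appendix 17): at least one of the two filter questions
    positive, plus enough reported symptoms (threshold set by the protocol)."""
    return any(filter_answers) and sum(symptom_answers) >= min_symptoms

# Hypothetical respondent: booleans in questionnaire order.
gad = has_gad([True, True, True], [True, False, True, True, False, False])
mde = has_mde([True, False], [True, True, True, False, False])
print(gad, mde)
```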
316,162
[ "173381" ]
[ "74242" ]
01760968
en
[ "spi" ]
2024/03/05 22:32:13
2019
https://hal.science/hal-01760968/file/manuscript_FK.pdf
Fangchen Feng, Matthieu Kowalski
Underdetermined Reverberant Blind Source Separation: Sparse Approaches for Multiplicative and Convolutive Narrowband Approximation
We consider the problem of blind source separation for underdetermined convolutive mixtures. Based on the multiplicative narrowband approximation in the time-frequency domain with the help of the Short-Time Fourier Transform (STFT) and the sparse representation of the source signals, we formulate the separation problem in an optimization framework. This framework is then generalized based on the recently investigated convolutive narrowband approximation and the statistics of the room impulse response. Algorithms with convergence proofs are then employed to solve the proposed optimization problems. The evaluation of the proposed frameworks and algorithms on synthesized and live-recorded mixtures is illustrated. The proposed approaches are also tested on mixtures with input noise. Numerical evaluations show the advantages of the proposed methods.
Fangchen Feng is with Laboratoire Astroparticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, Sorbonne Paris Cité, 75205, Paris, France (email: [email protected]). Matthieu Kowalski is with Laboratoire des signaux et systèmes, CNRS, CentraleSupélec, Université Paris-Sud, Université Paris-Saclay, 91192, Gif-sur-Yvette, France (email: [email protected]).
I. INTRODUCTION
A. Time model
Blind source separation (BSS) recovers source signals from a number of observed mixtures without knowing the mixing system. Separation of the mixed sounds has several applications in the analysis, editing, and manipulation of audio data [START_REF] Comon | Handbook of Blind Source Separation: Independent component analysis and applications[END_REF]. In real-world scenarios, the convolutive mixture model is considered in order to take the room echo and the reverberation effect into account:
$$x_m(t) = \sum_{n=1}^{N} a_{mn}(t) * s_n(t) + n_m(t), \quad (1)$$
where $s_n$ is the $n$-th source and $x_m$ is the $m$-th mixture. $N$ and $M$ are the numbers of sources and microphones respectively. $a_{mn}(t)$ is the room impulse response (RIR) from the $n$-th source to the $m$-th microphone. $n_m(t)$ is the additive white Gaussian noise at the $m$-th microphone. We also denote by $s^{\mathrm{img}}_{mn}(t) = a_{mn}(t) * s_n(t)$ the image of the $n$-th source at the $m$-th microphone.
B. Multiplicative narrowband approximation
The source separation for convolutive mixtures is usually tackled in the time-frequency domain with the help of the STFT (Short-Time Fourier Transform) [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF], [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF]. With the narrowband assumption, the separation can be performed in each frequency band [START_REF] Kellermann | Wideband algorithms versus narrowband algorithms for adaptive filtering in the DFT domain[END_REF]. Because of the permutation ambiguity in each frequency band, the separation is then followed by a permutation alignment step to regroup the estimated frequency components that belong to the same source [START_REF] Sawada | Measuring dependence of binwise separated signals for permutation alignment in frequency-domain bss[END_REF]. In this paper, we concentrate on the separation step.
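As a small illustration of the time-domain model (1), the following Python sketch generates an underdetermined convolutive mixture (N = 3 sources, M = 2 microphones). It is a toy example with synthetic exponentially decaying random impulse responses standing in for measured RIRs, not the experimental setup used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T, L = 3, 2, 16000, 2048   # sources, mics, signal length, RIR length

# Toy sources and toy RIRs a_mn (random, exponentially decaying).
s = rng.standard_normal((N, T))
a = rng.standard_normal((M, N, L)) * np.exp(-np.arange(L) / 400.0)

# x_m(t) = sum_n (a_mn * s_n)(t) + n_m(t)
x = np.zeros((M, T))
for m in range(M):
    for n in range(N):
        x[m] += np.convolve(a[m, n], s[n])[:T]   # source image s_img_mn
x += 0.01 * rng.standard_normal((M, T))          # additive noise n_m
```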
The multiplicative narrowband approximation [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF], [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF] deals with the convolutive mixtures in each frequency using a complex-valued multiplication, in the following vector form:
$$\tilde{x}(f,\tau) = \sum_{n=1}^{N} \tilde{a}_n(f)\, \tilde{s}_n(f,\tau) + \tilde{n}(f,\tau), \quad (2)$$
where $\tilde{x}(f,\tau) = [\tilde{x}_1(f,\tau), \ldots, \tilde{x}_M(f,\tau)]^T$ and $\tilde{s}_n(f,\tau)$ are respectively the analysis STFT coefficients of the observations and of the $n$-th source signal. $\tilde{a}_n(f) = [\tilde{a}_{1n}(f), \ldots, \tilde{a}_{Mn}(f)]^T$ is a vector that contains the Fourier transform of the RIRs associated with the $n$-th source. $\tilde{n}(f,\tau) = [\tilde{n}_1(f,\tau), \ldots, \tilde{n}_M(f,\tau)]^T$ consists not only of the analysis STFT coefficients of the noise, but also of the error term due to the approximation. The formulation [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF] approximates the convolutive mixtures by using an instantaneous mixture in each frequency. This approximation therefore largely reduces the complexity of the problem and is valid when the RIR length is less than the STFT window length. The sparsity assumption is widely utilized for the source separation problem [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF], [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], [START_REF] Zibulevsky | Blind source separation by sparse decomposition in a signal dictionary[END_REF], [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. Based on the model (2) and by supposing that only one source is active or dominant in each time-frequency bin $(f,\tau)$, the authors of [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF] proposed to estimate the mixing matrix in each frequency by clustering, and then to estimate the sources in a maximum a posteriori (MAP) sense. This method is further improved in [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], where the authors proposed to use a soft masking technique to perform the separation. The idea is to classify each time-frequency bin of the observation $\tilde{x}(f,\tau)$ into one of $N$ classes, where $N$ is the number of sources. Based on a complex-valued Gaussian generative model for the source signals, they inferred a bin-wise a posteriori probability $P(C_n \mid \tilde{x}(f,\tau))$, which represents the probability that the vector $\tilde{x}(f,\tau)$ belongs to the $n$-th class $C_n$. This method obtains good separation results for speech signals, however only in low reverberation scenarios [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF].
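To illustrate the narrowband idea numerically, the sketch below (a rough check on toy data, using SciPy's STFT with an arbitrary window size; all names are ours) compares the STFT of a true convolutive mixture with the per-frequency instantaneous model (2), in which each frequency bin is mixed by a single complex matrix.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
N, M, T, L, nfft = 3, 2, 16000, 256, 1024   # here L < nfft, so (2) is reasonable
s = rng.standard_normal((N, T))
a = rng.standard_normal((M, N, L)) * np.exp(-np.arange(L) / 50.0)
x = np.stack([sum(np.convolve(a[m, n], s[n])[:T] for n in range(N)) for m in range(M)])

# Analysis STFT of mixtures and sources (same window on both sides).
_, _, X = stft(x, nperseg=nfft)          # shape (M, F, T_frames)
_, _, S = stft(s, nperseg=nfft)          # shape (N, F, T_frames)
A = np.fft.rfft(a, n=nfft, axis=-1)      # shape (M, N, F): one mixing matrix per frequency

# Multiplicative narrowband model (2): X(f, tau) ~ A(f) S(f, tau)
X_hat = np.einsum('mnf,nft->mft', A, S)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative approximation error: {err:.3f}")   # grows as the RIR length L exceeds nfft
```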
The performance of these methods is limited by the multiplicative approximation, whose approximation error increases rapidly as the reverberation time becomes long [START_REF] Kowalski | Beyond the narrowband approximation: Wideband convex methods for under-determined reverberant audio source separation[END_REF]. Moreover, the disjointness of the sources in the time-frequency domain is not realistic [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF].
C. Beyond the multiplicative narrowband model
A generalization of the multiplicative approximation is proposed in [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] by considering the spatial covariance matrix of the source signals. By modeling the sources' STFT coefficients as a phase-invariant multivariate distribution, the authors inferred that the covariance matrix of the STFT coefficients of the $n$-th source images $s^{\mathrm{img}}_n = [s^{\mathrm{img}}_{1n}, s^{\mathrm{img}}_{2n}, \ldots, s^{\mathrm{img}}_{Mn}]^T$ can be factorized as:
$$R_{s^{\mathrm{img}}_n}(f,\tau) = v_n(f,\tau)\, R_n(f), \quad (3)$$
where $v_n(f,\tau)$ are scalar time-varying variances of the $n$-th source at different frequencies and $R_n(f)$ are time-invariant spatial covariance matrices encoding the source spatial position and spatial spread [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF]. The multiplicative approximation forces the spatial covariance matrix to be of rank 1, and the authors of [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] exploited a generalization by assuming that the spatial covariance matrix is of full rank; they showed that the new assumption better models the mixing process because of the increased flexibility. However, as we show in this paper by experiments, the separation performance of this full-rank model is still limited in strong reverberation scenarios. Moreover, since both the bin-wise method [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF] and the full-rank approach [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] do not take the error term $\tilde{n}$ into consideration, they are sensitive to additional noise. Recently, the authors of [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF] investigated the convolutive narrowband approximation for oracle source separation of convolutive mixtures (the mixing system is known). They showed that the convolutive approximation better suits the original mixing process, especially in strong reverberation scenarios. In this paper, we investigate the convolutive narrowband approximation as a generalization of the multiplicative approximation in the fully blind setting (the mixing system is unknown). The contribution of the paper is threefold: first, based on the multiplicative narrowband approximation, we formulate the separation in each frequency as an optimization problem with an $\ell_1$-norm penalty to exploit sparsity.
The proposed optimization formulation is then generalized based on the statistics of the RIR [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF] and on the convolutive narrowband approximation model [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF]. Lastly, we propose to solve the resulting optimization problems with the PALM (Proximal Alternating Linearized Minimization) [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] and BC-VMFB (Block Coordinate Variable Metric Forward-Backward) [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF] algorithms, which come with convergence guarantees. The rest of the article is organized as follows. We propose the optimization framework based on the multiplicative approximation with an ℓ1 norm penalty and present the corresponding algorithm in Section II. The optimization framework is then generalized in Section III based on the statistics of the RIR and the convolutive approximation; the associated algorithm is also presented. We compare the separation performance achieved by the proposed approaches to that of the state-of-the-art in various experimental settings in Section IV. Finally, Section V concludes the paper. II. THE MULTIPLICATIVE NARROWBAND APPROXIMATION We first rewrite the formulation (2) with matrix notations by concatenating the time samples and source indexes. In each frequency f, we have: X̃f = Ãf S̃f + Ñf, (4) where X̃f ∈ C^(M×L_T) is the matrix of the analysis STFT coefficients of the observations at the given frequency f, Ãf ∈ C^(M×N) is the mixing matrix at frequency f, S̃f ∈ C^(N×L_T) is the matrix of the analysis STFT coefficients of the sources at frequency f, and Ñf ∈ C^(M×L_T) is the noise term, which also contains the approximation error. In the above notations, L_T is the number of time samples in the time-frequency domain. The target of the separation is to estimate Ãf and S̃f from the observations X̃f. However, according to the definition of the analysis STFT coefficients, the estimated S̃f has to be in the image of the STFT operator (see [START_REF] Balazs | Adapted and adaptive linear time-frequency representations: a synthesis point of view[END_REF] for more details). To avoid this additional constraint, we propose to replace the analysis STFT coefficients S̃f by the synthesis STFT coefficients αf ∈ C^(N×L_T), which leads to: X̃f = Ãf αf + Ñf. (5) In the following, we also denote by αf,n the n-th source component (row) of αf and by αf,n(τ) the scalar element at position τ in αf,n. A. Formulation of the optimization Based on the model (5), we propose to formulate the separation as the following optimization problem: min_(Ãf, αf) (1/2)‖X̃f − Ãf αf‖²F + λ‖αf‖1 + ıC(Ãf), (6) where ‖·‖F denotes the Frobenius norm and ‖·‖1 is the ℓ1 norm of a matrix, that is, the sum of the absolute values of all its elements. ıC(Ãf) is an indicator function that avoids the trivial solution caused by the scaling ambiguity between Ãf and αf: ıC(Ãf) = 0 if ‖ãf,n‖ = 1 for n = 1, 2, . . . , N, and +∞ otherwise, (7) with ãf,n the n-th column of Ãf. λ is the hyperparameter which balances the data term (1/2)‖X̃f − Ãf αf‖²F against the penalty term ‖αf‖1.
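As a rough illustration of how (6) can be attacked numerically, the sketch below performs one alternating update at a single frequency: a gradient step on the data term followed by complex soft-thresholding for αf, and a column-normalized update of Ãf. It anticipates the proximal steps derived formally in Section II-B; the step-size choice, the numerical safeguards and the function names are ours, not the paper's reference implementation.

```python
import numpy as np

def soft_threshold(alpha, thr):
    """Complex soft-thresholding: shrink the modulus, keep the phase."""
    mag = np.abs(alpha)
    return alpha / np.maximum(mag, 1e-12) * np.maximum(mag - thr, 0.0)

def normalize_columns(A):
    """Project each column of A onto the unit sphere (constraint (7))."""
    return A / np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1e-12)

def nregu_step(X_f, A_f, alpha_f, lam):
    """One alternating update for (6): 0.5*||X - A alpha||_F^2 + lam*||alpha||_1 + i_C(A)."""
    # Lipschitz constant of the gradient of the data term w.r.t. alpha_f.
    L = np.linalg.norm(A_f.conj().T @ A_f, 2)
    grad = -A_f.conj().T @ (X_f - A_f @ alpha_f)           # gradient of the data term
    alpha_f = soft_threshold(alpha_f - grad / L, lam / L)  # proximal (ISTA-type) step
    A_f = normalize_columns(X_f @ alpha_f.conj().T)        # mixing-matrix update, then projection
    return A_f, alpha_f
```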
For instantaneous mixtures, the formulation (6) has been firstly proposed in [START_REF] Zibulevsky | Blind source separation by sparse decomposition in a signal dictionary[END_REF] and recently investigated in [START_REF] Feng | A unified approach for blind source separation using sparsity and decorrelation[END_REF]. Compared to the masking technique of separation [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], the 1 norm term exploits only sparsity which is more realistic than the disjointness assumption for speech signals. Moreover, the Lagrangian form with the data term 1 2 Xf -Ãf α f 2 F allows us to take the noise/approximation error into consideration. B. Algorithm: N-Regu The optimization problem ( 6) is non-convex with a nondifferentiable term. In this paper, we propose to solve the problem by applying the BC-VMFB (block coordinate variable metric forward-backward) [START_REF] Chouzenoux | Variable metric forwardbackward algorithm for minimizing the sum of a differentiable function and a convex function[END_REF] algorithm. This algorithm relies on the proximal operator [START_REF] Combettes | Proximal splitting methods in signal processing[END_REF] given in the next definition. Definition 1. Let ψ be a proper lower semicontinuous function, the proximal operator associated with ψ is defined as: prox ψ := argmin y ψ(y) + 1 2 y -x 2 F . (8) When the function ψ(y) = λ y 1 , the proximal operator becomes the entry-wise soft thresholding presented in the next proposition. Proposition 2. Let α ∈ C N ×L T . Then, α = prox λ • 1 (α) := S λ (α) is given entrywise by soft-thresholding: αi = α i |α i | (|α i | -λ) + , (9) where (α) + = max(0, α). When the function ψ in Definition 1 is the indicator function ı C , the proximal operator reduces to the projection operator presented in Proposition 3. Proposition 3. Let à ∈ C M ×N . Then  = prox ı C ( Ã) := P C ( Ã) is given by the column-wise normalization projection: ân = ãn ãn , n = 1, 2, . . . , N (10) With the above proximal operators, we present the algorithm derived from BC-VMFB in Algorithm 1. We denote the data term by Q(α f , A f ) = 1 2 Xf -Ãf α f 2 F . L (j) = Ã(j+1) H f Ã(j+1) f 2 is the Lipschitz constant of ∇ α f Q(α (j) f , Ã(j) f ) with • 2 denoting the spectral norm of matrix. Details of the derivation of this algorithm and the convergence study are given in Appendix VI-A. In the following, this algorithm is referred as N-Regu (Narrowband optimization with regularization). Algorithm 1: N-Regu Initialisation : α (1) f ∈ C N ×L T , Ã(1) f ∈ C M ×N , L (1) = Ã(1) H f Ã(1) f 2 , j = 1; repeat ∇ α f Q α (j) f , Ã(j) f = - Ã(j) H f Xf - Ã(j) f α (j) f ; α (j+1) f = S λ/L (j) (α (j) f -1 L (j) ∇ α f Q(α (j) f , Ã(j) f ); Ã(j+1) f = P C ( Xf α (j+1) H f ); L (j+1) = Ã(j+1) H f Ã(j+1) f 2 ; j = j + 1; until convergence; III. THE CONVOLUTIVE NARROWBAND APPROXIMATION A. Convolutive approximation Theoretically, the multiplicative narrowband approximation ( 2) is valid only when the RIR length is less than the STFT window length. In practice, this condition is rarely varified because the STFT window length is limited to ensure the local stationarity of audio signals [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF]. 
To avoid such limitation, the convolutive narrowband approximation was proposed in [START_REF] Avargel | System identification in the short-time fourier transform domain with crossband filtering[END_REF], [START_REF] Talmon | Relative transfer function identification using convolutive transfer function approximation[END_REF]: x(f, τ ) = N n=1 L l=1 hn (f, l)s n (f, τ -l), (11) where hn = h1n , . . . , hMn T is the vector that contains the impulse responses in the time-frequency domain associated with the n-th source. L is the length of the convolution kernel in the time-frequency domain. The convolutive approximation ( 11) is a generalization of the multiplicative approximation (2) as it considers the information diffusion along the time index. When the kernel length L = 1, it reduces to the multiplicative approximation. The convolution kernel in the time-frequency domain hmn (f, τ ) is linked to the RIR in the time domain a mn (t) by [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF]: hmn (f, τ ) = [a mn (t) * ζ f (t)] | t=τ k0 , (12) which represents the convolution with respect to the time index t evaluated with a resolution of the STFT frame step k 0 with: ζ f (t) = e 2πif t/L F j ϕ(j) φ(t + j), (13) where L F is the number of frequency bands. ϕ(j) et φ(j) denote respectively the analysis and synthesis STFT window. With matrix notations, for each frequency f , the convolutive approximation ( 11) can be written as: Xf = Hf Sf + Ñf , (14) where Hf ∈ C M ×N ×L is the mixing system formed by concatenating the impulse responses of length L. In the following, we denote also hf,mn the vector that represents the impulse response at position (m, n) in Hf and hf,mn (τ ) the scalar element at position (m, n, τ ). The operator denotes the convolutive mixing process [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF]. Compared to the original mixing process in the time domain (1), the convolutive approximation ( 14) largely reduces the length of the convolution kernel, thus makes the estimation of both the mixing system and the source signals practically possible. B. Proposed optimization approach a) Basic extension of the multiplicative model: Once again, to circumvent the additional constraint brought by the analysis coefficients of the sources, we replace the analysis STFT coefficients Sf by the synthesis coefficients α f , which leads to: Xf = Hf α f + Ñf . ( 15 ) Based on (15), we generalize ( 6) by replacing the multplicative operator by the convolutive mixing operator: min Hf ,α f 1 2 Xf -Hf α f 2 F + λ α f 1 + ı Conv C ( Hf ), ( 16 ) where ı Conv C ( Hf ) is the normalisation constraint to avoid trivial solutions: ı Conv C ( Hf ) =      0, if m,τ | hf,mn (τ )| 2 = 1, n = 1, . . . , N + ∞, otherwise. (17) b) Regularization for the convolution kernel: In [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF], the authors consider the problem of estimating the RIR supposing that the mixtures and the sources are known. They formulated the estimation problem as an optimization problem and proposed a differentiable penalty for the mixing system in the time domain: m,n,t |a mn (t)| 2 2ρ 2 (t) , (18) where ρ(t) denotes the amplitude envelope of RIR which depends on the reverberation time RT 60 : ρ(t) = σ10 -3t/RT60 , (19) with σ being a scaling factor. 
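The following NumPy sketch illustrates the convolutive narrowband model (11)/(14) at one frequency: each source's coefficients are convolved along the frame index with a short kernel of length L before being summed at the microphones, and the exponential amplitude envelope of (19) is also computed. All sizes, names and the RT60 value are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L_T, L = 2, 3, 256, 3     # mics, sources, STFT frames, kernel length in the TF domain

alpha = rng.standard_normal((N, L_T)) + 1j * rng.standard_normal((N, L_T))
H = rng.standard_normal((M, N, L)) + 1j * rng.standard_normal((M, N, L))

def convolutive_mix(H, alpha):
    """Convolutive narrowband mixing: convolution along the frame index, summed over sources."""
    M, N, L = H.shape
    L_T = alpha.shape[1]
    X = np.zeros((M, L_T), dtype=complex)
    for m in range(M):
        for n in range(N):
            # full convolution of the length-L kernel with the source track, truncated to L_T frames
            X[m] += np.convolve(H[m, n], alpha[n])[:L_T]
    return X

X = convolutive_mix(H, alpha)

# Exponential amplitude envelope (19) of the time-domain RIR: rho(t) = sigma * 10^(-3 t / RT60).
sigma, RT60, fs = 1.0, 0.25, 11000.0    # RT60 in seconds, sample rate in Hz (toy values)
t = np.arange(int(RT60 * fs)) / fs
rho = sigma * 10.0 ** (-3.0 * t / RT60)
```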
The penalty ( 18) is designed to force an exponential decrease of the RIR which satisfaits the acoustic statistics of the RIR [START_REF] Kuttruff | Room acoustics[END_REF]. As the convolutive kernel in the time-frequency domain is linked to the RIR in time domain by [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF], in this paper, we consider the penalty in the time-frequency domain in the same form: P( Hf ) = m,n,τ | hf,mn (τ )| 2 2ρ 2 (τ ) , (20) where ρ(τ ) is the decreasing coefficients in the time-frequency domain which depends on ρ(t) and the STFT transform. Other forms of penalty are also proposed in [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF]. However, their adaption in the time-frequency domain is not straightforward. c) Final optimization problem: With the above penalty term, the formulation ( 16) can be improved as: min Hf ,α f 1 2 Xf -Hf α f 2 F + λ α f 1 + P( Hf ) + ı Conv C ( Hf ). (21) C. Algorithm: C-PALM We propose to use the Proximal Alternating Linearized Minimization (PALM) algorithm [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] to solve the problem. The derived algorithm is presented in Algorithm 2, and one can refer to Appendix VI-C for details on the derivation and the convergence study. We refer to this algorithm as C-PALM (Convolutive PALM) in the following. We denote: Q( αf , Hf ) = 1 2 X -Hf α f 2 F + P( Hf ) and the gradient of P( Hf ) is given coordinate-wise by: ∇ Hf P( Hf ) f,mnτ = hf,mn (τ ) ρ4 (τ ) . ( 22 ) In Algorithm 2, HH f and α H f are respectively the adjoint operators of the convolutive mixtures with respect to the convolution kernel and the sources. Details of derivation of these adjoint operators are given in Appendix VI-B. L (j) α f and L (j) Hf are respectively the Lipschitz constant of ∇ α f Q(α (j) f , H(j) f ) and ∇ Hf Q(α (j+1) f , H(j) f ). L (j) α f can be calculated with the power iteration algorithm [START_REF] Kowalski | Beyond the narrowband approximation: Wideband convex methods for under-determined reverberant audio source separation[END_REF] shown in Algorithm 3. L (j) Hf can be approximately estimated thanks to the next proposition. Algorithm 2: C-PALM Initialisation : α (1) f ∈ C N ×L T , H(1) f ∈ C M ×N , j = 1; repeat ∇ α f Q α (j) f , H(j) f = - H(j) H f Xf - H(j) f α (j) ; α (j+1) f = S λ/L (j) α f α (j) f -1 L (j) α f ∇ α f Q(α (j) f , H(j) f ) ; ∇ Hf Q(α (j+1) f , H(j) f ) = -( Xf - H(j) f α (j+1) f ) α (j+1) H f + ∇ Hf P( H(j) f ); H(j+1) f = P Conv ı C H(j) f -1 L (j) Hf ∇ Hf Q(α (j+1) f , H(j) f ) ; Update L (j) α f et L (j) Hf ; j = j + 1; until convergence; Algorithm 3: Power iteration for the calculation of L α f Initialisation : v f ∈ C N ×L T ; repeat W = HH f Hf v f ; L α f = W ∞ ; v f = W Lα f ; until convergence; Proposition 4. If we suppose that the source componants α f,1 , α f,2 , . . . , α f,N are mutually independant and L L T , then L Hf , the Lipschitz constant of ∇ Hf Q(α f , Hf ) can be calculated as: L Hf = max n (L f,n ) + max τ ( 1 ρ8 (τ ) ), (23) where L f,n = Γ f,n , with Γ f,n =      γ f,n (0) γ f,n (1) . . . γ f,n (L -1) γ f,n (-1) γ f,n (0) . . . γ f,n (L -2) . . . . . . . . . . . . γ f,n (1 -L) γ f,n (2 -L) . . . γ f,n (0)      , (24) and γ f,n (τ ) is the empirical autocorrelation function of α f,n : γ f,n = L T -1 =1 α f,n ( + τ )α * f,n ( ). (25) Proof. 
The proof is postponed in Appendix VI-D. If the independance assumption mentioned in Proposition 4 appears to be strong, it is well adapted for audio signals as it is the basic hypothesis of the FDICA (frequency domain independant component analysis) [START_REF] Sawada | A robust and precise method for solving the permutation problem of frequency-domain blind source separation[END_REF] used for source separation of determined convolutive mixtures. Although we do not have any guarantee of independence in the proposed algorithm, the experiments show that good performances are obtained. Finally, we must stress that the BC-VMFB algorithm is not suitable for [START_REF] Sawada | A robust and precise method for solving the permutation problem of frequency-domain blind source separation[END_REF] as it relies on the second derivative of Q( αf , Hf ) w.r.t Hf , which does not necessarily simplify the algorithm. IV. EXPERIMENTS A. Permutation alignment methods For the proposed approaches, we use the existing permutation alignment methods. For N-Regu, we compare the approach based on TDOA (Time Difference Of Arrival) used in Full-rank method [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] and the approach based on interfrequency correlation used in the Bin-wise approach [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF]. For the inter-frequency correlation permutation, we use the power ratio [START_REF] Sawada | Measuring dependence of binwise separated signals for permutation alignment in frequency-domain bss[END_REF] of the estimated source to present the source activity. For C-PALM, as the TDOA permutation is not adapted, we use only the correlation permutation. For the proposed approaches (N-Regu and C-PALM) and the reference algorithms (Bin-wise and Full-rank), we also designed an oracle permuation alignment method. In each frequency, we look for the permutation that maximizes the correlation between the estimated and the original sources. Such permutation alignment is designed to show the best permutation possible in order to have a fair comparison of the separation approaches instead of the choice made for solving the permutation problem. B. Experimental setting We first evaluated the proposed approaches with 10 sets of synthesized stereo mixtures (M = 2) containing three speech sources (N = 3) of male/female with different nationalities. The mixtures are sampled at 11 kHz and truncated to 6 s. The room impulse response were simulated via the toolbox [START_REF] Lehmann | Prediction of energy decay in room impulse responses simulated with an image-source model[END_REF]. The distance between the two microphone is 4 cm. The reverberation time is defined as 50 ms, 130 ms, 250 ms and 400 ms. The Fig. 1 illustrates the room configuration. For each mixing situation, the mean values of the evaluation results over the 10 sets of mixtures are shown. We then evaluated the algorithm C-PALM with the live recorded speech mixtures from the dataset SiSEC2011 [START_REF] Araki | The 2011 signal separation evaluation campaign (sisec2011):-audio source separation[END_REF]. 
Music mixtures are avoided because the instrumental sources are often synchronized to each other and this situation is difficult for the permutation alignment based on inter-frequency correlation [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF]. An effective alternative way is to employ nonnegative matrix factorization [START_REF] Feng | Sparsity and low-rank amplitude based blind source separation[END_REF]. The parameters of STFT for the synthesized and live recorded mixtures are summarized in Table I. The STFT window length (and window shift) for synthesized mixtures are chosen to preserve local stationarity of audio sources without bringing too much computational costs. The parameters for the live recorded mixtures are the same as the reported reference algorithm Bin-wise [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF]. The separation performance is evaluated with the signal to distortion ratio (SDR), signal to interference ratio (SIR), source image to spatial distortion ratio (ISR) and signal to artifact ratio (SAR) [START_REF] Vincent | First stereo audio source separation evaluation campaign: data, algorithms and results[END_REF]. The SDR reveals the overall quality of each estimated source. SIR indicates the crosstalk from other sources. ISR measures the amount of spatial distortion and SAR is related to the amount of musical noise. N-Regu is initialized with Gaussian random signals. C-PALM is initialized with the results of N-Regu with 1000 iterations. This choice of initialization for C-PALM compensates the flexibility of the convolutive model (and then the number of local minima in ( 21)) without bringing too much computational cost. We use the stopping criteria α (j+1) f α j f F < 10 -4 for both algorithms. C. Tuning the parameters For the proposed methods, we chose several pre-defined hyperparameter λ and select the λ which corresponds to the best SDR. Even though such a way of choosing this hyper-parameter is not possible for real applications, such evaluation offers a "fair" comparison with the state-of-the-art approaches and gives some empirical suggestions of choosing this parameter in practice. We implement the continuation trick, also known as warm start or fixed point continuation [START_REF] Hale | Fixed-point continuation for 1minimization: Methodology and convergence[END_REF] for a fixed value of λ: we start the algorithms with a large value of λ and iteratively decrease λ to the desired value. It is also important to mention that the hyperparameter λ should be theoretically different for each frequency since the sparsity level of the source signals in each frequency can be very different (for speech signals, the high frequency componants are usually sparser than the low frequency componants). Therefore, different λ should be determined for each frequency. However, in this paper, we used a single λ for all the frequencies and the experiments show that this simplified choice can achieve acceptable results if we perform a whitening pre-processing for each frequency. For C-PALM, as the reverberation time is unknown in the blind setting, we pre-define the length of the convolution kernel in the time-frequency domain L = 3 as well as the penalty parameter ρ(τ ) = [1.75, 1.73, 1.72] T . 
Although these parameters should vary with the reverberation time, we show in the following that the proposed pre-defined parameters work well in different strong reverberation conditions. D. Synthesized mixtures without noise We first evaluate the algorithms with synthesized mixtures in the noiseless case as a function of the reverberation time RT 60 . The results are shown in Fig. 2. For RT 60 = 50 ms, it is clear that the Full-rank method performs the best in terms of all four indicators. Its good performance is due to the fact that the full-rank spatial covariance model suits better the convolutive mixtures than the multiplicative approximation and the fact that the TDOA permutation alignement has relatively good performance in low reverberation scenario. N-Regu outperforms Bin-wise only in SDR and SAR. It is because that N-Regu has better data fit than the masking-based Bin-wise method while Binwise obtains time-frequency domain disjoint sources which have lower inter-source interference. C-PALM is dominated by other methods in SDR, SIR and ISR. We believe it is because that the pre-defined penalty parameter ρ(τ ) does not fit the low reverberation scenario. The advantages of C-PALM can be seen in relatively stronger reverberation scenarios (especially RT 60 = 130, 250 ms) where C-PALM outperforms other methods in SDR and SIR. For RT 60 = 400 ms, all the presented algorithms have similar performance while C-PALM performs slightly better in SIR. To compare the two permutation methods used for N-Regu, TDOA permutation performs better than inter-frequency correlation permutation in SDR, SIR and SAR. Fig. 3 compares the presented algorithms with oracle permutation alignment. For RT 60 = 50 ms, once again, Full-rank has the best performance in all four indicators. This confirms the advantages of the full rank spatial covariance model. In high reverberation conditions, C-PALM performs better than others in SDR and SIR. In particular, for RT 60 = 130, 250 ms, C-PALM outperforms Full-rank by more than 1 dB in SDR and outperforms Bin-wise by about 1.2 dB in SIR. N-Regu performs slightly better than Bin-wise in SDR for all reverberation conditions. The above observations show the better data fit brought by the optimization framework used in N-Regu (and C-PALM) and confirm the advantages of convolutive narrowband approximation used in C-PALM for high reverberation conditions (especially RT 60 = 130, 250 ms). Fig. 4 illustrates the performance of the presented algorithm as a function of the sparsity level1 of the estimated synthesis coeffcients of the sources for RT 60 = 130 ms. As the sparsity level is directly linked to the hyperparameter λ in the proposed algorithms, this comparison offers some suggestions of choosing this hyperparameter. Full-rank method does not exploits sparsity, thus has 0% as sparsity level. As the number of sources N = 3, the sparsity level of the masking-based Bin-wise method is 66.6%. C-PALM performs better than N-Regu in terms of SDR, SIR and SAR when the sparsity level is less than 60% and its best performance is achieved when the sparsity level is around 40%. For N-Regu, in terms of SDR, SAR and ISR, the best performance is achieved with the least sparse result. E. Synthesized mixtures with noise In this subsection, we evaluate the proposed methods with synthesized mixtures with additive white Gaussian noise. The noise of different energy is added which leads to different input SNR. Fig. 
5 reports the separation performance as a function of input SNR with the reverberation time fixed to RT 60 = 130 ms. It is clear that N-Regu with TDOA permutation outperforms other methods in terms of SDR and SIR. In particular, it performs better than others by about 1 dB in SIR for all the input SNR tested. C-PALM outperforms the state-of-the-art approaches only in SDR. We believe that it is due to the fact that the freedom degree of the convolutive narrowband approximation used in C-PALM could be sensitive to input noise. Another reason is that the inter-frequency correlation based permutation could be sensitive to input noise. The latter conjecture is supported by the observation that, in terms of SDR and SIR, the gap between N-Regu with TDOA permutation and with correlation permutation increases as the input noise becomes stronger. Further evidence can be found by the comparisons between the presented algorithm with oracle permutation alignment in Fig. 6. In Fig. 6, in terms of SDR and SIR, it is clear that the gap between N-Regu and C-PALM decreases as the input noise gets stronger. This remark confirms that the separation step of C-PALM is sensitive to input noise. Moreover, in terms of SIR, C-PALM with oracle permutation performs consistently better than N-Regu with oracle permutation, while C-PALM with correlation permutation is dominated by N-Regu with TDOA permutation by about 1 dB (Fig. 5). This observation shows that the performance of C-PALM can be largely improved for noisy mixtures if better permutation alignment method is developped. Fig. 7 reports the separation performance as a function of the sparsity level of the estimated synthesis coefficients of the sources. RT 60 = 130 ms and the input SNR is 15 dB. The results of Full-rank and Bin-wise method are also shown. In terms of SDR and SIR, N-Regu with TDOA permutation consistently outperforms the other methods and achieves its best performance when the sparsity level is about 78%. Compared to Bin-wise method, this observation coincides with the intuition that, for noisy mixtures, the coefficients of the noise in the observations should be discarded to achieve better separation. C-PALM achieves its best performance in terms of SDR and SIR when the sparsity level is about 75%. Fig. 8 illustrates the results of separation as a function of the reverberation time for a fixed input SNR (SNR=15 dB). We can see that N-Regu with TDOA permutation has the best performance in terms of SDR. F. Synthesized mixtures with different sources positions In this subsection, we tested the robusteness of the proposed algorithms w.r.t the sources positions. The same room setting as shown in Fig. 1 is used. Fig. 9 illustrates the four tested In these experiments, the reverberation time is fixed to RT 60 = 130 ms and no noise is added to the mixtures. Fig. 10 shows the separation performance. It is clear that in terms of SDR, SIR and ISR, all the presented algorithms have the worst performance in setting 3. This remark shows that having two sources close to each other and one source relatively far (setting 3) could be a more difficult situation for blind source separation than having three sources close to each other (setting 4). For C-PALM, it has the best performance in terms of SDR, SIR and ISR for all the settings. This observation shows that C-PALM (and the pre-defined penalty parameter) is robust to sources positions, G. 
Live recorded mixtures without noise This subsection reports the separation results of C-PALM for publicly avaiable benchmark data in SiSEC2011 [START_REF] Araki | The 2011 signal separation evaluation campaign (sisec2011):-audio source separation[END_REF]. We used the speech signals (male3, female3, male4 and female4) from the first development data (dev1.zip) in "Under-determined speech and music mixtures" data sets. Table II shows the separation results. For C-PALM, we chose the hyperparameter λ such that the sparsity level of the estimated coefficients of the sources is about 20%, 60% for RT 60 = 130, 250 ms respectively. Compared to the performances reported in [START_REF] Araki | The 2011 signal separation evaluation campaign (sisec2011):-audio source separation[END_REF], C-PALM obtains relatively good separation results epsecially when the number of sources N = 3. H. Computational time We terminate the expriment section by presenting the computational time of the presented algorithm for the synthesized mixtures in Table III. C-PALM is of relative big computational cost mainly because of the convolution operator in each iteration of the algorithm. V. CONCLUSION In this paper, we have developped several approaches for blind source separation with underdetermined convolutive mixtures. Based on the sparsity assumption for the source signals and the statistics of the room impulse response, we developed the N-Regu with multiplicative narrowband approximation and C-PALM with convolutive narrowband approximation. The numerical evaluations show the advantages of C-PALM for noiseless mixtures in strong reverberation scenarios. The experiments also show the good performance of N-Regu for noisy mixtures. The penalty parameter ρ(τ ) in C-PALM has to be predefined, which makes C-PALM not suitable for low reverberation condition. Future work will concentrate on the estimation of ρ(τ ). In this paper, we used inter-frequency correlation permutation alignment for C-PALM. It would be interesting to exploit TDOA based permutaiton method for convolutive narrowband approximation to improve C-PALM. VI. APPENDIX A. Derivation of N-Regu We consider the following optimization problem: F is a constant and does not change the minimizer. The reason of adding this term is purely algorithmic. We then solve the optimization [START_REF] Hale | Fixed-point continuation for 1minimization: Methodology and convergence[END_REF] with BC-VMFB [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF]. min Ãf ,α f 1 2 Xf -Ãf α f 2 F + µ 2 Ãf 2 F + λ α f 1 + ı C ( Ãf ). ( 26 Let the general optimization min x,y F (x) + Q(x, y) + G(y) , (27) where F (x) and G(y) are lower semicontinuous functions, Q(x, y) is a smooth function with Lipschitz gradient on any bounded set. BC-VMFB uses the following update rules to solve (27): x (j+1) = argmin x F (x) + x -x (j) , ∇ x Q(x (j) , y (j) ) + t 1,(j) 2 x -x (j) 2 U 2,(j) , (28) y (j+1) = argmin y G(y) + y -y (j) , ∇ y Q(x (j+1) , y (j) ) + t 2,(j) 2 y -y (j) 2 U 2,j , (29) where U 1,(j) and U 2,(j) are positive definite matrices. x 2 U denotes the variable metric norm: x 2 U = x, Ux . (30) With the variable metric norm, the proximal operator (8) can be generalized as: prox U,ψ := argmin y ψ(y) + 1 2 y -x 2 U . (31) Then ( 28) and (29) can be rewritten as follow: x (j+1) =prox U 1,(j) ,F/t 1,(j) x (j) - 1 t 1,(j) U 1,(j) -1 ∇ x Q(x (j) , y (j) ) , (32) y (j+1) =prox U 2,(j) ,G/t 2,(j) y (j) - 1 t 2,(j) U 2,(j) -1 ∇ y Q(x (j+1) , y (j) ) . 
(33) It is shown in [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF] that the sequence generated by the above update rules converges to a critical point of the problem (27). For the problem (26), we make the following substitutions: F (α f ) = λ α f 1 , Q(α f , Ãf ) = 1 2 Xf -Ãf α f 2 F + µ 2 Ãf 2 F , G( Ãf ) = ı C ( Ãf ), (34) Denoting by L (j) the Lipschitz constant of ∇ α f Q(α (j) f , Ã (j) f ), we have chosen: U 1,(j) = L (j) I, U 2,(j) = ∂Q( Ãf , α (j+1) f ) 2 ∂ 2 Ãf = α (j+1) f α (j+1) H f + µI, t 1,(j) = t 2,(j) = 1. (35) The update step of the mixing matrix can be written as: (36) Ã(j+1/2) f = Xf α (j+1) H f (α (j+1) f α (j+1) H f + µI) -1 , Ã (j+1) As the choice of the parameter µ does not change the minimizer of [START_REF] Hale | Fixed-point continuation for 1minimization: Methodology and convergence[END_REF], by choosing µ sufficiently large, the update step of Ãf becomes: Ã(j+1/2) f = P C Xf α (j+1) H f . ( 37 ) We obtain the N-Regu as shown in Algorithm 1. B. Convolutive mixing operator and its adjoint operators Given a signal s ∈ C T , and a convolution kernel h ∈ C L , the convolution can be written under the matrix form: x = Hs = Sh , (38) H ∈ C T ×T and S ∈ C T ×L being the corresponding circulant matrices. The convolutive mixing operator can then be represented by where s 1 , s 2 , . . . , s N ∈ C T are N source signals and x 1 , x 2 , . . . , x M ∈ C T are M observations. H mn is the convolution matrix from the n-th source to the m-th microphone. Thanks to these notations, the adjoint operator of convolutive mixing with respect to the mixing system is a linear operator C M ×T → C N ×T and can be represented by the following matrix multiplication:      s 1 s 2 . . . s N      =               x 1 x 2 . . . x M      . (40) In order to coincide with the previous notations in [START_REF] Balazs | Adapted and adaptive linear time-frequency representations: a synthesis point of view[END_REF], we denote the above formulation as: S = H H X. (41) The adjoint operator of the convolutive mixture with respect to the sources can then be written as: H = X S H , (42) with h mn = S H n x m . (43) C. Derivation of C-PALM The PALM algorithm [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] is designed to solve the nonconvex optimization problem in the general form (27) by the following update rules: x (j+1) = argmin x F (x) + xx (j) , ∇ x Q(x (j) , y (j) ) + t 1,(j) 2 xx (j) 2 2 , (44) y (j+1) = argmin y G(y) + yy (j) , ∇ y Q(x (j+1) , y (j) ) + t 2,(j) 2 yy (j) 2 2 , (45 ) where j is the iteration index and t 1,(j) et t 2,(j) are two step parameters. It is shown in [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] that the sequence generated by the above update rules converges to a critical point of the problem (27). From the general optimization (27), we do the following substitutions: F (α f ) = λ α f 1 , Q(α f , Hf ) = 1 2 Xf -Hf α f 2 F + P( Hf ), G( Hf ) = ı Conv C ( Hf ), (46) and the particular choices: t 1,(j) = L 1,(j) , t 2,(j) = L 2,(j) , where L 1,(j) and L 2,(j) are respectively the Lipschitz constant of ∇ α f Q(α (47) Let Ψ n denotes the circulant matrix associated with α n . If the synthesis coefficients of different sources are independent, we have E [Ψ i Ψ j ] = 0, for i = j. 
Then, using similar notations as in Appendix VI-B, one can write Ĥf as: ĥmn = Ψn^H Ψn h̃mn, (48) Finally, Proposition 4 follows from the definition of the Lipschitz constant. This optimization is equivalent to the problem (6): the indicator function ıC(Ãf) forces the normalization of each column of Ãf, therefore the term (µ/2)‖Ãf‖²F is constant and does not change the minimizer. Ã(j+1)f ∈ prox_{U^{2,(j)}, ıC}(Ã(j+1/2)f). We obtain the C-PALM algorithm presented in Algorithm 2. D. Calculation of the Lipschitz constant in C-PALM We present the calculation of the Lipschitz constant of the function I(H̃f) := H̃f αf αf^H + ∇H̃f P(H̃f) = Ĥf + ∇H̃f P(H̃f).
Fig. 1. Room configuration for synthesized mixtures.
Fig. 2. Separation performance as a function of the reverberation time RT60 in the noiseless case (SDR, SIR, ISR and SAR in dB versus RT60 in ms, for N-Regu with TDOA permutation, N-Regu with correlation permutation, C-PALM with correlation permutation, Bin-wise and Full-rank).
Fig. 3. Separation performance of different algorithms with oracle permutation alignment as a function of the reverberation time RT60 in the noiseless case.
Fig. 4. Performance of different algorithms as a function of the sparsity level in the noiseless case, RT60 = 130 ms.
Fig. 5. Separation performance of different algorithms as a function of the input SNR for RT60 = 130 ms.
Fig. 6. Separation performance of different algorithms with oracle permutation alignment as a function of the input SNR for RT60 = 130 ms.
Fig. 7. Separation performance of different algorithms as a function of the sparsity level for RT60 = 130 ms and input SNR = 15 dB.
Fig. 8. Separation performance of different algorithms as a function of the reverberation time RT60 with input SNR = 15 dB.
Fig. 9. Different settings of source positions for synthesized mixtures without input noise.
Fig. 10. Separation performance of different algorithms for different source positions in the noiseless case, RT60 = 130 ms.
TABLE I. EXPERIMENTAL CONDITIONS
                              synthesized               live recorded
Number of microphones         M = 2                     M = 2
Number of sources             N = 3                     N = 3, 4
Duration of signals           6 s                       10 s
Reverberation time (RT60)     50, 130, 250, 400 ms      130, 250 ms
Sample rate                   11 kHz                    16 kHz
Microphone distance           4 cm                      5 cm, 1 m
STFT window type              Hann                      Hann
STFT window length            512 (46.5 ms)             2048 (128 ms)
STFT window shift             256 (23.3 ms)             512 (32 ms)
TABLE II. SEPARATION RESULTS OF C-PALM FOR LIVE RECORDED MIXTURES FROM SISEC2011 (SDR / SIR / ISR / SAR IN DB)
                     RT60 = 130 ms                                                 RT60 = 250 ms
microphone space     5 cm                           1 m                            5 cm                          1 m
male3                7.65 / 11.38 / 12.10 / 10.65   7.53 / 11.27 / 11.77 / 10.58   5.20 / 7.67 / 9.01 / 8.62     4.98 / 10.62 / 6.67 / 7.04
female3              6.69 / 9.81 / 10.90 / 10.52    9.77 / 14.49 / 14.13 / 13.02   5.29 / 9.16 / 7.77 / 8.75     7.34 / 11.22 / 10.97 / 11.02
male4                3.25 / 4.65 / 6.09 / 6.01      2.34 / 2.15 / 5.16 / 5.47      2.10 / 1.79 / 4.63 / 5.49     3.08 / 4.22 / 6.00 / 6.11
female4              2.36 / 2.05 / 5.37 / 6.53      3.66 / 6.05 / 6.80 / 7.15      2.39 / 2.20 / 5.27 / 6.51     3.12 / 4.51 / 6.07 / 6.84
In this paper, the sparsity level is the percentage of zero elements in a vector or matrix. A higher sparsity level means a sparser vector or matrix.
Hedy Attouch email: [email protected] Alexandre Cabot email: [email protected] CONVERGENCE OF DAMPED INERTIAL DYNAMICS GOVERNED BY REGULARIZED MAXIMALLY MONOTONE OPERATORS Keywords: asymptotic stabilization, damped inertial dynamics, Lyapunov analysis, maximally monotone operators, time-dependent viscosity, Yosida regularization AMS subject classification. 37N40, 46N10, 49M30, 65K05, 65K10, 90C25 . In this last paper, the authors considered the case γ(t) = α t , which is naturally linked to Nesterov's accelerated method. We unify, and often improve the results already present in the literature. Introduction Throughout this paper, H is a real Hilbert space endowed with the scalar product ., . and the corresponding norm . . Let A : H → 2 H be a maximally monotone operator. Given continuous functions γ : [t 0 , +∞[→ R + and λ : [t 0 , +∞[→ R * + where t 0 is a fixed real number, we consider the second-order evolution equation (RIMS) γ,λ ẍ(t) + γ(t) ẋ(t) + A λ(t) (x(t)) = 0, t ≥ t 0 , where A λ = 1 λ I -(I + λA) -1 is the Yosida regularization of A of index λ > 0 (see Appendix A.1 for its main properties). The terminology (RIMS) γ,λ is a shorthand for "Regularized Inertial Monotone System" with parameters γ, λ. Thanks to the Lipschitz continuity properties of the Yosida approximation, this system falls within the framework of the Cauchy-Lipschitz theorem, which makes it a well-posed system for arbitrary Cauchy data. The above system involves two time-dependent positive parameters: the damping parameter γ(t), and the Yosida regularization parameter λ(t). We shall see that, under a suitable tuning of the parameters γ(t) and λ(t), the trajectories of (RIMS) γ,λ converge to solutions of the monotone inclusion 0 ∈ A(x). Indeed, the design of rapidly convergent dynamics and algorithms to solve monotone inclusions is a difficult problem of fundamental importance in many domains: optimization, equilibrium theory, economics and game theory, partial differential equations, statistics, among other subjects. Trajectories of Date: February 19, 2018. (RIMS) γ,λ do so in a robust manner. Indeed, when A is the subdifferential of a closed convex proper function Φ : H → R ∪ {+∞}, we will obtain rates of convergence of the values, which are comparable to the accelerated method of Nesterov. With this respect, as a main advantage of our approach, we can handle nonsmooth functions Φ. 1.1. Introducing the dynamics. The (RIMS) γ,λ system is a natural development of some recent studies concerning rapid inertial dynamics for convex optimization and monotone equilibrium problems. We will rely heavily on the techniques developed in [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] concerning the general damping coefficient γ(t), and in [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF] concerning the general Yosida regularization parameter λ(t). 1.1.1. General damping coefficient γ(t). Some simple observations lead to the introduction of quantities that play a central role in our analysis. Taking A = 0, then A λ = 0, and (RIMS) γ,λ boils down to the linear differential equation ẍ(t) + γ(t) ẋ(t) = 0. Let us multiply this equality by the integrating factor Throughout the paper, we always assume that condition (H 0 ) is satisfied. For s ≥ t 0 , we then define the quantity Γ(s) by The function s → Γ(s) plays a key role in the asymptotic behavior of the trajectories of (RIMS) γ,λ . 
This was brought to light by the authors in the potential case, see [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] (no regularization process was used in this work). The theorem below gathers the main results obtained in [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] for a gradient operator A = ∇Φ. p(t It enlights the basic assumptions on the function γ(t) which give rates of convergence of the values. Theorem (Attouch and Cabot [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]). Let Φ : H → R be a convex function of class C 1 such that argmin Φ = ∅. Let us assume that γ : [t 0 , +∞[→ R + is a continuous function satisfying: (i) +∞ t0 ds p(s) < +∞; (ii) There exist t 1 ≥ t 0 and m < 3 2 such that γ(t)Γ(t) ≤ m for every t ≥ t 1 ; (iii) +∞ t0 Γ(s) ds = +∞. Then every solution trajectory x : [t 0 , +∞[→ H of (IGS) γ ẍ(t) + γ(t) ẋ(t) + ∇Φ(x(t)) = 0, converges weakly toward some x * ∈ argmin Φ, and satisfies the following rates of convergence: Φ(x(t)) -min as t → +∞. The (IGS) γ system was previously studied by Cabot, Engler and Gadat [START_REF] Cabot | On the long time behavior of second order differential equations with asymptotically small dissipation[END_REF][START_REF] Cabot | Second order differential equations with asymptotically small dissipation and piecewise flat potentials[END_REF] in the case of a vanishing damping coefficient γ(t) and for a possibly nonconvex potential Φ. The importance of the dynamics (IGS) γ in the case γ(t) = α/t (α > 1) was highlighted by Su, Boyd and Candés in [START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF]. They showed that taking α = 3 gives a continuous version of the accelerated gradient method of Nesterov. The corresponding rate of convergence for the values is at most of order O(1/t 2 ) as t → +∞. Let us show how this result can be obtained as a consequence of the above general theorem. Indeed, taking γ(t) = α/t gives after some elementary computation Γ(t) = t t 0 α +∞ t t 0 τ α dτ = t α τ -α+1 -α + 1 +∞ t = t α -1 . Then, the condition γ(t)Γ(t) ≤ m with m < 3 2 is equivalent to α > 3. As a consequence, for γ(t) = α/t and α > 3, we obtain the convergence of the trajectories of (IGS) γ and the rates of convergence Φ(x(t)) -min H Φ = o 1 t 2 and ẋ(t) = o 1 t as t → +∞. This result was first established in [START_REF] Attouch | Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping[END_REF] and [START_REF] May | Asymptotic for a second order evolution equation with convex potential and vanishing damping term[END_REF]. Because of its importance, a rich literature has been devoted to the algorithmic versions of these results, see [START_REF] Attouch | Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping[END_REF][START_REF] Attouch | The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1 k 2[END_REF][START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF][START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF][START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF] and the references therein. 
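As a quick numerical illustration of the case γ(t) = α/t (our own toy experiment, not taken from the cited works), the sketch below integrates ẍ(t) + (α/t)ẋ(t) + ∇Φ(x(t)) = 0 for the quadratic Φ(x) = ½‖x‖² with a semi-implicit Euler scheme and monitors t²(Φ(x(t)) − min Φ), which is expected to remain bounded and to decay when α > 3; the step size and initial data are arbitrary choices.

```python
import numpy as np

def integrate_igs(alpha, t0=1.0, T=200.0, h=1e-3):
    """Semi-implicit Euler for  x'' + (alpha/t) x' + grad Phi(x) = 0,  Phi(x) = 0.5*||x||^2."""
    grad = lambda x: x                    # grad Phi for Phi(x) = 0.5*||x||^2, argmin Phi = {0}
    x = np.array([1.0, -2.0])
    v = np.zeros_like(x)
    t = t0
    vals = []
    while t < T:
        v = v + h * (-(alpha / t) * v - grad(x))    # velocity update
        x = x + h * v                               # position update
        t += h
        vals.append(t * t * 0.5 * np.dot(x, x))     # t^2 * (Phi(x(t)) - min Phi)
    return np.array(vals)

vals = integrate_igs(alpha=4.0)
print(vals.max(), vals[-1])   # stays bounded; the tail is small for alpha > 3
```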
The above theorem relies on energetical arguments that are not available in the general framework of monotone operators. It ensues that the expected results in this context are weaker than in the potential case, and require different techniques. That's where the Yosida regularization comes into play. 1.1.2. General regularization parameter λ(t). Our approach is in line with Attouch and Peypouquet [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF] who studied the system (RIMS) γ,λ with a general maximally monotone operator, and in the particular case γ(t) = α/t (the importance of this system has been stressed just above). This approach can be traced back to Álvarez-Attouch [START_REF] Álvarez | The heavy ball with friction dynamical system for convex constrained minimization problems, Optimization[END_REF] and Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF] who studied the equation ẍ(t) + γ ẋ(t) + A(x(t)) = 0, where A is a cocoercive operator. Several variants of the above equation were considered by Bot and Csetnek (see [START_REF] Bot | Second order forward-backward dynamical systems for monotone inclusion problems[END_REF] for the case of a time-dependent coefficient γ(t), and [START_REF] Bot | Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping[END_REF] for a linear anisotropic damping). Cocoercivity plays an important role, not only to ensure the existence of solutions, but also in analyzing their long-term behavior. Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF] proved the weak convergence of the trajectories to zeros of A if the cocoercivity parameter λ and the damping coefficient γ satisfy the condition λγ 2 > 1. Taking into account that for λ > 0, the operator A λ is λ-cocoercive and that A -1 λ (0) = A -1 (0) (see Appendix A.1), we immediately deduce that, under the condition λγ 2 > 1, each trajectory of ẍ(t) + γ ẋ(t) + A λ (x(t)) = 0 converges weakly to a zero of A. In the quest for a faster convergence, in the case γ(t) = α/t, Attouch-Peypouquet introduced a time-dependent regularizing parameter λ(•) satisfying λ(t) × α 2 t 2 > 1 for t ≥ t 0 . So doing, in the case of a general maximal monotone operator, they were able to prove the asymptotic convergence of the trajectories to zeros of A. Our approach will consist in extending these results to the case of a general damping coefficient γ(t), taking advantage of the techniques developed in the above mentioned papers [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] and [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF]. 1.2. Organization of the paper. The paper is divided into three parts. Part A concerns a general maximally monotone operator A. We show that a suitable tuning of the damping parameter and of the Yosida regularization parameter, gives the weak convergence of the trajectories. Then, we specialize our results to some important cases, including the case of the continuous version of the Nesterov method, that is, γ(t) = α t . In part B, we examine the ergodic convergence properties of the trajectories. 
In part C, we consider the case where A is the subdifferential of a closed convex proper function Φ : H → R ∪ {+∞}. In this case, we will obtain rates of convergence of the values. In the Appendix we have collected several lemmas related to Yosida's approximation, to Moreau's envelopes and to the study of scalar differential inequalities that play a central role in the Lyapunov analysis of our system. PART A: DYNAMICS FOR A GENERAL MAXIMALLY MONOTONE OPERATOR In this part, A : H → 2 H is a general maximally monotone operator such that zerA = ∅, and t 0 is a fixed real number. Convergence results Let us first establish the existence and uniqueness of a global solution to the Cauchy problem associated with equation (RIMS) γ,λ . x ∈ C 2 ([t 0 , +∞[, H) to equation (RIMS) γ,λ , satisfying the initial conditions x(t 0 ) = x 0 and ẋ(t 0 ) = v 0 . Proof. The argument is standard and consists in writing (RIMS) γ,λ as a first-order system in H × H. By setting X(t) = x(t) ẋ(t) and F (t, u, v) = v -γ(t)v -A λ(t) (u) , equation (RIMS) γ,λ amounts to the first-order differential system Ẋ(t) = F (t, X(t)). Owing to the To establish the weak convergence of the trajectories of (RIMS) γ,λ , we will apply Opial lemma [START_REF]Weak convergence of the sequence of successive approximations for nonexpansive mappings[END_REF], that we recall in its continuous form. Lemma 2.2 (Opial). Let S be a nonempty subset of H, and let x : [t 0 , +∞[→ H. Assume that (i) for every z ∈ S, lim t→+∞ x(t) -z exists; (ii) every weak sequential limit point of x(t), as t → +∞, belongs to S. Then x(t) converges weakly as t → +∞ to a point in S. We associate to the continuous function γ : [t 0 , +∞[→ R + the function p : [t 0 , +∞[→ R * + given by p(t) = e t t 0 γ(τ ) dτ for every t ≥ t 0 . Under assumption (H 0 ), the function Γ : [t 0 , +∞[→ R * + is then defined by Γ(s) = +∞ s du p(u) p(s) for every s ≥ t 0 . Besides the function Γ, to analyze the asymptotic behavior of the trajectory of the system (RIMS) γ,λ we will also use the quantity Γ(s, t), which is defined by, for any s, t p(s). Suppose that there exists ε ∈]0, 1[ such that for t large enough, (H 1 ) (1 -ε)λ(t)γ(t) ≥ 1 + d dt (λ(t)γ(t)) Γ(t). Then for any global solution x(.) of (RIMS) γ,λ , we have (i) +∞ t0 λ(s)γ(s) ẋ(s) It ensues that ḧ(t) + γ(t) ḣ(t) = ẋ(t) 2 + ẍ(t) + γ(t) ẋ(t), x(t) -z = ẋ(t) 2 -A λ(t) (x(t)), x(t) -z . (4) Since z ∈ zerA = zerA λ(t) , we have A λ(t) (z) = 0. We then deduce from the λ(t)-cocoercivity of A λ(t) that A λ(t) (x(t)), x(t) -z ≥ λ(t) A λ(t) (x(t)) 2 , whence (5) ḧ(t) + γ(t) ḣ(t) ≤ ẋ(t) 2 -λ(t) A λ(t) (x(t)) 2 . Writing that A λ(t) (x(t)) = -ẍ(t) -γ(t) ẋ(t), we have λ(t) A λ(t) (x(t)) 2 = λ(t) ẍ(t) + γ(t) ẋ(t) 2 = λ(t) ẍ(t) 2 + λ(t)γ(t) 2 ẋ(t) 2 + 2 λ(t)γ(t) ẍ(t), ẋ(t) ≥ λ(t)γ(t) 2 ẋ(t) 2 + λ(t)γ(t) d dt ẋ(t) 2 = λ(t)γ(t) 2 - d dt (λ(t)γ(t)) ẋ(t) 2 + d dt (λ(t)γ(t) ẋ(t) 2 ). In view of (5), we infer that ḧ(t) + γ(t) ḣ(t) ≤ -λ(t)γ(t) 2 - d dt (λ(t)γ(t)) -1 ẋ(t) 2 - d dt (λ(t)γ(t) ẋ(t) 2 ). Let's use Lemma B.1 (i) with g(t) = -λ(t)γ(t) 2 -d dt (λ(t)γ(t)) -1 ẋ(t) 2 -d dt (λ(t)γ(t) ẋ(t) 2 ). Set- ting k(t) := h(t 0 ) + ḣ(t 0 ) t t0 du p(u) , we obtain for every t ≥ t 0 , h(t) ≤ k(t) - t t0 Γ(s, t) λ(s)γ(s) 2 - d ds (λ(s)γ(s)) -1 ẋ(s) 2 + d ds (λ(s)γ(s) ẋ(s) 2 ) ds = k(t) - t t0 Γ(s, t) λ(s)γ(s) 2 - d ds (λ(s)γ(s)) -1 ẋ(s) 2 ds -Γ(s, t)λ(s)γ(s) ẋ(s) 2 t t0 + t t0 d ds Γ(s, t) λ(s)γ(s) ẋ(s) 2 ds. Let us observe that Γ(t, t) = 0 and that d ds Γ(s, t) = d ds t s du p(u) p(s) = -1 + γ(s)Γ(s, t). 
Then it follows from the above inequality that h(t) ≤ k(t) - t t0 λ(s)γ(s) -Γ(s, t) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds +Γ(t 0 , t)λ(t 0 )γ(t 0 ) ẋ(t 0 ) 2 . Since Γ(t 0 , t) ≤ Γ(t 0 ) and h(t) ≥ 0, we deduce that (6) t t0 λ(s)γ(s) -Γ(s, t) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds ≤ C 1 , with C 1 := h(t 0 ) + | ḣ(t 0 )| +∞ t0 du p(u) + Γ(t 0 )λ(t 0 )γ(t 0 ) ẋ(t 0 ) 2 . Now observe that Γ(s, t) 1 + d ds (λ(s)γ(s)) ≤ Γ(s, t) 1 + d ds (λ(s)γ(s)) ≤ Γ(s) 1 + d ds (λ(s)γ(s)) . We then infer from (6) that t t0 λ(s)γ(s) -Γ(s) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds ≤ C 1 . By assumption, inequality (H 1 ) holds true for t large enough, say t ≥ t 1 . It ensues that for t ≥ t 1 , t t1 ελ(s)γ(s) ẋ(s) 2 ds ≤ C 1 -C 2 , with C 2 = t1 t0 λ(s)γ(s) -Γ(s) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds. Taking the limit as t → +∞, we find +∞ t1 λ(s)γ(s) ẋ(s) 2 ds ≤ 1 ε (C 1 -C 2 ) < +∞. By using again (H 1 ), we deduce that +∞ t1 Γ(s) ẋ(s) 2 ds < +∞. (ii) Let us come back to inequality [START_REF] Attouch | Asymptotic behavior of coupled dynamical systems with multiscale aspects[END_REF]. Using Lemma B.1 (i) with g(t) = ẋ(t) 2 -λ(t) A λ(t) (x(t)) 2 , we obtain for every t ≥ t 0 , h(t) ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 Γ(s, t) ẋ(s) 2 -λ(s) A λ(s) (x(s)) 2 ds. Since h(t) ≥ 0 and Γ(s, t) ≤ Γ(s), we deduce that t t0 Γ(s, t)λ(s) A λ(s) (x(s)) 2 ds ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 Γ(s) ẋ(s) 2 ds. Recalling from (i) that +∞ t0 Γ(s) ẋ(s) 2 ds < +∞, we infer that for every t ≥ t 0 , t t0 Γ(s, t)λ(s) A λ(s) (x(s)) 2 ds ≤ C 3 , where we have set C 3 := h(t 0 ) + | ḣ(t 0 )| +∞ t0 du p(u) + +∞ t0 Γ(s) ẋ(s) 2 ds. Since Γ(s, t) = 0 for s ≥ t, this yields in turn +∞ t0 Γ(s, t)λ(s) A λ(s) (x(s)) 2 ds ≤ C 3 . Letting t tend to +∞, the monotone convergence theorem then implies that +∞ t0 Γ(s)λ(s) A λ(s) (x(s)) 2 ds ≤ C 3 < +∞. (iii) From inequality (5), we derive that ḧ(t) + γ(t) ḣ(t) ≤ ẋ(t) 2 on [t 0 , +∞[. Recall from (i) that +∞ t0 Γ(s) ẋ(s) 2 ds < +∞. Applying Lemma B.1 (ii) with g(t) = ẋ(t) 2 , we infer that lim t→+∞ h(t) exists. Thus, we have obtained that lim t→+∞ x(t) -z exists for every z ∈ zerA, whence in particular the boundedness of the trajectory x(•). (iv) Using that the operator A λ(t) is 1 λ(t) -Lipschitz continuous and that A λ(t) (z) = 0, we obtain that This proves the first inequality of (iv). For the second one, take the norm of each member of the equality ẍ(t) = -γ(t) ẋ(t) -A λ(t) (x(t)). The triangle inequality yields (7) A λ(t) (x(t)) ≤ 1 λ(t) x(t) -z ≤ C 4 λ(t ẍ(t) ≤ γ(t) ẋ(t) + A λ(t) (x(t)) . The announced majorization of ẍ(t) then follows from ( 7) and ( 8). (v) Recall the estimate of (ii) that we write as ( 9) +∞ t0 Γ(s) λ(s) u(s) 2 ds < +∞, with the function u : [t 0 , +∞[→ H defined by u(t) = λ(t)A λ(t) (x(t)) . By applying [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF]Lemma A.4] with γ = λ(t), δ = λ(s), x = x(t) and y = x(s) with s, t ≥ t 0 , we find λ(t)A λ(t) (x(t)) -λ(s)A λ(s) (x(s)) ≤ 2 x(t) -x(s) + 2 x(t) -z |λ(t) -λ(s)| λ(t) . This shows that the map t → λ(t)A λ(t) (x(t)) is locally Lipschitz continuous, hence almost everywhere differentiable on [t 0 , +∞[. Dividing by t -s with t = s, and letting s tend to t, we infer that u(t) = d dt (λ(t)A λ(t) (x(t))) ≤ 2 ẋ(t) + 2 x(t) -z | λ(t)| λ(t) , for almost every t ≥ t 0 . In view of (8), we deduce that for almost every t large enough, u(t) ≤ 2 C 5 p(t) t t0 p(s) λ(s) ds + 2 C 4 | λ(t)| λ(t) , with C 4 = sup t≥t0 x(t) -z < +∞. 
Recalling the assumption (H 2 ), we obtain the existence of C 6 ≥ 0 such that for almost every t large enough u(t) ≤ C 6 Γ(t) λ(t) . Then we have d dt u(t) 3 ≤ 3 u(t) u(t) 2 ≤ 3 C 6 Γ(t) λ(t) u(t) 2 . Taking account of estimate [START_REF] Baillon | Une remarque sur le comportement asymptotique des semigroupes non linéaires[END_REF], this shows that d dt u(t) 3 + ∈ L 1 (t 0 , +∞). From a classical result, this implies that lim t→+∞ u(t) 3 exists, which entails in turn that lim t→+∞ u(t) exists. Using again the estimate ( 9), together with the assumption (H 3 ), we immediately conclude that lim t→+∞ u(t) = 0. (vi) To prove the weak convergence of x(t) as t → +∞, we use the Opial lemma with S = zerA. Item (iii) shows the first condition of the Opial lemma. For the second one, let t n → +∞ be such that x(t n ) x weakly as n → +∞. By (v), we have lim n→+∞ λ(t n )A λ(tn) (x(t n )) = 0 strongly in H. Since the function λ is minorized by some positive constant on [t 0 , +∞[, we also have lim n→+∞ A λ(tn) (x(t n )) = 0 strongly in H. Passing to the limit in A λ(tn) (x(t n )) ∈ A x(t n ) -λ(t n )A λ(tn) (x(t n )) , and invoking the graph-closedness of the maximally monotone operator A for the weak-strong topology in H × H, we find 0 ∈ A(x). This shows that x ∈ zerA, which completes the proof. (vii) Let us now assume that +∞ t0 Γ(s) λ(s) ds < +∞. Recalling inequality [START_REF] Attouch | The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1 k 2[END_REF], we deduce that +∞ t0 Γ(s) A λ(s) (x(s)) ds < +∞. By applying Lemma B.2 with F (t) = -A λ(t) (x(t)), we obtain that +∞ t0 ẋ(s) ds < +∞, and hence x(t) converges strongly as t → +∞ toward some x ∞ ∈ H. Remark 2.4. When +∞ t0 Γ(s) λ(s) ds < +∞, the trajectories of (RIMS) γ,λ have a finite length, and hence are strongly convergent. However, the limit point is not a zero of the operator A in general. Let us now particularize Theorem 2.3 to the case of a constant parameter λ > 0. In this case, the operator arising in equation (RIMS) γ,λ is constant and equal to the λ-cocoercive operator A λ . On the other hand, it is well-known that every λ-cocoercive operator B : H → H can be viewed as the Yosida regularization A λ of some maximally monotone operator A : H → 2 H , see [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF]Proposition 23.20]. This leads to the following statement. (1 -ε)λγ(s) ≥ (1 + λ| γ(s)|)Γ(s). (10) Then for any global solution x(.) of ẍ(t) + γ(t) ẋ(t) + B(x(t)) = 0, t ≥ t 0 , we have (i) +∞ t0 γ(s) ẋ(s) 2 ds < +∞, and as a consequence +∞ t0 Γ(s) ẋ(s) 2 ds < +∞. (ii) +∞ t0 Γ(s) B(x(s)) 2 ds < +∞. (iii) For any z ∈ zerB, lim t→+∞ x(t) -z exists, and hence x(•) is bounded. (iv) There exists C ≥ 0 such that for t large enough, ẋ(t) ≤ C ∆(t) and ẍ(t) ≤ C γ(t)∆(t) + C. Assuming that +∞ t0 Γ(s) ds = +∞, and that ∆(t) = O(Γ(t)) as t → +∞, the following holds (v) lim t→+∞ B(x(t)) = 0. (vi) There exists x ∞ ∈ zerB such that x(t) x ∞ weakly in H as t → +∞. Finally assume that +∞ t0 Γ(s) ds < +∞. Then we obtain (vii) +∞ t0 ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H. Assume now that the function γ is constant, say γ(t) ≡ γ > 0. In this case, it is easy to check that ( 11) Γ(t) ∼ 1 γ and ∆(t) ∼ 1 γ as t → +∞, see Proposition 3.1. 
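For completeness, here is a short verification of the equivalences (11) for a constant damping coefficient γ(t) ≡ γ > 0, using the definitions of p and Γ recalled above; for ∆ we take ∆(t) = (1/p(t)) ∫_{t0}^{t} p(s) ds, which is our assumption about how ∆ enters the bound in item (iv).

```latex
p(t) = e^{\int_{t_0}^{t}\gamma\,d\tau} = e^{\gamma (t-t_0)}, \qquad
\Gamma(t) = p(t)\int_{t}^{+\infty}\frac{du}{p(u)}
          = e^{\gamma (t-t_0)}\,\frac{e^{-\gamma (t-t_0)}}{\gamma} = \frac{1}{\gamma},
\qquad
\Delta(t) = \frac{1}{p(t)}\int_{t_0}^{t} p(s)\,ds
          = \frac{1-e^{-\gamma (t-t_0)}}{\gamma}\ \longrightarrow\ \frac{1}{\gamma}
\quad (t\to+\infty).
```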
As a consequence of Corollary 2.5, we then obtain the following result that was originally discovered by Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF]. Corollary 2.6 (Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF]). Let λ > 0 and let B : H → H be a λ-cocoercive operator such that zerB = ∅. Let γ > 0 be such that λγ 2 > 1. Then for any global solution x(.) of (12) ẍ(t) + γ ẋ(t) + B(x(t)) = 0, t ≥ t 0 , we have (i) +∞ t0 ẋ(s) 2 ds < +∞. (ii) +∞ t0 B(x(s)) 2 ds < +∞. (iii) For any z ∈ zerB, lim t→+∞ x(t) -z exists, and hence x(•) is bounded. (iv) lim t→+∞ ẋ(t) = 0 and lim t→+∞ ẍ(t) = 0. (v) lim t→+∞ B(x(t)) = 0. (vi) There exists x ∞ ∈ zerB such that x(t) x ∞ weakly in H as t → +∞. Proof. Since γ(t) ≡ γ > 0, we have the equivalences [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF] as t → +∞. It ensues that condition [START_REF] Balti | Asymptotic for the perturbed heavy ball system with vanishing damping term[END_REF] is guaranteed by λγ 2 > 1. All points are then obvious consequences of Corollary 2.5, except for (iv). Corollary 2.5 (iv) shows that the acceleration ẍ is bounded on [t 0 , +∞[. Taking account of (i), we deduce classically that lim t→+∞ ẋ(t) = 0. In view of equation ( 12) and the fact that lim t→+∞ B(x(t)) = 0 by (v), we conclude that lim t→+∞ ẍ(t) = 0. Application to particular classes of functions γ and λ We now look at special classes of functions γ and λ, for which we are able to estimate precisely the quantities +∞ t ds p(s) and t t0 p(s) λ(s) ds as t → +∞. This consists of the differentiable functions γ, λ : [t 0 , +∞[→ R * + satisfying (13) lim t→+∞ γ(t) γ(t) 2 = -c and lim t→+∞ d dt (λ(t)γ(t)) λ(t)γ(t) 2 = -c , for some c ∈ [0, 1[ and c > -1. Some properties of the functions γ satisfying the first condition above were studied by Attouch-Cabot [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF], in connection with the asymptotic behavior of the inertial gradient system (IGS) γ . The next proposition extends some of these properties. We now show that the key condition (H 1 ) of Theorem 2.3 takes a simple form for functions γ and λ satisfying conditions [START_REF] Bot | Second order forward-backward dynamical systems for monotone inclusion problems[END_REF]. Proposition 3.2. Let γ, λ : [t 0 , +∞[→ R * + be two differentiable functions satisfying conditions (13) for some c ∈ [0, 1[ and c ∈] -1, 1[ such that |c | < 1 -c. Then condition (H 1 ) is equivalent to (15) lim inf t→+∞ λ(t)γ(t) 2 > 1 1 -c -|c | . Proof. The inequality arising in condition (H 1 ) can be rewritten as (16) (1 -ε)λ(t) γ(t) Γ(t) - d dt (λ(t)γ(t)) ≥ 1. The assumption lim t→+∞ γ(t) γ(t) 2 = -c implies that Γ(t) ∼ 1 (1-c) γ(t) as t → +∞, see Proposition 3.1 (i). It ensues that (17) λ(t) γ(t) Γ(t) = (1 -c)λ(t)γ(t) 2 + o(λ(t)γ(t) 2 ) as t → +∞. On the other hand, we deduce from the second condition of ( 13) that (18) d dt (λ(t)γ(t)) = |c |λ(t)γ(t) 2 + o(λ(t)γ(t) 2 ) as t → +∞. In view of ( 17) and ( 18), inequality [START_REF] Brézis | Nonlinear ergodic theorems[END_REF] amounts to λ(t)γ(t) 2 [(1 -ε)(1 -c) -|c | + o(1)] ≥ 1 as t → +∞. Therefore condition (H 1 ) is equivalent to the existence of ε ∈]0, 1 -c -|c |[ such that λ(t)γ(t) 2 [1 -c -|c | -ε ] ≥ 1, for t large enough. 
This last condition is equivalent to [START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF], which ends the proof. ∈] -1, 1[ such that |c | < 1 -c. Assume moreover that lim inf t→+∞ λ(t)γ(t) 2 > 1 1 -c -|c | . Then for any global solution x(.) of (RIMS) γ,λ , we have (i) +∞ t0 λ(s)γ(s) ẋ(s) 2 ds < +∞. (ii) +∞ t0 λ(s) γ(s) A λ(s) (x(s)) 2 ds < +∞. (iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded. (iv) ẋ(t) = O 1 λ(t)γ(t) and ẍ(t) = O 1 λ(t) as t → +∞. Assuming that +∞ t0 1 λ(s)γ(s) ds = +∞ and that | λ(t)| = O 1 γ(t) as t → +∞, the following holds (v) lim t→+∞ λ(t)A λ(t) (x(t)) = 0. (vi) Γ(t) ∼ 1 (1 -c) γ(t) and 1 p(t) t t0 p(s) λ(s) ds ∼ 1 (1 + c )λ(t)γ(t) as t → +∞. It ensues that the first condition of (H 2 ) is automatically satisfied, while the second one is given by | λ(t)| = O 1 γ(t) as t → +∞. Condition (H 3 ) is implied by the assumption +∞ t0 1 λ(s)γ(s) ds = +∞. Items (i)-(vii) follow immediately from the corresponding points in Theorem 2.3. Let us now particularize to the case γ(t) = α t q and λ(t) = β t r , for some α, β > 0, q ≥ -1 and r ∈ R. Corollary 3.4. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Assume that γ(t) = α t q and λ(t) = β t r for every t ≥ t 0 > 0. Suppose that (q, r) ∈ ] -1, +∞[×R is such that 2q + r ≥ 0, and that (α, β) ∈ R * + × R * + satisfies α 2 β > 1 if 2q + r = 0 (no condition if 2q + r > 0) . Then for any global solution x(.) of (RIMS) γ,λ , we have (i) +∞ t0 s q+r ẋ(s) 2 ds < +∞. (ii) +∞ t0 s r-q A λ(s) (x(s)) 2 ds < +∞. (iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded. (iv) ẋ(t) = O 1 t q+r and ẍ(t) = O 1 t r as t → +∞. Assuming that q + r ≤ 1, the following holds (v) lim t→+∞ t r A λ(t) (x(t)) = 0. (vi) If r ≥ 0, there exists x ∞ ∈ zerA such that x(t) x ∞ weakly in H as t → +∞. Finally assume that q + r > 1. Then we obtain (vii) +∞ t0 ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H. Proof. Since q > -1, the first (resp. second) condition of ( 13) is satisfied with c = 0 (resp. c = 0). On the other hand, we have λ(t)γ(t ) 2 = α 2 β t 2q+r , hence lim t→+∞ λ(t)γ(t) 2 = +∞ if 2q + r > 0 α 2 β if 2q + r = 0. It ensues that the condition lim inf t→+∞ λ(t)γ(t) 2 > 1 is guaranteed by the hypotheses of Corollary 3.4. Conditions When q = r = 0, the functions γ and λ are constant: γ(t) ≡ α > 0 and λ(t) ≡ β > 0. We then recover the result of [6, Theorem 2.1] with the key condition α 2 β > 1. To finish, let us consider the case q = -1, thus leading to a damping parameter of the form γ(t) = α t . This case was recently studied by Attouch and Peypouquet [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF] in the framework of Nesterov's accelerated methods. Corollary 3.5. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Let r ≥ 2, α > r and β ∈ R * + be such that β > 1 α(α-r) if r = 2 (no condition on β if r > 2). Assume that γ(t) = α t and λ(t) = β t r for every t ≥ t 0 > 0. Then for any global solution x(.) of (RIMS) γ,λ , we have (i) +∞ t0 s r-1 ẋ(s) 2 ds < +∞. (ii) +∞ t0 s r+1 A λ(s) (x(s)) 2 ds < +∞. (iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded. (iv) ẋ(t) = O 1 t r-1 and ẍ(t) = O 1 t r as t → +∞. Assuming that r = 2, the following holds (v) lim t→+∞ t 2 A λ(t) (x(t)) = 0. (vi) There exists x ∞ ∈ zerA such that x(t) x ∞ weakly in H as t → +∞. Finally assume that r > 2. 
Then we obtain (vii) +∞ t0 ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H. Proof. The first (resp. second) condition of ( 13) is satisfied with c = 1 α (resp. c = 1-r α ). Since r ≥ 2 and α > r, we have c ∈]0, 1/2[ and |c | = r -1 α < α -1 α = 1 -c. On the other hand, observe that λ(t)γ(t ) 2 = α 2 β t r-2 , hence lim t→+∞ λ(t)γ(t) 2 = +∞ if r > 2 α 2 β if r = 2. Condition lim inf t→+∞ λ(t)γ(t) 2 > 1 1-c-|c | is automatically satisfied if r > 2, while it amounts to α 2 β > 1 1 -1 α -r-1 α = α α -r ⇐⇒ β > 1 α(α -r) if r = 2. Items (i)-(vii) follow immediately from the corresponding points in Corollary 3.3. Taking r = 2 in the previous corollary, we recover the result of [8, Theorem 2.1] as a particular case. PART B: ERGODIC CONVERGENCE RESULTS Let A : H → 2 H be a maximally monotone operator. The trajectories associated to the semigroup of contractions generated by A are known to converge weakly in average toward some zero of A, cf. the seminal paper by Brezis and Baillon [START_REF] Baillon | Une remarque sur le comportement asymptotique des semigroupes non linéaires[END_REF]. Our purpose in this part of the paper is to study the ergodic convergence of the solutions of the system (RIMS) γ,λ . When the regularizing parameter λ(•) is minorized by some positive constant, it is established in part A that the trajectories of (RIMS) γ,λ do converge weakly toward a zero of A, see Theorem 2.3 (vi). Our objective is to show that weak ergodic convergence can be expected when the regularization parameter λ(t) tends toward 0 as t → +∞. The key ingredient is the use of some suitable ergodic variant of the Opial lemma. 4. Weak ergodic convergence of the trajectories 4.1. Ergodic variants of Opial's lemma. Ergodic versions of the Opial lemma were derived by Brézis-Browder [START_REF] Brézis | Nonlinear ergodic theorems[END_REF] and Passty [START_REF] Passty | Ergodic convergence to a zero of the sum of monotone operators in Hilbert space[END_REF] Λ(s, t) x(s) ds. Lemma B.4 in the appendix shows that the map x is well-defined, bounded and that convergence of x(t) as t → +∞ implies convergence of x(t) toward the same limit (Cesaro property). The extension of Opial lemma to a general averaging process satisfying [START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF] and ( 20) is given hereafter. This result was established in [START_REF] Attouch | Asymptotic behavior of coupled dynamical systems with multiscale aspects[END_REF] for the particular case corresponding to Λ(s, t) = 1 t if s ≤ t and Λ(s, t) = 0 if s > t. Proposition 4.1. Let S be a nonempty subset of H and let x : [t 0 , +∞[→ H be a continuous map, supposed to be bounded on [t 0 , +∞[. Let Λ : [t 0 , +∞[×[t 0 , +∞[→ R + be a measurable function satisfying [START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF] and [START_REF] Haraux | Systèmes dynamiques dissipatifs et applications[END_REF], and let x : [t 0 , +∞[→ H be the averaged trajectory defined by [START_REF] Imbert | Convex Analysis techniques for Hopf-Lax formulae in Hamilton-Jacobi equations[END_REF]. Assume that (i) for every z ∈ S, lim t→+∞ x(t) -z exists; (ii) every weak sequential limit point of x(t), as t → +∞, belongs to S. Then x(t) converges weakly as t → +∞ to a point in S. Proof. From Lemma B.4 (i), the map x is bounded, therefore it is enough to establish the uniqueness of weak limit points. 
Let ( x(t n )) and ( x(t m )) be two weakly converging subsequences satisfying respectively x(t n ) x 1 as n → +∞ and x(t m ) x 2 as m → +∞. From (ii), the weak limit points x 1 and x 2 belong to S. In view of (i), we deduce that lim t→+∞ x(t) -x 1 2 and lim t→+∞ x(t) -x 2 2 exist. Writing that x(t) -x 1 2 -x(t) -x 2 2 = 2 x(t) - x 1 + x 2 2 , x 2 -x 1 , we infer that lim t→+∞ x(t), x 2 -x 1 exists. Observe that x(t), x 2 -x 1 = +∞ t0 Λ(s, t) x(s) ds, x 2 -x 1 = +∞ t0 Λ(s, t) x(s), x 2 -x 1 ds. By applying Lemma B.4 (ii) to the real-valued map t → x(t), x 2 -x 1 , we deduce that lim t→+∞ x(t), x 2x 1 exists. This implies that lim n→+∞ x(t n ), x 2 -x 1 = lim m→+∞ x(t m ), x 2 -x 1 , which entails that x 1 , x 2 -x 1 = x 2 , x 2 -x 1 . We conclude that x 2 -x 1 2 = 0, which ends the proof. Assume that (i) for every z ∈ S, lim t→+∞ x(t) -z exists; (ii) every weak sequential limit point of x(t), as t → +∞, belongs to S. Then x(t) converges weakly as t → +∞ to a point in S. Proof. Just check that conditions [START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF] and [START_REF] Haraux | Systèmes dynamiques dissipatifs et applications[END_REF] Γ(s) ds = +∞ we deduce that lim t→+∞ t t0 Γ(u, t) du = +∞, see [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]. We deduce from the above inequality that lim t→+∞ T t0 Λ(s, t) ds = 0, hence property (20) is satisfied. It ensues that Proposition 4.1 can be applied, which ends the proof. 4.2. Ergodic convergence of the trajectories. To each solution x(.) of (RIMS) γ,λ , we associate the averaged trajectory x(.) defined by x(t) = 1 t t0 Γ(s, t) ds t t0 Γ(s, t) x(s) ds. We show that under suitable conditions, every averaged trajectory x(.) converges weakly as t → +∞ toward some zero of the operator A. x ∞ weakly in H as n → +∞. Let us fix (z, q) ∈ gphA and define the function h : [t 0 , +∞[→ R + by h(t) = 1 2 x(t) -z 2 . Since q ∈ A(z) and A λ(t) (x(t)) ∈ A x(t) -λ(t)A λ(t) (x(t)) , the monotonicity of A implies that x(t) -λ(t)A λ(t) (x(t)) -z, A λ(t) (x(t)) -q ≥ 0, hence x(t) -z, A λ(t) (x(t)) ≥ λ(t) A λ(t) (x(t)) 2 + x(t) -λ(t)A λ(t) (x(t)) -z, q ≥ x(t) -λ(t)A λ(t) (x(t)) -z, q . Recalling equality (4), we obtain for every t ≥ t 0 , ḧ(t) + γ(t) ḣ(t) ≤ ẋ(t) 2 -x(t) -λ(t)A λ(t) (x(t)) -z, q . Using Lemma B.1 (i) with g(t) = ẋ(t) 2 -x(t) -λ(t)A λ(t) (x(t)) -z, q , we obtain for every t ≥ t 0 , h(t) ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 Γ(s, t) ẋ(s) 2 -x(s) -λ(s)A λ(s) (x(s)) -z, q ds. Since h(t) ≥ 0 and Γ(s, t) ≤ Γ(s), we deduce that t t0 Γ(s, t) x(s) -λ(s)A λ(s) (x(s)) -z, q ds ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 Γ(s) ẋ(s) 2 ds. Recalling the assumption +∞ t0 du p(u) < +∞ and the estimate +∞ t0 Γ(s) ẋ(s) 2 ds < +∞ (see Theorem 2.3 (i)), we infer that for every t ≥ t 0 , [START_REF] Nesterov | A method of solving a convex programming problem with convergence rate O(1/k 2 )[END_REF] t t0 Γ(s, t) x(s) -λ(s)A λ(s) (x(s)) -z, q ds ≤ C, where we have set C := h(t 0 ) + | ḣ(t 0 )| +∞ t0 du p(u) + +∞ t0 Γ(s) ẋ(s) 2 ds. It ensues that t t0 Γ(s, t) x(s) -z, q ds ≤ C + t t0 Γ(s, t) λ(s)A λ(s) (x(s)), q ds ≤ C + q t t0 Γ(s, t)λ(s) A λ(s) (x(s)) ds. This can be rewritten as t t0 Γ(s, t)(x(s) -z) ds, q ≤ C + q t t0 Γ(s, t)λ(s) A λ(s) (x(s)) ds. 
Dividing by t t0 Γ(s, t) ds, we find [START_REF]Weak convergence of the sequence of successive approximations for nonexpansive mappings[END_REF] x(t) -z, q ≤ C t t0 Γ(s, t) ds + q t t0 Γ(s, t) ds t t0 Γ(s, t)λ(s) A λ(s) (x(s)) ds. The assumption +∞ t0 Γ(s) ds = +∞ implies that lim t→+∞ t t0 Γ(s, t) ds = +∞, see [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]. On the other hand, we have lim t→+∞ λ(t)A λ(t) (x(t)) = 0 by Theorem 2.3 (v). From the Cesaro property, we infer that 1 t t0 Γ(s, t) ds t t0 Γ(s, t)λ(s) A λ(s) (x(s)) ds → 0 as t -→ +∞, see Lemma B.4 (ii). Taking the limit as t → +∞ in inequality [START_REF]Weak convergence of the sequence of successive approximations for nonexpansive mappings[END_REF], we then obtain lim sup t→+∞ x(t) -z, q ≤ 0. Recall that the sequence (t n ) is such that x(t n ) x ∞ weakly in H as n → +∞, hence x(t n ) -z, q → x ∞ -z, q as n → +∞. From what precedes, we deduce that x ∞ -z, q ≤ 0 for every (z, q) ∈ gphA. Since the operator A is maximally monotone, we infer that 0 ∈ A(x ∞ ). We have proved that x ∞ ∈ zerA, which shows that condition (ii) of Corollary 4.2 is satisfied. Let us now consider the alternate averaged trajectory x defined by In view of assumption ( 25), we then obtain [START_REF] Passty | Ergodic convergence to a zero of the sum of monotone operators in Hilbert space[END_REF]. By applying Lemma B.5, we infer that lim t→+∞ x(t)x(t) = 0. On the other hand, Theorem 4.3 shows that there exists x ∞ ∈ zerA such that x(t) x ∞ weakly in H as t → +∞. We then conclude that x(t) x(t) = 1 t t0 Γ( x ∞ weakly in H as t → +∞. Now assume that the function Γ : [t 0 , +∞[→ R + is such that Γ(s) ∼ Γ(s) λ(t)γ(t) 2 > 1; (b) | λ(t)| = O (1/γ(t)) as t → +∞; (c) +∞ t0 ds λ(s)γ(s) = +∞; (d) +∞ t0 ds γ(s) = +∞. Then for any global solution x(.) of (RIMS) γ,λ , there exists x ∞ ∈ zerA such that Let us now particularize to the case γ(t) = α t q and λ(t) = β t r , for some α, β > 0, q ∈] -1, 1] and r ∈ R. Corollary 4.6. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Assume that γ(t) = α t q and λ(t) = β t r for every t ≥ t 0 > 0. Let (q, r) ∈ ] -1, 1] × R be such that q + r ≤ 1 and 2q + r ≥ 0, and let (α, β) ∈ R * + × R * + be such that α 2 β > 1 if 2q + r = 0 (no condition if 2q + r > 0) . Then for any global solution x(.) of (RIMS) γ,λ , there exists x ∞ ∈ zerA such that 1 t t0 ds s q t t0 x(s) s q ds x ∞ weakly in H as t → +∞. Proof. The conditions of ( 27) are guaranteed by q > -1. On the other hand, we have λ(t)γ(t) = +∞ amount respectively to q + r ≤ 1, which holds true by assumption. The condition +∞ t0 ds γ(s) = +∞ is implied by q ≤ 1. Then just apply Corollary 4.5. PART C: THE SUBDIFFERENTIAL CASE Let us particularize our study to the case A = ∂Φ, where Φ : H → R ∪ {+∞} is a convex lower semicontinuous proper function. Then A λ = ∇Φ λ is equal to the gradient of Φ λ : H → R, which is the Moreau envelope of Φ of index λ > 0. Let us recall that, for all x ∈ H (30) Φ λ (x) = inf ξ∈H Φ(ξ) + 1 2λ x -ξ 2 . In this case, we will study the rate of convergence of the values, when the time t goes to +∞, of the trajectories of the second-order differential equation (RIGS) γ,λ ẍ(t) + γ(t) ẋ(t) + ∇Φ λ(t) (x(t)) = 0, called the Regularized Inertial Gradient System with parameters γ, λ. As a main feature, the above system involves two time-dependent positive parameters: the Moreau regularization parameter λ(t), and the damping parameter γ(t). 
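Before developing the analysis, here is a minimal numerical sketch of (RIGS)_{γ,λ} in dimension one for the nonsmooth function Φ(x) = |x|. All choices below (Φ, the parameters α = 4, β = 0.15, the horizon and the explicit Euler discretization) are illustrative assumptions on our part; they satisfy the condition β > 1/(α(α − 2)) of Corollary 3.5 for r = 2. The soft-thresholding expression used for prox_{λΦ} is standard, and the gradient of the Moreau envelope is computed through the identity ∇Φ_λ(x) = (x − prox_{λΦ}(x))/λ recalled in Section 5.1 below.

```python
import math

alpha, beta = 4.0, 0.15                        # alpha > 2 and beta > 1/(alpha*(alpha-2))
prox = lambda x, lam: math.copysign(max(abs(x) - lam, 0.0), x)   # prox of lam*|.| (soft-thresholding)
grad_env = lambda x, lam: (x - prox(x, lam)) / lam               # grad Phi_lam(x) = clip(x/lam, -1, 1)

t, x, v, dt, T = 1.0, 5.0, 0.0, 1e-3, 300.0
while t < T:                                   # explicit Euler on x'' + (alpha/t) x' + grad Phi_{lambda(t)}(x) = 0
    lam = beta * t * t                         # lambda(t) = beta * t^2
    a = -(alpha / t) * v - grad_env(x, lam)
    x, v, t = x + dt * v, v + dt * a, t + dt

print(abs(x), abs(v))                          # x(t) approaches argmin |.| = {0}, v(t) vanishes
```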
System (RIGS) γ,λ comes as a natural development of several recent studies concerning fast inertial dynamics and algorithms for convex optimization. Indeed, when Φ is a smooth convex function, it was highlighted that the fact of taking a vanishing damping coefficient γ(t) in system (IGS) γ ẍ(t) + γ(t) ẋ(t) + ∇Φ(x(t)) = 0, is a key property for obtaining fast optimization methods. Precisely Su, Boyd and Candès [START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF] showed that, in the particular case γ(t) = 3 t , (IGS) γ is a continuous version of the fast gradient method initiated by Nesterov [START_REF] Nesterov | A method of solving a convex programming problem with convergence rate O(1/k 2 )[END_REF], with Φ(x(t)) -min H Φ = O( 1 t 2 ) in the worst case. Attouch and Peypouquet [START_REF] Attouch | The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1 k 2[END_REF] and May [START_REF] May | Asymptotic for a second order evolution equation with convex potential and vanishing damping term[END_REF] have improved this result by showing that Φ(x(t)) -min H Φ = o( 1 t 2 ) for γ(t) = α t with α > 3. Recently, in the case of a general damping function γ(•), the study of the speed of convergence of trajectories of (IGS) γ was developed by Attouch-Cabot in [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]. Note that a main advantage of (RIGS) γ,λ over (IGS) γ is that Φ is just assumed to be lower semicontinuous (not necessarily smooth). In line with these results, by jointly adjusting the tuning of the two parameters in (RIGS) γ,λ , we will obtain fast convergence results for the values. Convergence rates and weak convergence of the trajectories The following assumptions and notations will be needed throughout this section:            Φ : H → R ∪ {+∞} convex, lower semicontinuous, proper, bounded from below, argmin Φ = ∅; γ : [t 0 , +∞[→ R + continuous, with t 0 ∈ R; λ : [t 0 , +∞[→ R * + continuously differentiable, nondecreasing; x : [t 0 , +∞[→ H the solution to (RIGS) γ,λ , with initial conditions x(t 0 ) = x 0 , ẋ(t 0 ) = v 0 ; ξ(t) = prox λ(t)Φ (x(t)) for t ≥ t 0 . 5.1. Preliminaries on Moreau envelopes. For classical facts about the Moreau envelopes we refer the reader to [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF][START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF][START_REF] Parikh | Proximal algorithms[END_REF][START_REF] Peypouquet | Convex optimization in normed spaces: theory, methods and examples[END_REF]. We point out the following properties that will be useful in the sequel: (i) λ ∈]0, +∞[ → Φ λ (x) is nonincreasing for all x ∈ H; (ii) inf H Φ = inf H Φ λ for all λ > 0; (iii) argmin Φ = argmin Φ λ for all λ > 0. It turns out that it is convenient to consider the Moreau envelope as a function of the two variables x ∈ H and λ ∈]0, +∞[. Its differentiability properties with respect to (x, λ) play a crucial role in our analysis. a. Let us first recall some classical facts concerning the differentiability properties with respect to x of the Moreau envelope x → Φ λ (x). The infimum in ( 30) is achieved at a unique point (31) prox λΦ (x) = argmin ξ∈H Φ(ξ) + 1 2λ x -ξ 2 , which gives Φ λ (x) = Φ(prox λΦ (x)) + 1 2λ x -prox λΦ (x) 2 . 
Writing the optimality condition for (31), we get prox λΦ (x) + λ∂Φ (prox λΦ (x)) x, that is prox λΦ (x) = (I + λ∂Φ) -1 (x). Thus, prox λΦ is the resolvent of index λ > 0 of the maximally monotone operator ∂Φ. As a consequence, the mapping prox λΨ : H → H is firmly expansive. For any λ > 0, the function x → Φ λ (x) is continuously differentiable, with ∇Φ λ (x) = 1 λ (x -prox λΦ (x)) . Equivalently ∇Φ λ = 1 λ I -(I + λ∂Φ) -1 = (∂Φ) λ which is the Yosida approximation of the maximally monotone operator ∂Φ. As such, ∇Φ λ is Lipschitz continuous, with Lipschitz constant 1 λ , and Φ λ ∈ C 1,1 (H). b. A less known result is the C 1 -regularity of the function λ → Φ λ (x), for any x ∈ H. Its derivative is given by (32) d dλ Φ λ (x) = - 1 2 ∇Φ λ (x) 2 . This result is known as the Lax-Hopf formula for the above first-order Hamilton-Jacobi equation, see [2, Remark 3.32; Lemma 3.27], and [START_REF] Imbert | Convex Analysis techniques for Hopf-Lax formulae in Hamilton-Jacobi equations[END_REF]. A proof is given in Lemma A.1 for the convenience of the reader. As a consequence of the semi-group property satisfied by the orbits of the autonomous evolution equation (32), for any x ∈ H, λ > 0 and µ > 0, (Φ λ ) µ (x) = Φ (λ+µ) (x). (33) 5.2. Preliminary estimates. Let us introduce functions W , h z , of constant use in this section. Global energy. The global energy of the system W : [t 0 , +∞[→ R + is given by W (t) = 1 2 ẋ(t) 2 + Φ λ(t) (x(t)) -min H Φ. Since inf H Φ = inf H Φ λ , we have W ≥ 0. From (RIGS) γ,λ and property (32), we immediately obtain the following equality Ẇ (t) = -γ(t) ẋ(t) 2 - λ(t) 2 ∇Φ λ(t) (x(t)) 2 . ( 34 ) As a direct consequence of (34), we obtain the following results. Proposition 5.1. The function W is nonincreasing, and hence W ∞ := lim t→+∞ W (t) exists. In addition, sup t≥t0 ẋ(t) < +∞, ∞ t0 γ(t) ẋ(t) 2 dt < +∞ and ∞ t0 λ(t) ∇Φ λ(t) (x(t)) 2 dt < +∞. Proof. From (34), and λ nondecreasing, we deduce that Ẇ (t) ≤ 0. Hence, W is nonincreasing. Since W is nonnegative, W ∞ := lim t→+∞ W (t) exists. After integrating (34) from t 0 to t, we get W (t) -W (t 0 ) + t t0 γ(s) ẋ(s) 2 ds + 1 2 t t0 λ(s) ∇Φ λ(s) (x(s)) 2 ds ≤ 0. By definition of W , and using again that inf H Φ = inf H Φ λ , it follows that 1 2 ẋ(t) 2 + t t0 γ(s) ẋ(s) 2 ds + 1 2 t t0 λ(s) ∇Φ λ(s) (x(s)) 2 ds ≤ W (t 0 ). This being true for any t ≥ t 0 , we get the conclusion. Φ) -Γ(t) ∇Φ λ(t) (x(t)), x(t) -x = 2Γ(t) Γ(t)(Φ λ(t) (x(t)) -min H Φ) -Γ(t) ∇Φ λ(t) (x(t)), x(t) -x . In the above calculation, we have neglected the term -Γ(t) 2 λ(t) 2 ∇Φ λ(t) (x(t)) 2 which is less or equal than zero, because λ(•) is a nondecreasing function. To obtain the last equality, we have used again the equality -Γ(t)γ(t) + Γ(t) + 1 = 0. Let us now use the convexity of Φ λ(t) and equality (37) to obtain Ė(t) ≤ -(Γ(t) -2Γ(t) Γ(t)) (Φ λ(t) (x(t)) -min H Φ) = -Γ(t)(3 -2γ(t)Γ(t)) (Φ λ(t) (x(t)) -min H Φ). When (K 1 ) is satisfied, we have 3 -2γ(t)Γ(t) ≥ 0. Since Γ(t) and Φ λ(t) (x(t)) -min H Φ are nonnegative, we deduce that Ė(t) ≤ 0. (i) For every t ≥ t 1 , we have 2 , we obtain that lim t→+∞ h(t) exists. This shows the first point of the Opial lemma. Let us now verify the second point. Let x(t k ) converge weakly to x ∞ as k → +∞. Point (i) implies that ξ(t k ) also converges weakly to x ∞ as k → +∞. 
Since the function Φ is convex and lower semicontinuous, it is semicontinuous for the weak topology, hence satisfies Φ λ(t) (x(t)) -min H Φ ≤ E(t 1 ) Γ(t) [t 0 , +∞[→ R + defined by g(t) = ẋ(t) Φ(x ∞ ) ≤ lim inf t→+∞ Φ(ξ(t k )) = lim t→+∞ Φ(ξ(t)) = min H Φ, cf. the last point of Theorem 5.6. It ensues that x ∞ ∈ argmin Φ, which establishes the second point of the Opial lemma, and ends the proof. This will immediately give our result, since, by the derivation chain rule, d dλ Φ λ (x) = d dλ 1 λ × λΦ λ (x) = 1 λ Φ(J λ (x)) - 1 λ 2 λΦ λ (x) = - 1 λ Φ λ (x) -Φ(J λ (x)) = - 1 2λ 2 x -J λ (x) 2 = - 1 2 ∇Φ λ (x) 2 . To obtain (47), take two values λ 1 and λ 2 of the parameter λ, and compare the corresponding values of the function λ → λΦ λ (x). By the formulation (46) of λΦ λ (x) as an infimal value, we have λ 1 Φ λ1 (x) -λ 2 Φ λ2 (x) ≤ λ 1 Φ(J λ2 (x)) + 1 2 x -J λ2 (x) 2 -λ 2 Φ(J λ2 (x)) - 1 2 x -J λ2 (x) 2 = (λ 1 -λ 2 )Φ(J λ2 (x)). Exchanging the roles of λ 1 and λ 2 , we obtain (λ 1 -λ 2 )Φ(J λ1 (x)) ≤ λ 1 Φ λ1 (x) -λ 2 Φ λ2 (x) ≤ (λ 1 -λ 2 )Φ(J λ2 (x)). Then note that the mapping λ → Φ(J λ (x)) is continuous. This follows from (46) and the continuity of the mappings λ → Φ λ (x) and λ → J λ (x). Indeed, these mappings are locally Lipschitz continuous. This is a direct consequence of the resolvent equations (33), see [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF]Proposition 23.28] for further details. Then divide the above formula by λ 1 -λ 2 (successively examining the two cases λ 1 < λ 2 , then λ 2 < λ 1 ). Letting λ 1 → λ 2 , and using the continuity of λ → Φ(J λ (x)) gives the differentiability of the mapping λ → λΦ λ (x), and formula (47). Then, writing Φ λ (x) = 1 λ (λΦ λ (x)), and applying the derivation chain rule gives (45). The continuity of λ → ∇Φ λ (x) gives the continuous differentiability of λ → Φ λ (x). Appendix B. Some auxiliary results In this section, we present some auxiliary lemmas that are used throughout the paper. The following result allows us to establish some majorization and also the convergence as t → +∞ of a real-valued function satisfying some differential inequality. Γ(s) w(s) ds = 0. Lemma The conclusion follows from the two above relations. Given a Banach space (X , . ) and a bounded map x : [t 0 , +∞[→ X , the next lemma gives basic properties of the averaged trajectory x defined by [START_REF] Imbert | Convex Analysis techniques for Hopf-Lax formulae in Hamilton-Jacobi equations[END_REF]. Lemma B.4. Let us give (X , . ) a Banach space, Λ : [t 0 , +∞[×[t 0 , +∞[→ R + a measurable function satisfying [START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF], and x : [t 0 , +∞[→ X a bounded map. Then we have (i) For every t ≥ t 0 , the vector x(t) = +∞ t0 Λ(s, t) x(s) ds is well-defined. The map x is bounded and sup t≥t0 x(t) ≤ sup t≥t0 x(t) . (ii) Assume moreover that the function Λ satisfies [START_REF] Haraux | Systèmes dynamiques dissipatifs et applications[END_REF]. If lim t→+∞ x(t) = x ∞ for some x ∞ ∈ X , then lim t→+∞ x(t) = x ∞ . Proof. (i) Let us set M = sup t≥t0 x(t) < +∞. In view of [START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF], observe that for every t ≥ t 0 , Λ(s, t) x(s) ds, and hence x(t) ≤ M in view of (54). (ii) Assume that lim t→+∞ x(t) = x ∞ for some x ∞ ∈ X . 
Observe that for every t ≥ t 0 , x(t) -x ∞ = +∞ t0 Λ(s, t) (x(s) -x ∞ ) ds by using ( 19) ≤ +∞ t0 Λ(s, t) x(s) -x ∞ ds. (55) Fix ε > 0 and let T ≥ t 0 be such that x(t) -x ∞ ≤ ε for every t ≥ T . From (55), we obtain x(t) -x ∞ ≤ sup t∈[t0,T ] x(t) -x ∞ T t0 Corollary 2 . 5 . 25 Let λ > 0 and let B : H → H be a λ-cocoercive operator such that zerB = ∅. Given a differentiable function γ : [t 0 , +∞[→ R + satisfying (H 0 ), let Γ, ∆ : [t 0 , +∞[→ R + be the functions respectively defined by Γ(s) = p(s) u) du . Assume that there exists ε ∈]0, 1[ such that for s large enough, )γ(s) = +∞ and | λ(t)| = O (1/γ(t)) as t → +∞ amount respectively to q + r ≤ 1. Items (i)-(vii) are immediate consequences of the corresponding points in Corollary 3.3. Corollary 5 . 4 . 54 Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ) and (K 1 ). +∞ t0 Λ t0 (s, t) x(s) ds ≤ M +∞ t0 Λ(s, t) ds = M.Since X is complete, we classically deduce that the integral +∞ t0 Λ(s, t) x(s) ds is convergent. From the definition of x(t), we then have x(t) ≤ +∞ t0 ) = e We obtain p(t) ẋ(t) = ẋ(t 0 ) for every t ≥ t 0 . By integrating again, we findx(t) = x(t 0 ) + t t 0 γ(τ ) dτ and integrate on [t 0 , t]. t t0 ds p(s) ẋ(t 0 ). +∞ t0 ds p(s) < +∞. It ensues immediately that the trajectory x(.) converges if and only if ẋ(t 0 ) = 0 or (H 0 ) Proposition 2.1. Let A : H → 2 H be a maximally monotone operator, and let γ : [t 0 , +∞[→ R + and λ : [t 0 , +∞[→ R * + be continuous functions. Then, for any x 0 ∈ H, v 0 ∈ H, there exists a unique global solution + be the function defined by Γ(s) = monotone convergence theorem then implies that t +∞ (3) lim t→+∞ t0 Γ(s, t) ds = lim t→+∞ t0 Γ(s, t) ds = +∞ du s p(u) ∈ [t 0 , +∞[, (2) Γ(s, t) = s t du p(u) p(s) if s ≤ t, and Γ(s, t) = 0 if s > t. For each s ∈ [t 0 , +∞[, the quantity Γ(s, t) tends increasingly toward Γ(s) as t → +∞. The +∞ t0 Γ(s) ds, since Γ(s, t) = 0 for s ≥ t. Let us state the main result of this section. Theorem 2.3. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Let γ : [t 0 , +∞[→ R + and λ : [t 0 , +∞[→ R * + be differentiable functions. Assuming (H 0 ), let Γ : [t 0 , +∞[→ R 2 ds < +∞, and as a consequence +∞ Γ(s) ẋ(s) 2 ds < +∞. t0 (ii) (iv) There exists a positive constant C such that for t large enough, ẋ(t) ≤ C p(t) t t0 p(s) λ(s) ds and ẍ(t) ≤ C γ(t) p(t) t t0 p(s) λ(s) ds + C λ(t) . Assuming that (H 2 ) λ(t) p(t) t t0 p(s) λ(s) ds = O(Γ(t)) and | λ(t)| = O(Γ(t)) as t → +∞, (H 3 ) +∞ t0 Γ(s) λ(s) ds = +∞, the following holds (v) lim t→+∞ λ(t)A λ(t) (x(t)) = 0. (vi) If λ(•) is minorized by some positive constant on [t 0 , +∞[, then there exists x ∞ ∈ zerA such that x(t) x ∞ weakly in H as t → +∞. Finally assume that (H 3 ) is not satisfied, i.e. +∞ t0 Γ(s) λ(s) ds < +∞. Then we obtain +∞ (vii) ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H. t0 Proof. (i) Let z ∈ zerA, and let us set h(t) = 1 2 x(t) -z 2 for every t ≥ t 0 . By differentiating, we find for every t ≥ t 0 , ḣ(t) = ẋ(t), x(t) -z and ḧ(t) = ẋ(t) 2 + ẍ(t), x(t) -z . +∞ t0 λ(s)Γ(s) A λ(s) (x(s)) 2 ds < +∞. (iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded. Combining Theorem 2.3 and Propositions 3.1 and 3.2, we obtain the following result. Corollary 3.3. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Let γ, λ : [t 0 , +∞[→ R * + be two differentiable functions satisfying conditions (13) for some c ∈ [0, 1[ and c in a discrete setting. 
In order to give a continuous ergodic version, let us consider a measurable function Λ : [t 0 , +∞[×[t 0 , +∞[→ R + satisfying the following assumptions To each bounded map x : [t 0 , +∞[→ H, we associate the averaged map x : [t 0 , +∞[→ H by +∞ (19) Λ(s, t) ds = 1 for every t ≥ t 0 , t0 T (20) lim t→+∞ t0 Λ(s, t) ds = 0 for every T ≥ t 0 . +∞ (21) x(t) = t0 of Proposition 4.1 are satisfied for the function Λ : [t 0 , +∞[×[t 0 , +∞[→ R + given by[START_REF] May | Asymptotic for a second order evolution equation with convex potential and vanishing damping term[END_REF]. Property[START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF] clearly holds true. Observe that for every T ≥ t 0 , ds is finite and independent of t. On the other hand, from the assumption T t0 Λ(s, t) ds = T t0 Γ(s, t) ds t t0 Γ(u, t) du ≤ T t0 Γ(s) ds t0 Γ(u, t) du t . The quantity t0 Γ(s) +∞ T t0 + satisfies (H 0 ). For s, t ≥ t 0 , let Γ(s) and Γ(s, t) be the quantities respectively defined by (1) and (2). Assume that conditions (H 1 )-(H 2 )-(H 3 ) hold, together with +∞ t0 Theorem 4.3. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅ and let λ : [t 0 , +∞[→ R * + be a differentiable function. Suppose that the differentiable function γ : [t 0 , +∞[→ R Γ(s) ds = +∞. Then for any global solution x(.) of (RIMS) γ,λ , there exists x ∞ ∈ zerA such that x(t) = 1 t t0 Γ(s, t) ds t t0 Γ(s, t)x(s) ds x ∞ weakly in H as t → +∞. Proof. We apply Corollary 4.2 with S = zerA. Condition (i) of Corollary 4.2 is realized in view of Theorem 2.3 (iii). Let us now assume that there exist x ∞ ∈ H and a sequence (t n ) such that t n → +∞ and x(t n ) Then for any global solution x(.) of (RIMS) γ,λ , there exists x ∞ ∈ zerA such that The latter result still holds true if the function Γ in the above quotient is replaced with a function Γ : [t 0 , +∞[→ R + such that Γ(s) ∼ Γ(s) as s → +∞.Proof. We are going to show that lim t→+∞ x(t) -x(t) = 0, where x is the averaged trajectory of Theorem 4.3. For that purpose, we use Lemma B.5 with the functions Λ 1 , Λ 2 : [t 0 , +∞[×[t 0 , +∞[→ R + x(t) = x respectively defined by t 1 t t0 Γ(s) ds t0 Γ(s)x(s) ds Λ 1 (s, t) = Γ(s, t) t0 Γ(u, t) du t , Λ 2 (s, t) = Γ(s) t0 Γ(u) du t , if s ≤ t, and Λ 1 (s, t) = Λ 2 (s, t) = 0 if s > t. The functions Λ 1 and Λ 2 clearly satisfy property (19). Let us now check that +∞ (26) lim t→+∞ t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds = 0. t t0 Γ(u) du + Γ(s, t) -Γ(s) t t0 Γ(u) du and hence |Λ +∞ t ds p(s) t0 Γ(s) ds t t0 p(s) ds t . Theorem 4.4. Under the hypotheses of Theorem 4.3, assume moreover that (25) t +∞ ds p(s) t t0 p(s) ds = o t t0 Γ(s) ds as t → +∞. s) ds t t0 Γ(s)x(s) ds, for every t ≥ t 0 . The next result gives sufficient conditions that ensure the weak convergence of x(t) as t → +∞ toward a zero of A. ∞ weakly in H as t → +∞. For s ≤ t, we have Λ 1 (s, t) -Λ 2 (s, t) = Γ(s, t) t t0 Γ(u, t) du t t0 (Γ(u) -Γ(u, t)) du 1 (s, t) -Λ 2 (s, t)| ≤ Γ(s, t) t t0 Γ(u, t) du t t0 |Γ(u) -Γ(u, t)| du t t0 Γ(u) du + |Γ(s, t) -Γ(s)| t t0 Γ(u) du . By integrating on [t 0 , t], we find t t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds ≤ 2 t t0 |Γ(s, t) -Γ(s)| ds t t0 Γ(s) ds . Recalling that Λ 1 (s, t) = Λ 2 (s, t) = 0 for s > t, this implies that +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds ≤ 2 t t0 |Γ(s, t) -Γ(s)| ds t t0 Γ(s) ds . From the expression of Γ(s) and Γ(s, t), see (1) and (2), we immediately deduce that +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds ≤ 2 as s → +∞. 
Let us denote by Λ 2 the function defined by (s, t) = 0 if s > t. The corresponding averaged trajectory is denoted by x. By arguing as above, we obtain that and Λ 2 +∞ t0 | Λ 2 (s, t) -Λ 2 (s, t)| ds ≤ 2 t t0 | Γ(s) -Γ(s)| ds t0 Γ(s) ds t . Then, using the estimate t t | Γ(s) -Γ(s)| ds = o Γ(s) ds as t → +∞, t0 t0 we deduce that +∞ | Λ 2 (s, t) -Λ 2 (s, t)| ds -→ 0 as t → +∞. t0 In view of Lemma B.5, this implies that lim t→+∞ x(t) -x(t) = 0, which ends the proof. Let us now apply Theorem 4.4 to the class of differentiable functions γ, λ : [t 0 , +∞[→ R * + satisfying (27) lim t→+∞ γ(t) γ(t) 2 = 0 and lim t→+∞ d dt (λ(t)γ(t)) λ(t)γ(t) 2 = 0. Λ 2 (s, t) = Γ(s) t0 Γ(u) du t if s ≤ t, Corollary 4.5. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Let γ, λ : [t 0 , +∞[→ R * + be two differentiable functions satisfying conditions [START_REF] Peypouquet | Convex optimization in normed spaces: theory, methods and examples[END_REF] . Assume that (a) lim inf t→+∞ as t → +∞. It ensues that the first condition of (H 2 ) is automatically satisfied, while the second one is given by (b). Condition (H 3 ) is implied by the assumption (c). In the same way, condition ds = +∞ is guaranteed by the assumption (d). It remains to establish condition (25) of Theorem 4.4. By applying Proposition 3.1 (ii) with λ(t) ≡ 1 and c = 0, we obtain +∞, the above result holds true with the function 1/γ in place of Γ, see the last assertion of Theorem 4.4. It ensues that condition (25) amounts to 1 γ(t) 2 = o t t0 Γ(s) ds as t → +∞, which is in turn equivalent to (29) 1 γ(t) 2 = o t t0 ds γ(s) as t → +∞. Since lim t→+∞ γ(t)/γ(t) 2 = 0, we have -γ(t)/γ(t) 3 = o(1/γ(t)) as t → +∞. By integrating on [t 0 , t], we obtain 1 2γ(t) 2 t t0 = t t0 - γ(s) γ(s) 3 ds = o t t0 ds γ(s) as t → +∞, because +∞ t0 ds γ(s) = +∞ by assumption. It ensues that condition (29) is fulfilled, hence all the hypotheses of Theorem 4.4 are satisfied. We deduce that there exists x ∞ ∈ zerA such that 1 t0 Γ(s) ds t t t0 Γ(s)x(s) ds x ∞ weakly in H as t → +∞. Since Γ(t) ∼ 1/γ(t) as t → t 1 γ(s) ds x +∞ t t0 t t0 x(s) γ(s) ds ds p(s) ∼ 1 p(t)γ(t) and t t0 p(s) λ(s) ds ∼ p(t) λ(t)γ(t) as t → +∞, thus implying that Γ(t) ∼ 1 γ(t) +∞ t0 Γ(s) t t0 p(s) ds ∼ p(t) γ(t) as t → +∞. In view of the first equivalence of (28), we infer that t +∞ ds p(s) t t0 p(s) ds ∼ 1 γ(t) 2 as t → +∞. ∞ weakly in H as t → +∞. Proof. Let us check that the assumptions of Theorem 4.4 are satisfied. Assumption (H 0 ) is verified in view of Proposition 3.1 (i) applied with c = 0. Since lim inf t→+∞ λ(t)γ(t) 2 > 1, condition (H 1 ) holds true by Proposition 3.2 used with c = c = 0. On the other hand, Proposition 3.1 shows that [START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF] 5.2.2. Anchor. Given z ∈ H, we define h z : [t 0 , +∞[→ R by Lemma 5.2. For each z ∈ H and all t ≥ t 0 , we have ḧz (t) + γ(t) ḣz (t) + x(t) -z, ∇Φ λ(t) (x(t)) = ẋ(t) 2 Rate of convergence of the values. Let x : [t 0 , +∞[→ H be a solution of (RIGS) γ,λ . Let us fix x ∈ argmin Φ, and set h= h x , that is, h : [t 0 , +∞[→ R + satisfies h(t) = 1 2 x(t) -x 2 . We define the function p : [t 0 , +∞[→ R + by p(t) = eThe following rate of convergence analysis is based on the decreasing properties of the function E, that will serve us as a Lyapunov function. Proposition 5.3 (Decay of E). Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ). The energy function E : [t 0 , +∞[→ R + satisfies for every t ≥ t 0 , Proof. 
By differentiating the function E, as expressed in (38), we obtain h z (t) = Ė(t) = Γ(t) 2 Ẇ (t) + 2Γ(t) Γ(t)W (t) + (1 + Γ(t)) ḣ(t) + Γ(t) ḧ(t). 1 x(t) -z 2 . 2 We have the following: Taking into account the expression of W and Ẇ , along with equalities (35) and (37), we obtain Ė(t) = Γ(t) 2 Ẇ (t) + 2Γ(t) Γ(t)W (t) + Γ(t)( ḧ(t) + γ(t) ḣ(t)) (35) (36) In particular, if z ∈ argmin Φ, then = -Γ(t) 2 γ(t) ẋ(t) 2 + λ(t) 2 ḧz (t) + γ(t) ḣz (t) + Φ λ(t) (x(t)) -Φ λ(t) (z) ≤ ẋ(t) 2 . ∇Φ λ(t) (x(t)) 2 + 2Γ(t) Γ(t) 1 2 ẋ(t) 2 + Φ λ(t) (x(t)) -min H +Γ(t) ẋ(t) 2 -∇Φ λ(t) (x(t)), x(t) -x Φ ≤ ḧz (t) + γ(t) ḣz (t) ≤ ẋ(t) 2 . ẋ(t) 2 Γ(t)(-Γ(t)γ(t) + Γ(t) + 1) + 2Γ(t) Γ(t)(Φ λ(t) (x(t)) -min H Proof. First observe that ḣz (t) = x(t) -z, ẋ(t) and ḧz (t) = x(t) -z, ẍ(t) + ẋ(t) 2 . By (RIGS) γ,λ and the convexity of Φ λ(t) , it ensues that ḧz (t) + γ(t) ḣz (t) = ẋ(t) 2 + x(t) -z, -∇Φ λ(t) (x(t)) ≤ ẋ(t) 2 + Φ λ(t) (z) -Φ λ(t) (x(t)), which is precisely (35)-(36). The last statement follows from the fact that argmin Φ λ = argmin Φ for all λ > 0. 5.3. t t 0 γ(τ ) dτ . Under the assumption (H 0 ) +∞ t0 ds p(s) < +∞, the function Γ : [t 0 , +∞[→ R + is defined by Γ(t) = p(t) +∞ t ds p(s) . Clearly, the function Γ is of class C 1 and satisfies (37) Γ(t) = γ(t)Γ(t) -1, t ≥ t 0 . Let us define the function E : [t 0 , +∞[→ R by (38) E(t) = Γ(t) 2 W (t) + h(t) + Γ(t) ḣ(t) = Γ(t) 2 1 2 ẋ(t) 2 + Φ λ(t) (x(t)) -min H Φ + 1 2 x(t) -x 2 + Γ(t) ẋ(t), x(t) -x (39) = Γ(t) 2 Φ λ(t) (x(t)) -min H Φ + 1 2 x(t) -x + Γ(t) ẋ(t) 2 . (40) Ė(t) + Γ(t) (3 -2γ(t)Γ(t)) Φ λ(t) (x(t)) -min H Φ ≤ 0. Under the assumption (K 1 ) There exists t 1 ≥ t 0 such that γ(t)Γ(t) ≤ 3/2 for every t ≥ t 1 , then we have Ė(t) ≤ 0 for every t ≥ t 1 . There exist t 1 ≥ t 0 and m < 3/2 such that γ(t)Γ(t) ≤ m for every t ≥ t 1 .Proof. (i) From Proposition 5.3, the function E is nonincreasing on [t 1 , +∞[. It ensues that E(t) ≤ E(t 1 ) for every t ≥ t 1 . Taking into account the expression (39), we deduce that for every t ≥ t 1 , Proposition 5.5. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ) and (K + 1 ). Then, we have Let θ : [t 0 , +∞[→ R + be a differentiable test function, and let t 1 ≥ t 0 be given by the assumption (K + 1 ). Let us multiply the inequality (42) by θ(t) and integrate on [t 1 , t] Theorem 5.6. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ), (K + 1 ), along with(K 2 )In particular, we obtain lim t→+∞ Φ(ξ(t)) = min H Φ, and lim t→+∞ ẋ(t) = 0. Since E(t) ≥ 0 and γ(t)Γ(t) ≤ m for every t ≥ t 1 , this implies that t +∞ (3 -2m) t1 Γ(s) (Φ λ(s) (x(s)) -min t0 H Γ(s) ds = +∞. Φ) ds ≤ E(t 1 ). The inequality (41) is obtained by letting t tend toward infinity. Let x(.) be a solution of (RIGS) γ,λ . Then we have 1/2 (i) As a consequence, setting ξ(t) = prox λ(t)Φ (x(t)), we have +∞ t0 +∞ Φ λ(t) (x(t)) -min H Φ = o 1 t t0 Γ(s) ds and ẋ(t) = o Γ(t) ẋ(t) 2 dt < +∞, and hence t0 Γ(t) W (t) dt < +∞; t0 Γ(s) ds 1 t as t → +∞. (ii) Proof. By (34) and λ nondecreasing we have +∞ t0 γ(t) t t0 Γ(s) ds ẋ(t) 2 dt < +∞. (44) Φ(ξ(t)) -min H Φ = o 1 t t0 Γ(s) ds and x(t) -ξ(t) = o λ(t) t0 Γ(s) ds t 1/2 as t → +∞. Ẇ (t) ≤ -γ(t) ẋ(t) 2 . Proof. From Proposition 5.5 (i), we have (42) +∞ t0 Γ(t) W (t) dt < +∞. On the other hand, the energy function W is nonincreasing by Proposition 5.1. By applying Lemma B.3 in the Appendix, we obtain t t1 The announced estimates follow immediately. θ(s) Ẇ (s) ds + that W (t) = o t t0 Γ(s) ds t t1 θ(s)γ(s) ẋ(s) 2 ds ≤ 0. 1 as t → +∞. 
Integrating by parts yields t t (43) θ(t)W (t) + θ(s)γ(s) ẋ(s) 2 ds ≤ θ(t 1 )W (t 1 ) + θ(s)W (s) ds. t1 t1 Using the expression of W and rearranging the terms, we find t t θ(t)W (t) + t1 2 . θ(s)γ(s) -θ(s)/2 ẋ(s) 2 ds ≤ θ(t 1 )W (t 1 ) + t1 θ(s)(Φ λ(s) (x(s)) -min As a consequence, setting ξ(t) = prox λ(t)Φ (x(t)), we have Φ(ξ(t)) -min Φ ≤ E(t 1 ) Γ(t) 2 and x(t) -ξ(t) 2 ≤ 2λ(t) Γ(t) 2 E(t 1 ). (ii) Assume moreover that λ(t) t Proof. (i) Since sup t≥t0 t 0 (K + 1 ) Then we have (41) satisfies the following differential inequality +∞ Γ(t) (Φ λ(t) (x(t)) -min H ḧ(t) + γ(t) ḣ(t) ≤ ẋ(t) 2 . Φ) dt ≤ E(t 1 ) 3 -2m t1 From Proposition 5.5 (i), we have +∞ t0 Γ(s) ẋ(s) < +∞. Under (K + 1 ), we have the estimate +∞ t1 Γ(s)(Φ λ(s) (x(s)) -min H Φ) ds < +∞, see Corollary 5.4 (ii). The Γ(t) 2 (Φ λ(t) (x(t)) -min H announced estimates follow immediately. Φ) ≤ E(t 1 ) and (ii) Take now θ(t) = t t0 Γ(s) ds. Recalling that W (t) ≥ 0, inequality (43) then implies that for every 1 x(t) -x + Γ(t) ẋ(t) 2 ≤ E(t 1 ). 2 t ≥ t 1 , The first assertion follows immediately. t s t1 t (ii) Now assume (K + 1 ). By integrating (40) on [t 1 , t], we find γ(s) Γ(u) du ẋ(s) 2 ds ≤ Γ(s) ds W (t 1 ) + Γ(s)W (s) ds. t1 t0 t t0 t1 E(t) + It suffices then to recall that t1 Φ λ(s) (x(s)) -min H +∞ t1 Γ(s)W (s) ds < +∞ under hypothesis (K + Φ Γ(s)(3 -2γ(s)Γ(s)) ds ≤ E(t 1 ). 1 ), see point (i). H Φ) ds. (i) Choosing θ(t) = Γ(t) 2 , the above equality gives for every t ≥ t 1 , Γ(t) 2 W (t) + t t1 Γ(s)[Γ(s)γ(s) -Γ(s)] ẋ(s) 2 ds ≤ Γ(t 1 ) 2 W (t 1 ) + 2 t t1 Γ(s) Γ(s)(Φ λ(s) (x(s)) -min H Φ) ds. Recalling that Γ = γΓ -1, we deduce that Γ(t) 2 W (t) + t t1 Γ(s) ẋ(s) 2 ds ≤ Γ(t 1 ) 2 W (t 1 ) + 2 t t1 Γ(s)(γ(s)Γ(s) -1)(Φ λ(s) (x(s)) -min H Φ) ds. By assumption (K + 1 ), we have γ(t)Γ(t) ≤ 3/2 for every t ≥ t 1 . Since W (t) ≥ 0, it ensues that t t1 Γ(s) ẋ(s) 2 ds ≤ Γ(t 1 ) 2 W (t 1 ) + t t1 Γ(s)(Φ λ(s) (x(s)) -min H Φ) ds. Theorem 5.7. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ), (K + 1 ), and (K 2 ). Suppose that λ : [t 0 , +∞[→ R * + is nondecreasing and satisfies sup t≥t0 λ(t) t t0 Γ(s) ds < +∞. Then, for every solution x(.) of (RIGS) γ,λ the following properties hold: (i) lim t→+∞ ξ(t) -x(t) = 0, where ξ(t) = prox λ(t)Φ (x(t)); (ii) x(t) converges weakly as t → +∞ toward some x * ∈ argmin Φ. Γ(s) ds < +∞, the second estimate of (44) implies that lim t→+∞ ξ(t) -x(t) = 0. (ii) We apply the Opial lemma, see Lemma 2.2. Let us fix x ∈ argmin Φ, and show that lim t→+∞ x(t)-x exists. For that purpose, let us set h(t) = 1 2 x(t) -x 2 . Recall from Lemma 5.2 that the function h 2 ds < +∞. By applying Lemma B.1 with g : B.1. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying ) dτ . Let g : [t 0 , +∞[→ R be a continuous function. Assume that h : [t 0 , +∞[→ R + is a function of class C 2 satisfying . Then the nonnegative part ḣ+ of ḣ belongs to L 1 (t 0 , +∞), and hence lim t→+∞ h(t) exists. Proof. (i) Let us multiply each member of inequality (48) by p(t) = e ) dτ and integrate on [t 0 , t]. By integrating again on [t 0 , t], we find h(t) ≤ h(t 0 ) + ḣ(t 0 ) |g(s)| ds < +∞. We easily deduce from (50) that for every t ≥ t 0 , < +∞ by assumption, we deduce from (51) and (52) that ḣ+ ∈ L 1 (t 0 , +∞). Hence lim t→+∞ h(t) exists. Let us now state a vector-valued version of Lemma B.1. Lemma B.2. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying < +∞, where the function p is defined by p(t) = e ) dτ . Let F : [t 0 , +∞[→ H be a measurable map such that +∞ t0 Γ(t) F (t) dt < +∞. 
Assume that x : [t 0 , +∞[→ H is a map of class C 2 satisfying ) dτ and integrate on [t 0 , t]. We obtain for every t ≥ t 0 , By integrating and applying Fubini theorem as in the proof of Lemma B.1, we find F (s) ds < +∞. The strong convergence of x(t) as t → +∞ follows immediately. Owing to the next lemma, we can estimate the rate of convergence of a function w : [t 0 , +∞[→ R + supposed to be nonincreasing and summable with respect to a weight function Γ. Lemma B.3. Let Γ : [t 0 , +∞[→ R + be a measurable function such that +∞ t0 Γ(t) dt = +∞. Assume that w : [t 0 , +∞[→ R + is nonincreasing and satisfies Proof. Let F : [t 0 , +∞[→ R + be the function defined by F (t) = t t0 Γ(s) ds. It follows from the hypothesis +∞ t0 Γ(s) ds = +∞ that the function F is an increasing bijection from [t 0 , +∞[ onto [0, +∞[. For every t ≥ t 0 , let us set α(t) = F -1 ( 1 2 F (t)). By definition, we have +∞ t0 p(s)g(s) ds. ds p(s) < +∞, where the +∞ t ds p(s) t t 0 γ(τ We obtain function p is defined by p(t) = e t t 0 γ(τ (48) ḧ(t) + γ(t) ḣ(t) ≤ g(t) on [t 0 , +∞[. (i) For every t ≥ t 0 , we have (49) h(t) ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 t s du p(u) (ii) Assume that +∞ t0 t t0 du p(u) + t t0 1 p(u) u t0 p(s) g(s) ds du. From Fubini theorem, we have t t0 1 p(u) u t0 p(s) g(s) ds du = t t0 t s du p(u) p(s)g(s) ds, and the inequality (49) follows immediately. (ii) Let us now assume that +∞ t0 Γ(s) (51) ḣ+ (t) ≤ | ḣ(t 0 )| 1 p(t) + 1 p(t) t t0 p(s) |g(s)| ds. By applying Fubini theorem, we find +∞ t0 1 p(t) t t0 p(s) |g(s)| ds dt = +∞ t0 +∞ s dt p(t) p(s) |g(s)| ds = +∞ t0 Γ(s) |g(s)| ds < +∞. (52) Since +∞ t0 dt p(t) +∞ t0 ds p(s) t t 0 γ(τ (53) t t 0 γ(τ ẋ(t) = ẋ(t 0 ) 1 p(t) + 1 p(t) t t0 p(s) F (s) ds. Taking the norm of each member, we deduce that ẋ(t) ≤ ẋ(t 0 ) 1 p(t) + 1 p(t) t t0 p(s) F (s) ds. +∞ t0 ẋ(t) dt ≤ ẋ(t 0 ) +∞ t0 dt t0 p(t) + +∞ t0 Γ(t)w(t) dt < +∞. Then we have w(t) = o 1 t t0 Γ(s) ds as t → +∞. α(t) t0 Γ(s) ds = 1 2 t t0 Γ(s) ds, hence t α(t) Γ(s) ds = 1 2 t t0 Γ(s) ds. Recalling that the function w is nonincreasing, we obtain t α(t) Γ(s) w(s) ds ≥ w(t) t α(t) Γ(s) ds = 1 2 w(t) t t0 Γ(s) ds. By assumption, we have +∞ t0 Γ(s)w(s) ds < +∞. Since lim t→+∞ α(t) = +∞, we deduce that t Γ(s) +∞ lim t→+∞ α(t) (50) ḣ(t) ≤ ḣ(t 0 ) 1 p(t) + 1 p(t) t t0 p(s) g(s) ds. Γ(s) |g(s)| ds < +∞, where Γ : [t 0 , +∞[→ R + is given by Γ(t) = p(t) ẍ(t) + γ(t) ẋ(t) = F (t) on [t 0 , +∞[. Then ẋ ∈ L 1 (t 0 , +∞), and hence x(t) converges strongly as t → +∞. Proof. Let us multiply (53) by p(t) = e with M = sup t≥t0 x(t) -x ∞ < +∞. Taking the upper limit as t → +∞, we deduce from property (20) that lim sup Since this is true for every ε > 0, we conclude that lim t→+∞ x(t) -x ∞ = 0.Lemma B.5. Let (X , . ) be a Banach space, and let x : [t 0 , +∞[→ X be a continuous map, supposed to be bounded on [t 0 , +∞[. Let Λ 1 , Λ 2 : [t 0 , +∞[×[t 0 , +∞[→ R + be measurable functions satisfying[START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF]. (s, t) -Λ 2 (s, t)| ds = 0.Let us consider the averaged trajectories x 1 , x 2 : [t 0 , +∞[→ X defined byx 1 (t) =Then we have lim t→+∞ x 1 (t) -x 2 (t) = 0.Proof. Let M ≥ 0 be such that x(t) ≤ M for every t ≥ t 0 . Observe thatx 1 (t) -x 2 (t) = Assume that +∞ (56) lim t→+∞ |Λ 1 +∞ t0 +∞ Λ 1 (s, t) x(s) ds and x 2 (t) = Λ 2 (s, t) x(s) ds. t0 t0 +∞ Λ(s, t) ds T T ≤ M Λ(s, t) ds + ε, t0 Λ(s, t) ds + ε t→+∞ x(t) -x ∞ ≤ ε. 
+∞ t0 (Λ 1 (s, t) -Λ 2 (s, t))x(s) ds ≤ +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t) | x(s) ds ≤ M +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t))| ds -→ 0 as t → +∞, in view of (56). Appendix A. Yosida regularization and Moreau envelopes A.1. Yosida regularization of an operator A. A set-valued mapping A from H to H assigns to each x ∈ H a set A(x) ⊂ H, hence it is a mapping from H to 2 H . Every set-valued mappping A : H → 2 H can be identified with its graph defined by The set {x ∈ H : 0 ∈ A(x)} of the zeros of A is denoted by zerA. An operator A : H → 2 H is said to be monotone if for any (x, u), (y, v) ∈ gphA, one has y -x, v -u ≥ 0. It is maximally monotone if there exists no monotone operator whose graph strictly contains gphA. If a single-valued operator A : H → H is continuous and monotone, then it is maximally monotone, cf. [START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF]Proposition 2.4]. Given a maximally monotone operator A and λ > 0, the resolvent of A with index λ and the Yosida regularization of A with parameter λ are defined by respectively. The operator J λA : H → H is nonexpansive and eveywhere defined (indeed it is firmly non-expansive). Moreover, A λ is λ-cocoercive: for all x, y ∈ H we have This property immediately implies that A λ : H → H is 1 λ -Lipschitz continuous. Another property that proves useful is the resolvent equation (see, for example, [15, Proposition 2.6] or [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF]Proposition 23.6]) which is valid for any λ, µ > 0. This property allows to compute simply the resolvent of A λ by for any λ, µ > 0. Also note that for any x ∈ H, and any λ > 0 Finally, for any λ > 0, A and A λ have the same solution set S := A -1 λ (0) = A -1 (0). For a detailed presentation of the properties of the maximally monotone operators and the Yosida approximation, the reader can consult [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF] or [START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF]. A.2. Differentiability properties of the Moreau envelopes. Lemma A.1. For each x ∈ H, the real-valued function λ → Φ λ (x) is continuously differentiable on ]0, +∞[, with Proof. By definition of Φ λ , we have where the infimum in the above expression is achieved at J λ (x) := (I + λ∂Φ) -1 (x). Let us prove that (47) d dλ λΦ λ (x) = Φ(J λ (x)).
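As a quick numerical sanity check of the properties just listed (λ-cocoercivity of A_λ and the resolvent equation), here is a one-dimensional sketch for A = ∂|·|, whose resolvent is the soft-thresholding map; this example is our own and is not part of the text above.

```python
import numpy as np

def J(x, lam):                         # resolvent (I + lam*A)^(-1) for A = d|.| : soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def yosida(x, lam):                    # A_lam = (I - J_{lam A}) / lam  (here equal to clip(x/lam, -1, 1))
    return (x - J(x, lam)) / lam

rng = np.random.default_rng(1)
x, y = 3.0 * rng.standard_normal(1000), 3.0 * rng.standard_normal(1000)
lam, mu = 0.7, 0.3

# lam-cocoercivity:  (A_lam x - A_lam y)(x - y) >= lam * (A_lam x - A_lam y)^2
gap = (yosida(x, lam) - yosida(y, lam)) * (x - y) - lam * (yosida(x, lam) - yosida(y, lam)) ** 2
print(gap.min() >= -1e-12)             # expected: True

# resolvent equation:  J_{lam A} x = J_{mu A}( (mu/lam) x + (1 - mu/lam) J_{lam A} x )
print(np.allclose(J(x, lam), J(mu / lam * x + (1 - mu / lam) * J(x, lam), mu)))   # expected: True
```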
size: 73,059
authorids: ["754031"]
affiliations: ["424479"]

halid: 01697117
lang: en
domain: ["math", "qfin"]
timestamp: 2024/03/05 22:32:13
year: 2019
url: https://hal.science/hal-01697117v3/file/AJEE_20180407_Final.pdf
Eduardo Abi Jaber email: [email protected] Omar El Euch email: [email protected] Multi-factor approximation of rough volatility models Keywords: Rough volatility models, rough Heston models, stochastic Volterra equations, affine Volterra processes, fractional Riccati equations, limit theorems Rough volatility models are very appealing because of their remarkable fit of both historical and implied volatilities. However, due to the non-Markovian and non-semimartingale nature of the volatility process, there is no simple way to simulate efficiently such models, which makes risk management of derivatives an intricate task. In this paper, we design tractable multi-factor stochastic volatility models approximating rough volatility models and enjoying a Markovian structure. Furthermore, we apply our procedure to the specific case of the rough Heston model. This in turn enables us to derive a numerical method for solving fractional Riccati equations appearing in the characteristic function of the log-price in this setting. Introduction Empirical studies of a very wide range of assets volatility time-series in [START_REF] Gatheral | Volatility is rough[END_REF] have shown that the dynamics of the log-volatility are close to that of a fractional Brownian motion W H with a small Hurst parameter H of order 0.1. Recall that a fractional Brownian motion W H can be built from a two-sided Brownian motion thanks to the Mandelbrot-van Ness representation W H t = 1 Γ(H + 1/2) t 0 (t -s) H-1 2 dW s + 1 Γ(H + 1/2) 0 -∞ (t -s) H-1 2 -(-s) H-1 2 dW s . The fractional kernel (t -s) H-1 2 is behind the H -ε Hölder regularity of the volatility for any ε > 0. For small values of the Hurst parameter H, as observed empirically, stochastic volatility models involving the fractional kernel are called rough volatility models. Aside from modeling historical volatility dynamics, rough volatility models reproduce accurately with very few parameters the behavior of the implied volatility surface, see [START_REF] Bayer | Pricing under rough volatility[END_REF][START_REF] Euch | Roughening Heston[END_REF], especially the at-the-money skew, see [START_REF] Fukasawa | Asymptotic analysis for stochastic volatility: Martingale expansion[END_REF]. Moreover, microstructural foundations of rough volatility are studied in [START_REF] Euch | The microstructural foundations of rough volatility and leverage effect[END_REF][START_REF] Jaisson | Rough fractional diffusions as scaling limits of nearly unstable heavy tailed hawkes processes[END_REF]. In this paper, we are interested in a class of rough volatility models where the dynamics of the asset price S and its stochastic variance V are given by dS t = S t V t dW t , S 0 > 0, (1.1) V t = V 0 + 1 Γ(H + 1 2 ) t 0 (t -u) H-1 2 (θ(u) -λV u )du + 1 Γ(H + 1 2 ) t 0 (t -u) H-1 2 σ(V u )dB u , (1.2) for all t ∈ [0, T ], on some filtered probability space (Ω, F, F, P). Here T is a positive time horizon, the parameters λ and V 0 are non-negative, H ∈ (0, 1/2) is the Hurst parameter, σ is a continuous function and W = ρB + 1 -ρ 2 B ⊥ with (B, B ⊥ ) a two-dimensional F-Brownian motion and ρ ∈ [-1, 1]. Moreover, θ is a deterministic mean reversion level allowed to be time-dependent to fit the market forward variance curve (E[V t ]) t≤T as explained in Section 2 and in [START_REF] Euch | Perfect hedging in rough Heston models[END_REF]. 
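As a simple illustration of the model (1.1)-(1.2), the following sketch simulates one path of the variance process with σ(x) = ν√x and constant θ, using a naive Euler-type discretization of the convolution on a uniform grid; the parameter values and the scheme itself are our own illustrative choices and are not taken from the paper.

```python
import numpy as np
from math import gamma

H, lam, nu, V0, theta = 0.1, 0.3, 0.3, 0.02, 0.02   # illustrative parameters, constant theta assumed
T, N = 1.0, 2000
dt = T / N
t = np.linspace(0.0, T, N + 1)
K = lambda u: u ** (H - 0.5) / gamma(H + 0.5)       # fractional kernel of (1.2)

rng = np.random.default_rng(0)
dB = rng.standard_normal(N) * np.sqrt(dt)
V = np.full(N + 1, V0)
for k in range(1, N + 1):
    w = K(t[k] - t[:k])                             # kernel weights over the whole past
    V[k] = max(V0 + np.sum(w * ((theta - lam * V[:k]) * dt + nu * np.sqrt(V[:k]) * dB[:k])), 0.0)

print(V[-1], V.min(), V.max())                      # one path of the rough variance on [0, T]
```

Even this crude scheme has O(N²) cost because the whole past enters each step, which is one more reason to look for Markovian approximations, as discussed below.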
Under some general assumptions, we establish in Section 2 the existence of a weak non-negative solution to the fractional stochastic integral equation in (1.2) exhibiting H -ε Hölder regularity for any ε > 0. Hence, this class of models is a natural rough extension of classical stochastic volatility models where the fractional kernel is introduced in the drift and stochastic part of the variance process V . Indeed, when H = 1/2, we recover classical stochastic volatility models where the variance process is a standard diffusion. Despite the fit to the historical and implied volatility, some difficulties are encountered in practice for the simulation of rough volatility models and for pricing and hedging derivatives with them. In fact, due to the introduction of the fractional kernel, we lose the Markovian and semimartingale structure. In order to overcome theses difficulties, we approximate these models by simpler ones that we can use in practice. In [START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | The microstructural foundations of rough volatility and leverage effect[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF], the rough Heston model (which corresponds to the case of σ(x) = ν √ x) is built as a limit of microscopic Hawkes-based price models. This allowed the understanding of the microstructural foundations of rough volatility and also led to the formula of the characteristic function of the log-price. Hence, the Hawkes approximation enabled us to solve the pricing and hedging under the rough Heston model. However, this approach is specific to the rough Heston case and can not be extended to an arbitrary rough volatility model of the form (1.1)-(1.2). Inspired by the works of [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF][START_REF] Carmona | Approximation of some Gaussian processes[END_REF][START_REF] Harms | Affine representations of fractional processes with applications in mathematical finance[END_REF][START_REF] Muravlëv | Representation of fractal Brownian motion in terms of an infinitedimensional Ornstein-Uhlenbeck process[END_REF], we provide a natural Markovian approximation for the class of rough volatility models (1.1)- (1.2). The main idea is to write the fractional kernel K(t) = t H-1 2 Γ(H+1/2) as a Laplace transform of a positive measure µ K(t) = ∞ 0 e -γt µ(dγ); µ(dγ) = γ -H-1 2 Γ(H + 1/2)Γ(1/2 -H) dγ. (1.3) We then approximate µ by a finite sum of Dirac measures µ n = n i=1 c n i δ γ n i with positive weights (c n i ) 1≤i≤n and mean reversions (γ n i ) 1≤i≤n , for n ≥ 1. This in turn yields an approxi-mation of the fractional kernel by a sequence of smoothed kernels (K n ) n≥1 given by K n (t) = n i=1 c n i e -γ n i t , n ≥ 1. This leads to a multi-factor stochastic volatility model (S n , V n ) = (S n t , V n t ) t≤T , which is Markovian with respect to the spot price and n variance factors (V n,i ) 1≤i≤n and is defined as follows dS n t = S n t V n t dW t , V n t = g n (t) + n i=1 c n i V n,i t , (1.4) where dV n,i t = (-γ n i V n,i t -λV n t )dt + σ(V n t )dB t , and g n (t) = V 0 + t 0 K n (t -s)θ(s)ds with the initial conditions S n 0 = S 0 and V n,i 0 = 0. Note that the factors (V n,i ) 1≤i≤n share the same dynamics except that they mean revert at different speeds (γ n i ) 1≤i≤n . 
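To make the construction concrete, here is a small numerical sketch of one possible choice of the weights and mean reversions (c_i^n, γ_i^n): the measure µ of (1.3) is split over a geometric grid, each cell contributing its mass as weight and its barycenter as mean reversion. This particular grid and its truncation are our own illustrative choices; Section 3 gives the general conditions under which the approximation converges.

```python
import numpy as np
from math import gamma as G

H = 0.1
c_mu = 1.0 / (G(H + 0.5) * G(0.5 - H))               # density of mu(dg) = c_mu * g^(-H-1/2) dg

def weights_nodes(n, g_min=1e-6, g_max=1e4):          # geometric grid: an illustrative choice
    edges = np.geomspace(g_min, g_max, n + 1)
    c = c_mu * (edges[1:] ** (0.5 - H) - edges[:-1] ** (0.5 - H)) / (0.5 - H)        # cell masses
    g = c_mu * (edges[1:] ** (1.5 - H) - edges[:-1] ** (1.5 - H)) / (1.5 - H) / c    # cell barycenters
    return c, g

t = np.linspace(0.01, 1.0, 100)
K_true = t ** (H - 0.5) / G(H + 0.5)                  # fractional kernel
for n in (5, 20, 80):
    c, g = weights_nodes(n)
    K_n = (c[None, :] * np.exp(-np.outer(t, g))).sum(axis=1)
    print(n, np.max(np.abs(K_n - K_true) / K_true))   # error decreases with n, down to the grid truncation
```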
Relying on existence results of stochastic Volterra equations in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF], we provide in Theorem 3.1 the strong existence and uniqueness of the model (S n , V n ), under some general conditions. Thus the approximation (1.4) is uniquely well-defined. We can therefore deal with simulation, pricing and hedging problems under these multi-factor models by using standard methods developed for stochastic volatility models. Theorem 3.5, which is the main result of this paper, establishes the convergence of the multifactor approximation sequence (S n , V n ) n≥1 to the rough volatility model (S, V ) in (1.1)-(1.2) when the number of factors n goes to infinity, under a suitable choice of the weights and mean reversions (c n i , γ n i ) 1≤i≤n . This convergence is obtained from a general result about stability of stochastic Volterra equations derived in Section 3. [START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF]. In [START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF], the characteristic function of the log-price for the specific case of the rough Heston model is obtained in terms of a solution of a fractional Riccati equation. We highlight in Section 4.1 that the corresponding multi-factor approximation (1.4) inherits a similar affine structure as in the rough Heston model. More precisely, it displays the same characteristic function formula involving a n-dimensional classical Riccati equation instead of the fractional one. This suggests solving numerically the fractional Riccati equation by approximating it through a n-dimensional classical Riccati equation with large n, see Theorem 4.1. In Section 4.2, we discuss the accuracy and complexity of this numerical method and compare it to the Adams scheme, see [START_REF] Diethelm | A predictor-corrector approach for the numerical solution of fractional differential equations[END_REF][START_REF] Diethelm | Detailed error analysis for a fractional Adams method[END_REF][START_REF] Diethelm | The fracpece subroutine for the numerical solution of differential equations of fractional order[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF]. The paper is organized as follows. In Section 2, we define the class of rough volatility models (1.1)-(1.2) and discuss the existence of such models. Then, in Section 3, we build a sequence of multi-factor stochastic volatility models of the form of (1.4) and show its convergence to a rough volatility model. By applying this approximation to the specific case of the rough Heston model, we obtain a numerical method for computing solutions of fractional Riccati equations that is discussed in Section 4. Finally, some proofs are relegated to Section 5 and some useful technical results are given in an Appendix. A definition of rough volatility models We provide in this section the precise definition of rough volatility models given by (1.1)-(1.2). We discuss the existence of such models and more precisely of a non-negative solution of the fractional stochastic integral equation (1.2). 
The existence of an unconstrained weak solution V = (V t ) t≤T is guaranteed by Corollary B.2 in the Appendix when σ is a continuous function with linear growth and θ satisfies the condition ∀ε > 0, ∃C ε > 0; ∀u ∈ (0, T ] |θ(u)| ≤ C ε u -1 2 -ε . (2.1) Furthermore, the paths of V are Hölder continuous of any order strictly less than H and sup t∈[0,T ] E[|V t | p ] < ∞, p > 0. (2.2) Moreover using Theorem B.4 together with Remarks B.5 and B.6 in the Appendix 1 , the existence of a non-negative continuous process V satisfying (1.2) is obtained under the additional conditions of non-negativity of V 0 and θ and σ(0) = 0. We can therefore introduce the following class of rough volatility models. Definition 2.1. (Rough volatility models) We define a rough volatility model by any R × R + - valued continuous process (S, V ) = (S t , V t ) t≤T satisfying dS t = S t V t dW t , V t = V 0 + 1 Γ(H + 1/2) t 0 (t -u) H-1 2 (θ(u) -λV u )du + 1 Γ(H + 1/2) t 0 (t -u) H-1 2 σ(V u )dB u , on a filtred probability space (Ω, F, F, P) with non-negative initial conditions (S 0 , V 0 ). Here T is a positive time horizon, the parameter λ is non-negative, H ∈ (0, 1/2) is the Hurst parameter and W = ρB + 1 -ρ 2 B ⊥ with (B, B ⊥ ) a two-dimensional F-Brownian motion and ρ ∈ [-1, 1]. Moreover, to guarantee the existence of such model, σ : R → R is assumed continuous with linear growth such that σ(0) = 0 and θ : [0, T ] → R is a deterministic non-negative function satisfying (2.1). As done in [START_REF] Euch | Perfect hedging in rough Heston models[END_REF], we allow the mean reversion level θ to be time dependent in order to be consistent with the market forward variance curve. More precisely, the following result shows that the mean reversion level θ can be written as a functional of the forward variance curve (E[V t ]) t≤T . Proposition 2.2. Let (S, V ) be a rough volatility model given by Definition 2.1. Then, (E[V t ]) t≤T is linked to θ by the following formula E[V t ] = V 0 + t 0 (t -s) α-1 E α (-λ(t -s) α )θ(s)ds, t ∈ [0, T ], (2.3 ) where α = H + 1/2 and E α (x) = k≥0 x k Γ(α(k+1)) is the Mittag-Leffler function. Moreover, (E[V t ]) t≤T admits a fractional derivative 2 of order α at each time t ∈ (0, T ] and θ(t) = D α (E[V . ] -V 0 ) t + λE[V t ], t ∈ (0, T ]. (2.4) 1 Theorem B.4 is used here with the fractional kernel K(t) = t H-1 2 Γ(H+1/2) together with b(x) = -λx and g(t) = V0 + t 0 K(t -u)θ(u)du. 2 Recall that the fractional derivative of order α ∈ (0, 1) of a function f is given by d dt t 0 (t-s) -α Γ(1-α) f (s)ds whenever this expression is well defined. Proof. Thanks to (2.2) together with Fubini theorem, t → E[V t ] solves the following fractional linear integral equation E[V t ] = V 0 + 1 Γ(H + 1/2) t 0 (t -s) H-1 2 (θ(s) -λE[V s ])ds, t ∈ [0, T ], (2.5) yielding (2.3) by Theorem A.3 and Remark A.5 in the Appendix. Finally, (2.4) is obviously obtained from (2.5). Finally, note that uniqueness of the fractional stochastic integral equation (1.2) is a difficult problem. Adapting the proof in [START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF], we can prove pathwise uniqueness when σ is η-Hölder continuous with η ∈ (1/(1+2H), 1]. This result does not cover the square-root case, i.e. σ(x) = ν √ x, for which weak uniqueness has been established in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF]. 
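As a quick illustration of Proposition 2.2, the following sketch (not part of the original text) evaluates the forward variance curve (2.3) for a constant mean-reversion level θ, using the series definition of the Mittag-Leffler function recalled above and a simple midpoint quadrature; all numerical values are placeholders.

```python
import numpy as np
from math import gamma

def mittag_leffler(x, alpha, n_terms=80):
    # E_alpha(x) = sum_k x^k / Gamma(alpha*(k+1)), as in Proposition 2.2
    return sum(x ** k / gamma(alpha * (k + 1)) for k in range(n_terms))

# Placeholder parameters, with a constant mean-reversion level theta.
H, lam, V0, theta = 0.1, 0.3, 0.02, 0.02
alpha = H + 0.5

def forward_variance(t, n_grid=400):
    # E[V_t] = V0 + int_0^t (t-s)^(alpha-1) * E_alpha(-lam*(t-s)^alpha) * theta ds
    s = (np.arange(n_grid) + 0.5) * t / n_grid   # midpoints avoid the endpoint singularity
    u = t - s
    vals = np.array([u_k ** (alpha - 1) * mittag_leffler(-lam * u_k ** alpha, alpha) for u_k in u])
    return V0 + theta * np.mean(vals) * t

print(forward_variance(1.0))
```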
Multi-factor approximation of rough volatility models Thanks to the small Hölder regularity of the variance process, models of Definition 2.1 are able to reproduce the rough behavior of the volatility observed in a wide range of assets. However, the fractional kernel forces the variance process to leave both the semimartingale and Markovian worlds, which makes numerical approximation procedures a difficult and challenging task in practice. The aim of this section is to construct a tractable and satisfactory Markovian approximation of any rough volatility model (S, V ) of Definition 2.1. Because S is entirely determined by ( • 0 V s ds, • 0 √ V s dW s ) , it suffices to construct a suitable approximation of the variance process V . This is done by smoothing the fractional kernel. More precisely, denoting by K(t) = t H-1 2 Γ(H+1/2) , the fractional stochastic integral equation (1.2) reads V t = V 0 + t 0 K(t -s) ((θ(s) -λV s )ds + σ(V s )dB s ) , which is a stochastic Volterra equation. Approximating the fractional kernel K by a sequence of smooth kernels (K n ) n≥1 , one would expect the convergence of the following corresponding sequence of stochastic Volterra equations V n t = V 0 + t 0 K n (t -s) ((θ(s) -λV n s )ds + σ(V n s )dB s ) , n ≥ 1, to the fractional one. The argument of this section runs as follows. First, exploiting the identity (1.3), we construct a family of potential candidates for (K n , V n ) n≥1 in Section 3.1 such that V n enjoys a Markovian structure. Second, we provide convergence conditions of (K n ) n≥1 to K in L 2 ([0, T ], R) in Section 3.2. Finally, the approximation result for the rough volatility model (S, V ) is established in Section 3.3 relying on an abstract stability result of stochastic Volterra equations postponed to Section 3.4 for sake of exposition. Construction of the approximation In [START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF][START_REF] Harms | Affine representations of fractional processes with applications in mathematical finance[END_REF][START_REF] Muravlëv | Representation of fractal Brownian motion in terms of an infinitedimensional Ornstein-Uhlenbeck process[END_REF], a Markovian representation of the fractional Brownian motion of Riemann-Liouville type is provided by writing the fractional kernel K(t) = t H-1 2 Γ(H+1/2) as a Laplace transform of a non-negative measure µ as in (1.3). This representation is extended in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF] for the Volterra square-root process. Adopting the same approach, we establish a similar representation for any solution of the fractional stochastic integral equation (1.2) in terms of an infinite dimensional system of processes sharing the same Brownian motion and mean reverting at different speeds. Indeed by using the linear growth of σ together with the stochastic Fubini theorem, see [START_REF] Veraar | The stochastic Fubini theorem revisited[END_REF], we obtain that V t = g(t) + ∞ 0 V γ t µ(dγ), t ∈ [0, T ], with dV γ t = (-γV γ t -λV t )dt + σ(V t )dB t , V γ 0 = 0, γ ≥ 0, and g(t) = V 0 + t 0 K(t -s)θ(s)ds. 
(3.1) Inspired by [START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF][START_REF] Carmona | Approximation of some Gaussian processes[END_REF], we approximate the measure µ by a weighted sum of Dirac measures µ n = n i=1 c n i δ γ n i , n ≥ 1, leading to the following approximation V n = (V n t ) t≤T of the variance process V V n t = g n (t) + n i=1 c n i V n,i t , t ∈ [0, T ], (3.2) dV n,i t = (-γ n i V n,i t -λV n t )dt + σ(V n t )dB t , V n,i 0 = 0, where g n (t) = V 0 + t 0 K n (t -u)θ(u)du, (3.3) and K n (t) = n i=1 c n i e -γ n i t . (3.4) The choice of the positive weights (c n i ) 1≤i≤n and mean reversions (γ n i ) 1≤i≤n , which is crucial for the accuracy of the approximation, is studied in Section 3.2 below. Before proving the convergence of (V n ) n≥1 , we shall first discuss the existence and uniqueness of such processes. This is done by rewriting the stochastic equation (3.2) as a stochastic Volterra equation of the form V n t = g n (t) + t 0 K n (t -s) (-λV n s ds + σ(V n s )dB s ) , t ∈ [0, T ]. (3.5) The existence of a continuous non-negative weak solution V n is ensured by Theorem B.4 together with Remarks B.5 and B.6 in the Appendix 3 , because θ and V 0 are non-negative and σ(0) = 0. Moreover, pathwise uniqueness of solutions to (3.5) follows by adapting the standard arugments of [START_REF] Yamada | On the uniqueness of solutions of stochastic differential equations[END_REF], provided a suitable Hölder continuity of σ, see Proposition B.3 in the Appendix. Note that this extension is made possible due to the smoothness of the kernel K n . For instance, this approach fails for the fractional kernel because of the singularity at zero. This leads us to the following result which establishes the strong existence and uniqueness of a non-negative solution of (3.5) and equivalently of (3.2). Theorem 3.1. Assume that θ : [0, T ] → R is a deterministic non-negative function satisfying (2.1) and that σ : R → R is η-Hölder continuous with σ(0) = 0 and η ∈ [1/2, 1]. Then, there exists a unique strong non-negative solution V n = (V n t ) t≤T to the stochastic Volterra equation (3.5) for each n ≥ 1. Due to the uniqueness of (3.2), we obtain that V n is a Markovian process according to n state variables (V n,i ) 1≤i≤n that we call the factors of V n . Moreover, V n being non-negative, it can model a variance process. This leads to the following definition of multi-factor stochastic volatility models. Definition 3.2. (Multi-factor stochastic volatility models). We define the following sequence of multi-factor stochastic volatility models (S n , V n ) = (S n t , V n t ) t≤T as the unique R × R +valued strong solution of dS n t = S n t V n t dW t , V n t = g n (t) + n i=1 c n i V n,i t , with dV n,i t = (-γ n i V n,i t -λV n t )dt + σ(V n t )dB t , V n,i 0 = 0, S n 0 = S 0 > 0, on a filtered probability space (Ω, F, P, F), where F is the canonical filtration a two-dimensional Brownian motion (W, W ⊥ ) and B = ρW + 1 -ρ 2 W ⊥ with ρ ∈ [-1, 1]. Here, the weights (c n i ) 1≤i≤n and mean reversions (γ n i ) 1≤i≤n are positive, σ : R → R is η-Hölder continuous such that σ(0) = 0, η ∈ [1/2, 1] and g n is given by (3.3), that is g n (t) = V 0 + t 0 K n (t -s)θ(s)ds, with a non-negative initial variance V 0 , a kernel K n defined as in (3.4) and a non-negative deterministic function θ : [0, T ] → R satisfying (2.1). Note that the strong existence and uniqueness of (S n , V n ) follows from Theorem 3.1. 
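The Markovian structure of Definition 3.2 makes simulation straightforward. The sketch below is not from the original paper: it is a plain Euler scheme for the case σ(x) = ν√x, with the weights and mean reversions passed as inputs (an admissible choice is discussed in the next subsection) and θ taken constant so that g_n is available in closed form. Negative values of V^n produced by the discretization are truncated inside the square root, a crude fix used here only for illustration.

```python
import numpy as np

def simulate_multifactor(c, gam, S0, V0, theta, lam, nu, rho, T, N, seed=0):
    # Euler scheme for the n-factor model of Definition 3.2 with sigma(x) = nu*sqrt(x).
    # c, gam: weights (c_i^n) and mean reversions (gamma_i^n), assumed positive.
    n, dt = len(c), T / N
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal(N) * np.sqrt(dt)
    dB = rho * dW + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(N) * np.sqrt(dt)

    def g_n(t):
        # g_n(t) = V0 + int_0^t K_n(t-s) * theta ds, in closed form for constant theta
        return V0 + theta * np.sum(c * (1.0 - np.exp(-gam * t)) / gam)

    U = np.zeros(n)                        # factors V^{n,i}, all started at 0
    S, V = S0, g_n(0.0)
    path_S, path_V = [S], [V]
    for k in range(N):
        vol = np.sqrt(max(V, 0.0))         # truncation: Euler may push V^n slightly below 0
        S = S + S * vol * dW[k]
        U = U + (-gam * U - lam * V) * dt + nu * vol * dB[k]
        V = g_n((k + 1) * dt) + float(np.dot(c, U))
        path_S.append(S)
        path_V.append(V)
    return np.array(path_S), np.array(path_V)
```

With n factors and N time steps the cost of this scheme is O(nN), to be compared with the O(N^2) cost of a direct discretization of the convolution in (1.2).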
This model is Markovian with n + 1 state variables which are the spot price S n and the factors of the variance process V n,i for i ∈ {1, . . . , n}. An approximation of the fractional kernel Relying on (3.5), we can see the process V n as an approximation of V , solution of (1.2), obtained by smoothing the fractional kernel K(t) = t H-1 2 Γ(H+1/2) into K n (t) = n i=1 c n i e -γ n i t . Intuitively, we need to choose K n close to K when n goes to infinity, so that (V n ) n≥1 converges to V . Inspired by [START_REF] Carmona | Approximation of some Gaussian processes[END_REF], we give in this section a condition on the weights (c n i ) 1≤i≤n and mean reversion terms 0 < γ n 1 < ... < γ n n so that the following convergence K n -K 2,T → 0, holds as n goes to infinity, where • 2,T is the usual L 2 ([0, T ], R) norm. Let (η n i ) 0≤i≤n be auxiliary mean reversion terms such that η n 0 = 0 and η n i-1 ≤ γ n i ≤ η n i for i ∈ {1, . . . , n}. Writing K as the Laplace transform of µ as in (1.3), we obtain that K n -K 2,T ≤ ∞ η n n e -γ(•) 2,T µ(dγ) + n i=1 J n i , with J n i = c n i e -γ n i (•) - η n i η n i-1 e -γ(•) µ(dγ) 2,T . We start by dealing with the first term, ∞ η n n e -γ(•) 2,T µ(dγ) = ∞ η n n 1 -e -2γT 2γ µ(dγ) ≤ 1 HΓ(H + 1/2)Γ(1/2 -H) √ 2 (η n n ) -H . Moreover by choosing c n i = η n i η n i-1 µ(dγ), γ n i = 1 c n i η n i η n i-1 γµ(dγ), i ∈ {1, . . . , n}, (3.6) and using the Taylor-Lagrange inequality up to the second order, we obtain c n i e -γ n i t - η n i η n i-1 e -γt µ(dγ) ≤ t 2 2 η n i η n i-1 (γ -γ n i ) 2 µ(dγ), t ∈ [0, T ]. (3.7) Therefore, n i=1 J n i ≤ T 5/2 2 √ 5 n i=1 η n i η n i-1 (γ n i -γ) 2 µ(dγ). This leads to the following inequality K n -K 2,T ≤ f (2) n (η i ) 0≤i≤n , where f n is a function of the auxiliary mean reversions defined by f (2) n ((η n i ) 1≤i≤n ) = T 5 2 2 √ 5 n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) + 1 HΓ(H + 1/2)Γ(1/2 -H) √ 2 (η n n ) -H . (3.8) Hence, we obtain the convergence of K n to the fractional kernel under the following choice of weights and mean reversions. Assumption 3.1. We assume that the weights and mean reversions are given by (3.6) such that η n 0 = 0 < η n 1 < . . . < η n n and η n n → ∞, n i=1 η n i η n i-1 (γ n i -γ) 2 µ(dγ) → 0, (3.9) as n goes to infinity. Proposition 3.3. Fix (c n i ) 1≤i≤n and (γ n i ) 1≤i≤n as in Assumption 3.1 and K n given by (3.4), for all n ≥ 1. Then, (K n ) n≥1 converges in L 2 [0, T ] to the fractional kernel K(t) = t H-1/2 Γ(H+ 1 2 ) as n goes to infinity. There exists several choices of auxiliary factors such that condition (3.9) is met. For instance, assume that η n i = iπ n for each i ∈ {0, . . . , n} such that π n > 0. It follows from n i=1 η n i η n i-1 (γ -γ i ) 2 µ(dγ) ≤ π 2 n η n n 0 µ(dγ) = 1 (1/2 -H)Γ(H + 1/2)Γ(1/2 -H) π 5 2 -H n n 1 2 -H , that (3.9) is satisfied for η n n = nπ n → ∞, π 5 2 -H n n 1 2 -H → 0, as n tends to infinity. In this case, K n -K 2,T ≤ 1 HΓ(H + 1/2)Γ(1/2 -H) √ 2 (η n n ) -H + HT 5 2 √ 10(1/2 -H) π 2 n (η n n ) 1 2 -H . This upper bound is minimal for π n = n -1 5 T √ 10(1 -2H) 5 -2H 2 5 , (3.10) and K n -K 2,T ≤ C H n -4H 5 , where C H is a positive constant that can be computed explicitly and that depends only on the Hurst parameter H ∈ (0, 1/2). Remark 3.4. Note that the kernel approximation in Proposition 3.3 can be easily extended to any kernel of the form K(t) = ∞ 0 e -γt µ(dγ), where µ is a non-negative measure such that ∞ 0 (1 ∧ γ -1/2 )µ(dγ) < ∞. 
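The choice (3.6) is fully explicit, since the moments of µ over [η_{i-1}^n, η_i^n] are power functions. The sketch below (not in the original) computes the weights and mean reversions for the uniform auxiliary grid η_i^n = iπ_n, reading (3.10) as π_n = n^{-1/5} T^{-1} (√10 (1-2H)/(5-2H))^{2/5}, and then estimates the L^2([0,T]) distance between K_n and K by a rough midpoint quadrature; the error should decay roughly like n^{-4H/5}, consistently with the bound above.

```python
import numpy as np
from math import gamma

H, T = 0.1, 1.0
c_mu = 1.0 / (gamma(H + 0.5) * gamma(0.5 - H))      # mu(dx) = c_mu * x^(-H-1/2) dx

def weights_and_speeds(n):
    # Uniform auxiliary grid eta_i = i * pi_n, with pi_n as in (3.10)
    pi_n = n ** (-0.2) / T * (np.sqrt(10.0) * (1.0 - 2.0 * H) / (5.0 - 2.0 * H)) ** 0.4
    eta = np.arange(n + 1) * pi_n
    m0 = c_mu * (eta[1:] ** (0.5 - H) - eta[:-1] ** (0.5 - H)) / (0.5 - H)   # int of mu
    m1 = c_mu * (eta[1:] ** (1.5 - H) - eta[:-1] ** (1.5 - H)) / (1.5 - H)   # first moment of mu
    return m0, m1 / m0                                # c_i^n and gamma_i^n from (3.6)

def l2_error(n, n_grid=10000):
    c, gam = weights_and_speeds(n)
    t = (np.arange(n_grid) + 0.5) * T / n_grid
    K = t ** (H - 0.5) / gamma(H + 0.5)
    K_n = np.exp(-np.outer(t, gam)) @ c
    return np.sqrt(np.mean((K_n - K) ** 2) * T)

for n in (10, 50, 100, 500):
    print(n, l2_error(n))        # expected decay roughly of order n^(-4H/5)
```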
Convergence result We assume now that the weights and mean reversions of the multi-factor stochastic volatility model (S n , V n ) satisfy Assumption 3.1. Thanks to Proposition 3.3, the smoothed kernel K n is close to the fractional one for large n. Because V n satisfies the stochastic Volterra equation (3.5), V n has to be close to V and thus by passing to the limit, (S n , V n ) n≥1 should converge to the rough volatility model (S, V ) of Definition 2.1 as n goes large. This is the object of the next theorem, which is the main result of this paper. Theorem 3.5. Let (S n , V n ) n≥1 be a sequence of multi-factor stochastic volatility models given by Definition 3.2. Then, under Assumption 3.1, the family (S n , V n ) n≥1 is tight for the uniform topology and any point limit (S, V ) is a rough volatility model given by Definition 2.1. Theorem 3.5 states the convergence in law of (S n , V n ) n≥1 whenever the fractional stochastic integral equation (1.2) admits a unique weak solution. In order to prove Theorem 3.5, whose proof is in Section 5.2 below, a more general stability result for d-dimensional stochastic Volterra equations is established in the next subsection. Stability of stochastic Volterra equations As mentioned above, Theorem 3.5 relies on the study of the stability of more general ddimensional stochastic Volterra equations of the form X t = g(t) + t 0 K(t -s)b(X s )ds + t 0 K(t -s)σ(X s )dW s , t ∈ [0, T ], (3.11) where b : R d → R d , σ : R d → R d×m are continuous and satisfy the linear growth condition, K ∈ L 2 ([0, T ], R d×d ) admits a resolvent of the first kind L, see Appendix A.2, and W is a m-dimensional Brownian motion on some filtered probability space (Ω, F, F, P). From Proposition B.1 in the Appendix, g : [0, T ] → R d and K ∈ L 2 ([0, T ], R d×d ) should satisfy Assumption B.1, that is |g(t + h) -g(t)| 2 + h 0 |K(s)| 2 ds + T -h 0 |K(h + s) -K(s)| 2 ds ≤ Ch 2γ , (3.12) for any t, h ≥ 0 with t + h ≤ T and for some positive constants C and γ, to guarantee the weak existence of a continuous solution X of (3.11). More precisely, we consider a sequence X n = (X n t ) t≤T of continuous weak solutions to the stochastic Volterra equation (3.11) with a kernel K n ∈ L 2 ([0, T ], R d×d ) admitting a resolvent of the first kind, on some filtered probability space (Ω n , F n , F n , P n ), X n t = g n (t) + t 0 K n (t -s)b(X n s )ds + t 0 K n (t -s)σ(X n s )dW n s , t ∈ [0, T ], with g n : [0, T ] → R d and K n satisfying (3.12) for every n ≥ 1. The stability of (3.11) means the convergence in law of the family of solutions (X n ) n≥1 to a limiting process X which is a solution to (3.11), when (K n , g n ) is close to (K, g) as n goes large. This convergence is established by verifying first the Kolmogorov tightness criterion for the sequence (X n ) n≥1 . It is obtained when g n and K n satisfy (3.12) uniformly in n in the following sense. Assumption 3.2. There exists positive constants γ and C such that sup n≥1 |g n (t + h) -g n (t)| 2 + h 0 |K n (s)| 2 ds + T -h 0 |K n (h + s) -K n (s)| 2 ds ≤ Ch 2γ , for any t, h ≥ 0 with t + h ≤ T , The following result, whose proof is postponed to Section 5.1 below, states the convergence of (X n ) n≥1 to a solution of (3.11). Theorem 3.6. Assume that T 0 |K(s) -K n (s)| 2 ds -→ 0, g n (t) -→ g(t), for any t ∈ [0, T ] as n goes to infinity. Then, under Assumption 3.2, the sequence (X n ) n≥1 is tight for the uniform topology and any point limit X is a solution of the stochastic Volterra equation (3.11). 
The particular case of the rough Heston model The rough Heston model introduced in [START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF] is a particular case of the class of rough volatility models of Definition 2.1, with σ(x) = ν √ x for some positive parameter ν, that is dS t = S t V t dW t , S 0 > 0, V t = g(t) + t 0 K(t -s) -λV s ds + ν V s dB s , where K(t) = t H-1 2 Γ(H+1/2) denotes the fractional kernel and g is given by (3.1). Aside from reproducing accurately the historical and implied volatility, the rough Heston model displays a closed formula for the characteristic function of the log-price in terms of a solution to a fractional Riccati equation allowing to fast pricing and calibration, see [START_REF] Euch | Roughening Heston[END_REF]. More precisely, it is shown in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF] that L(t, z) = E exp z log(S t /S 0 ) is given by exp t 0 F (z, ψ(t -s, z))g(s)ds , (4.1) where ψ(•, z) is the unique continuous solution of the fractional Riccati equation ψ(t, z) = t 0 K(t -s)F (z, ψ(s, z))ds, t ∈ [0, T ], (4.2) with F (z, x) = 1 2 (z 2 -z) + (ρνz -λ)x + ν 2 2 x 2 and z ∈ C such that (z) ∈ [0, 1] . Unlike the classical case H = 1/2, (4.2) does not exhibit an explicit solution. However, it can be solved numerically through the Adam scheme developed in [START_REF] Diethelm | A predictor-corrector approach for the numerical solution of fractional differential equations[END_REF][START_REF] Diethelm | Detailed error analysis for a fractional Adams method[END_REF][START_REF] Diethelm | The fracpece subroutine for the numerical solution of differential equations of fractional order[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF] for instance. In this section, we show that the multi-factor approximation applied to the rough Heston model gives rise to another natural numerical scheme for solving the fractional Riccati equation. Furthermore, we will establish the convergence of this scheme with explicit errors. Multi-factor scheme for the fractional Riccati equation We consider the multi-factor approximation (S n , V n ) of Definition 3.2 with σ(x) = ν √ x, where the number of factors n is large, that is dS n t = S n t V n t dW t , V n t = g n (t) + n i=1 c n i V n,i t , with dV n,i t = (-γ n i V n,i t -λV n t )dt + ν V n t dB t , V n,i 0 = 0, S n 0 = S 0 . Recall that g n is given by (3.3) and it converges pointwise to g as n goes large, see Lemma 5.1. We write the dynamics of (S n , V n ) in terms of a Volterra Heston model with the smoothed kernel K n given by (3.4) as follows dS n t = S n t V n t dW t , V n t = g n (t) - t 0 K n (t -s)λV n s ds + t 0 K n (t -s)ν V n s dB s . In [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF], the characteristic function formula of the log-price (4.1) is extended to the general class of Volterra Heston models. In particular, L n (t, z) = E exp z log(S n t /S 0 ) is given by exp t 0 F (z, ψ n (t -s, z))g n (s)ds , (4.3) where ψ n (•, z) is the unique continuous solution of the Riccati Volterra equation ψ n (t, z) = t 0 K n (t -s)F (z, ψ n (s, z))ds, t ∈ [0, T ], (4.4 ) for each z ∈ C with (z) ∈ [0, 1]. 
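Since K_n is a finite sum of exponentials, the Riccati Volterra equation (4.4) can be integrated by carrying one state per factor; this reduction to an n-dimensional system of ordinary Riccati equations is made explicit in the next subsection. The sketch below is not part of the original text: it is a plain explicit Euler discretization of that system, with the weights and mean reversions passed as inputs and default parameters matching those used in the numerical illustrations below.

```python
import numpy as np

def psi_multifactor(z, c, gam, lam=0.3, rho=-0.7, nu=0.3, T=1.0, n_dt=200):
    # Explicit Euler scheme for psi^n(., z) solving (4.4) with the smoothed kernel K_n.
    # One complex state per factor, so the total cost is O(n * n_dt).
    def F(x):
        return 0.5 * (z * z - z) + (rho * nu * z - lam) * x + 0.5 * nu ** 2 * x * x

    dt = T / n_dt
    psi_i = np.zeros(len(c), dtype=complex)      # psi^{n,i}(0, z) = 0 for every factor
    out = [0.0 + 0.0j]
    for _ in range(n_dt):
        psi = np.dot(c, psi_i)                   # psi^n = sum_i c_i^n psi^{n,i}
        psi_i = psi_i + dt * (-gam * psi_i + F(psi))
        out.append(np.dot(c, psi_i))
    return np.array(out)                         # psi^n(t_k, z) on the uniform time grid
```

Plugging ψ^n(T - s, z) into (4.3) then yields the characteristic function of the log-price, which can be inverted by standard Fourier methods to produce prices and implied volatilities.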
Thanks to the weak uniqueness of the rough Heston model, established in several works [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF], and to Theorem 3.5, (S n , V n ) n≥1 converges in law for the uniform topology to (S, V ) when n tends to infinity. In particular, L n (t, z) converges pointwise to L(t, z). Therefore, we expect ψ n (•, z) to be close to the solution of the fractional Riccati equation (4.2). This is the object of the next theorem, whose proof is reported to Section 5.3 below. where an explicit error is given. Indeed, set ψ n,i (t, z) = t 0 e -γ n i (t-s) F (z, ψ n (s, z))ds, i ∈ {1, . . . , n}. Then, ψ n (t, z) = n i=1 c n i ψ n,i (t, z), and (ψ n,i (•, z)) 1≤i≤n solves the following n-dimensional system of ordinary Riccati equations ∂ t ψ n,i (t, z) = -γ n i ψ n,i (t, z) + F (z, ψ n (t, z)), ψ n,i (0, z) = 0, i ∈ {1, . . . , n}. (4.5) Hence, (4.5) can be solved numerically by usual finite difference methods leading to ψ n (•, z) as an approximation of the fractional Riccati solution. Numerical illustrations In this section, we consider a rough Heston model with the following parameters λ = 0.3, ρ = -0.7, ν = 0.3, H = 0.1, V 0 = 0.02, θ ≡ 0.02. We discuss the accuracy of the multi-factor approximation sequence (S n , V n ) n≥1 as well as the corresponding Riccati Volterra solution (ψ n (•, z)) n≥1 , for different choices of the weights (c n i ) 1≤i≤n and mean reversions (γ n i ) 1≤i≤n . This is achieved by first computing, for different number of factors n, the implied volatility σ n (k, T ) of maturity T and log-moneyness k by a Fourier inversion of the characteristic function formula (4.3), see [START_REF] Carr | Option valuation using the fast Fourier transform[END_REF][START_REF] Lewis | A simple option formula for general jump-diffusion and other exponential lévy processes[END_REF] for instance. In a second step, we compare σ n (k, T ) to the implied volatility σ(k, T ) of the rough Heston model. We also compare the Riccati Volterra solution ψ n (T, z) to the fractional one ψ(T, z). Note that the Riccati Volterra solution ψ n (•, z) is computed by solving numerically the ndimensional Riccati equation (4.5) with a classical finite difference scheme. The complexity of such scheme is O(n × n ∆t ), where n ∆t is the number of time steps applied for the scheme, while the complexity of the Adam scheme used for the computation of ψ(•, z) is O(n 2 ∆t ). In the following numerical illustrations, we fix n ∆t = 200. In order to guarantee the convergence, the weights and mean reversions have to satisfy Assumption 3.1 and in particular they should be of the form (3.6) in terms of auxiliary mean reversions (η n i ) 0≤i≤n satisfying (3.9). For instance, one can fix η n i = iπ n , i ∈ {0, . . . , n}, (4.6) where π n is defined by (3.10), as previously done in Section 3.2. For this particular choice, Figure 1 shows a decrease of the relative error ψ n (T,ib)-ψ(T,ib) ψ(T,ib) towards zero for different values of b. We also observe in the Figure 2 below that the implied volatility σ n (k, T ) of the multifactor approximation is close to σ(k, T ) for a number of factors n ≥ 20. Notice that the approximation is more accurate around the money. In order to obtain a more accurate convergence, we can minimize the upper bound f (3.8). 
Hence, we choose (η n i ) 0≤i≤n to be a solution of the constrained minimization problem inf (2) n ((η n i ) 0≤i≤n ) of K n -K 2,T defined in (η n i ) i ∈En f (2) n ((η n i ) 0≤i≤n ), (4.7) where We notice from Figure 3, that the relative error | ψ n (T,ib)-ψ(T,ib) E n = {(η n i ) 0≤i≤n ; 0 = η n 0 < η n 1 < ... < η n n }. ψ(T,ib) | is smaller under the choice of factors (4.7). Indeed the Volterra approximation ψ n (T, ib) is now closer to the fractional Riccati solution ψ(T, ib) especially for small number of factors. However, when n is large, the accuracy of the approximation seems to be close to the one under (4.6). For instance when n = 500, the relative error is around 1% under both (4.6) and (4.7). In the same way, we observe in Figure 4 that the accuracy of the implied volatility approximation σ n (k, T ) is more satisfactory under (4.7) especially for a small number of factors. Theorem 4.1 states that the convergence of ψ n (•, z) depends actually on the L 1 ([0, T ], R)-error between K n and K. Similarly to the computations of Section 3.2, we may show that, T 0 |K n (s) -K(s)|ds ≤ f (1) n ((η n i ) 0≤i≤n ), where f (1) n ((η n i ) 0≤i≤n ) = T 3 6 n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) + 1 Γ(H + 3/2)Γ(1/2 -H) (η n n ) -H-1 2 . This leads to choosing (η n i ) 0≤i≤n as a solution of the constrained minimization problem inf (η n i ) i ∈En f (1) n ((η n i ) 0≤i≤n ). (4.8) It is easy to show that such auxiliary mean-reversions (η n i ) 0≤i≤n satisfy (3.9) and thus Assumption 3.1 is met. Figures 5 and6 exhibit similar results as the ones in Figures 3 and4 corresponding to the choice of factors (4.7). In fact, we notice in practice that the solution of the minimization problem (4.7) is close to the one in (4.8). Upper bound for call prices error Using a Fourier transform method, we can also provide an error between the price of the call C n (k, T ) = E[(S n T -S 0 e k ) + ] in the multi-factor model and the price of the same call C(k, T ) = E[(S T -S 0 e k ) + ] in the rough Heston model. However, for technical reasons, this bound is obtained for a modification of the multi-factor approximation (S n , V n ) n≥1 of Definition 3.2 where the function g n given initially by (3.3) is updated into g n (t) = t 0 K n (t -s) V 0 s -H-1 2 Γ(1/2 -H) + θ(s) ds, (4.9) where K n is the smoothed approximation (3.4) of the fractional kernel. Note that the strong existence and uniqueness of V n is still directly obtained from Proposition B.3 and its nonnegativity from Theorem B.4 together with Remarks B.5 and B.6 in the Appendix4 . Although for g n satisfying (4.9), (V n ) n≥1 can not be tight5 , the corresponding spot price (S n ) n≥1 converges as shown in the following proposition. Proposition 4.2. Let (S n , V n ) n≥1 be a sequence of multi-factor Heston models as in Definition 3.2 with σ(x) = ν √ x and g n given by (4.9). Then, under Assumption 3.1, (S n , • 0 V n s ds) n≥1 converges in law for the uniform topology to (S, • 0 V s ds), where (S, V ) is a rough Heston model as in Definition 2.1 with σ(x) = ν √ x. Note that the characteristic function (4.3) still holds. Using Theorem 4.1 together with a Fourier transform method, we obtain an explicit error for the call prices. We refer to Section 5.5 below for the proof. Proposition 4.3. Let C(k, T ) be the price of the call in the rough Heston model with maturity T > 0 and log-moneyness k ∈ R. We denote by C n (k, T ) the price of the call in the multifactor Heston model of Definition 3.2 such that g n is given by (4.9). 
If |ρ| < 1, then there exists a positive constant c > 0 such that |C(k, T ) -C n (k, T )| ≤ c T 0 |K(s) -K n (s)|ds, n ≥ 1. Proofs In this section, we use the convolution notations together with the resolvent definitions of Appendix A. We denote by c any positive constant independent of the variables t, h and n and that may vary from line to line. For any h ∈ R, we will use the notation ∆ h to denote the semigroup operator of right shifts defined by ∆ h f : t → f (h + t) for any function f . We first prove Theorem 3.6, which is the building block of Theorem 3.5. Then, we turn to the proofs of the results contained in Section 4, which concern the particular case of the rough Heston model. Proof of Theorem 3.6 Tightness of (X n ) n≥1 : We first show that, for any p ≥ 2, sup n≥1 sup t≤T E[|X n t | p ] < ∞. (5.1) Thanks to Proposition B.1, we already have sup t≤T E[|X n t | p ] < ∞. (5.2) Using the linear growth of (b, σ) and (5.2) together with Jensen and BDG inequalities, we get E[|X n t | p ] ≤ c sup t≤T |g n (t)| p + T 0 |K n (s)| 2 ds p 2 -1 t 0 |K n (t -s)| 2 (1 + E[|X n s | p ])ds) . Relying on Assumption 3.2 and the convergence of (g n (0), T 0 |K n (s)| 2 ds) n≥1 , sup t≤T |g n (t)| p and T 0 |K n (s)| 2 ds are uniformly bounded in n. This leads to E[|X n t | p ] ≤ c 1 + t 0 |K n (t -s)| 2 E[|X n s | p ]ds) . By the Grönwall type inequality in Lemma A.4 in the Appendix, we deduce that E[|X n t | p ] ≤ c 1 + t 0 E n c (s)ds) ≤ c 1 + T 0 E n c (s)ds) , where We now show that (X n ) n≥1 exhibits the Kolmogorov tightness criterion. In fact, using again the linear growth of (b, σ) and (5.1) together with Jensen and BDG inequalities, we obtain, for any p ≥ 2 and t, h ≥ 0 such that t + h ≤ T , E n c ∈ L 1 ([0, T ], R) E[|X n t+h -X n t | p ] ≤ c |g n (t+h)-g n (t)| p + T -h 0 |K n (h+s)-K n (s)| 2 ds p/2 + h 0 |K n (s)| 2 ds p/2 . Hence, Assumption 3.2 leads to E[|X n t+h -X n t | p ] ≤ ch pγ , and therefore to the tightness of (X n ) n≥1 for the uniform topology. Convergence of (X n ) n≥1 : Let M n t = t 0 σ(X n s )dW n s . As M n t = t 0 σσ * (X n s )ds, ( M n ) n≥1 is tight and consequently we get the tightness of (M n ) n≥1 from [18, Theorem VI-4.13]. Let (X, M ) = (X t , M t ) t≤T be a possible limit point of (X n , M n ) n≥1 . Thanks to [18, Theorem VI-6.26], M is a local martingale and necessarily M t = t 0 σσ * (X s )ds, t ∈ [0, T ]. Moreover, setting Y n t = t 0 b(X n s )ds + M n t , the assoicativity property (A.1) in the Appendix yields (L * X n ) t = (L * g n )(t) + L * (K n -K) * dY n t + Y n t , (5.3) where L is the resolvent of the first kind of K defined in Appendix A.2. By the Skorokhod representation theorem, we construct a probability space supporting a sequence of copies of (X n , M n ) n≥1 that converges uniformly on [0, T ], along a subsequence, to a copy of (X, M ) almost surely, as n goes to infinity. We maintain the same notations for these copies. Hence, we have sup t∈[0,T ] |X n t -X t | → 0, sup t∈[0,T ] |M n t -M t | → 0, almost surely, as n goes to infinity. Relying on the continuity and linear growth of b together with the dominated convergence theorem, it is easy to obtain for any t ∈ [0, T ] (L * X n ) t → (L * X) t , t 0 b(X n s )ds → t 0 b(X s )ds, almost surely as n goes to infinity. Moreover for each t ∈ [0, T ] (L * g n )(t) → (L * g)(t), by the uniform boundedness of g n in n and t and the dominated convergence theorem. Finally thanks to the Jensen inequality, E[| (L * ((K n -K) * dY n )) t | 2 ] ≤ c sup t≤T E[| ((K n -K) * dY n ) t | 2 ]. 
From (5.1) and the linear growth of (b, σ), we deduce sup t≤T E[| ((K n -K) * dY n ) t | 2 ] ≤ c T 0 |K n (s) -K(s)| 2 ds, which goes to zero when n is large. Consequently, we send n to infinity in (5.3) and obtain the following almost surely equality, for each t ∈ [0, T ], (L * X) t = (L * g)(t) + t 0 b(X s )ds + M t . (5.4) Recall also that M = • 0 σσ * (X s )ds. Hence, by [23, Theorem V-3.9], there exists a mdimensional Brownian motion W such that M t = t 0 σ(X s )dW s , t ∈ [0, T ]. The processes in (5.4) being continuous, we deduce that, almost surely, (L * X) t = (L * g)(t) + t 0 b(X s )ds + t 0 σ(X s )dW s , t ∈ [0, T ]. We convolve by K and use the associativity property (A.1) in the Appendix to get that, almost surely, t 0 X s ds = t 0 g(s)ds + t 0 s 0 K(s -u)(b(X u )du + σ(X u )dW u ) ds, t ∈ [0, T ]. Finally it is easy to see that the processes above are differentiable and we conclude that X is solution of the stochastic Volterra equation (3.11), by taking the derivative. Proof of Theorem 3.5 Theorem 3.5 is easily obtained once we prove the tightness of (V n ) n≥1 for the uniform topology and that any limit point V is solution of the fractional stochastic integral equation (1.2). This is a direct consequence of Theorem 3.6, by setting d = m = 1, g and g n respectively as in (3.1) and (3.3), b(x) = -λx, K being the fractional kernel and K n (t) = n i=1 c n i e -γ n i t its smoothed approximation. Under Assumption 3.1, (K n ) n≥1 converges in L 2 ([0, T ], R) to the fractional kernel, see Proposition 3.3. Hence, it is left to show the pointwise convergence of (g n ) n≥1 to g on [0, T ] and that (K n , g n ) n≥1 satisfies Assumption 3.2. g n (t) → g(t), as n tends to infinity. Proof. Because θ satisfies (2.1), it is enough to show that for each t ∈ [0, T ] t 0 (t -s) -1 2 -ε |K n (s) -K(s)|ds (5.5) converges to zero as n goes large, for some ε > 0 and K n given by (3.4). Using the representation of K as the Laplace transform of µ as in (1.3), we obtain that (5.5) is bounded by t 0 (t -s) -1 2 -ε ∞ η n n e -γs µ(dγ)ds + n i=1 t 0 (t -s) -1 2 -ε |c n i e -γ n i s - η n i η n i-1 e -γs µ(dγ)|ds. (5.6) The first term in (5.6) converges to zero for large n by the dominated convergence theorem because η n n tends to infinity, see Assumption 3.1. Using the Taylor-Lagrange inequality (3.7), the second term in (5.6) is dominated by 1 2 t 0 (t -s) -1 2 -ε s 2 ds n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ), which goes to zero thanks to Assumption 3.1. Lemma 5.2 (K n satisfying Assumption 3.2). Under Assumption 3.1, there exists C > 0 such that, for any t, h ≥ 0 with t + h ≤ T , sup n≥1 T -h 0 |K n (h + s) -K n (s)| 2 ds + h 0 |K n (s)| 2 ds ≤ Ch 2H , where K n is defined by (3.4). Proof. We start by proving that for any t, h ≥ 0 with t + h ≤ T h 0 |K n (s)| 2 ds ≤ ch 2H . (5.7) In fact we know that this inequality is satisfied for K(t) = t H-1 2 Γ(H+1/2) . Thus it is enough to prove K n -K 2,h ≤ ch H , where • 2,h stands for the usual L 2 ([0, h], R) norm. Relying on the Laplace transform representation of K given by (1.3), we obtain K n -K 2,h ≤ ∞ η n n e -γ(•) 2,h µ(dγ) + n i=1 J n i,h , where J n i,h = c n i e -γ n i (•) - η n i η n i-1 e -γ(•) µ(dγ) 2,h . We start by bounding the first term, ∞ η n n e -γ(•) 2,h µ(dγ) ≤ ∞ 0 1 -e -2γh 2γ µ(dγ) = h H Γ(H + 1/2)Γ(1/2 -H) √ 2 ∞ 0 1 -e -2γ γ γ -H-1 2 dγ. As in Section 3.2, we use the Taylor-Lagrange inequality (3.7) to get n i=1 J n i,h ≤ 1 2 √ 5 h 5 2 n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ). 
Using the boundedness of n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) n≥1 from Assumption 3. 1, we deduce (5.7). We now prove T -h 0 |K n (h + s) -K n (s)| 2 ds ≤ ch 2H . (5.8) In the same way, it is enough to show (∆ h K n -∆ h K) -(K n -K) 2,T -h ≤ ch H , Similarly to the previous computations, we get (∆ h K n -∆ h K) -(K n -K) 2,T -h ≤ ∞ η n n e -γ(•) -e -γ(h+•) 2,T -h µ(dγ) + n i=1 J n i,h , with J n i,h = c n i (e -γ n i (•) -e -γ n i (h+•) ) - η n i η n i-1 (e -γ(•) -e -γ(h+•) )µ(dγ) 2,T -h . Notice that ∞ η n n e -γ(•) -e -γ(h+•) 2,T -h µ(dγ) = ∞ η n n (1 -e -γh ) 1 -e -2γ(T -h) 2γ µ(dγ) ≤ c ∞ 0 (1 -e -γh )γ -H-1 dγ ≤ ch H . Moreover, fix h, t > 0 and set χ(γ) = e -γt -e -γ(t+h) . The second derivative reads χ (γ) = h t 2 γe -γt 1 -e -γh γh -he -γ(t+h) -2te -γ(t+h) , γ > 0. (5.9) Because x → xe -x and x → 1-e -x x are bounded functions on (0, ∞), there exists C > 0 independent of t, h ∈ [0, T ] such that |χ (γ)| ≤ Ch, γ > 0. The Taylor-Lagrange formula, up to the second order, leads to |c n i (e -γ n i t -e -γ n i (t+h) ) - η n i η n i-1 (e -γt -e -γ(t+h) )µ(dγ)| ≤ C 2 h η n i η n i-1 (γ -γ n i ) 2 µ(dγ). Thus n i=1 J n i,h ≤ C 2 h n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ). Finally, (5.8) follows from the boundedness of n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) n≥1 due to Assump- tion 3.1. Lemma 5.3 (g n satisfying Assumption 3.2). Define g n : [0, T ] → R by (3.3) such that θ : [0, T ] → R satisfies (2.1). Under Assumption 3.1, for each ε > 0, there exists C ε > 0 such that for any t, h ≥ 0 with t + h ≤ T sup n≥1 |g n (t) -g n (t + h)| ≤ C ε h H-ε . Proof. Because θ satisfies (2.1), it is enough to prove that, for each fixed ε > 0, there exists C > 0 such that sup n≥1 h 0 (h -s) -1 2 -ε |K n (s)|ds ≤ Ch H-ε , (5.10) and sup n≥1 t 0 (t -s) -1 2 -ε |K n (s) -K n (h + s)|ds ≤ Ch H-ε , (5.11) for any t, h ≥ 0 with t + h ≤ T . (5.10) being satisfied for the fractional kernel, it is enough to establish h 0 (h -s) -1 2 -ε |K n (s) -K(s)|ds ≤ ch H-ε . In the proof of Lemma 5.1, it is shown that h 0 (h -s) -1 2 -ε |K n (s) -K(s)|ds is bounded by (5.6), that is h 0 (h -s) -1 2 -ε ∞ η n n e -γs µ(dγ)ds + n i=1 h 0 (h -s) -1 2 -ε |c n i e -γ n i s - η n i η n i-1 e -γs µ(dγ)|ds. The first term is dominated by h 0 (h -s) -1 2 -ε ∞ 0 e -γs µ(dγ)ds = h H-ε B(1/2 -ε, H + 1/2) B(1/2 -H, H + 1/2) , where B is the usual Beta function. Moreover thanks to (3.7) and Assumption 3.1, we get n i=1 h 0 (h -s) -1 2 -ε |c n i e -γ n i s - η n i η n i-1 e -γs µ(dγ)|ds ≤ ch (t -s) -1 2 -ε |(K n (s) -∆ h K n (s)) -(K(s) -∆ h K(s))| ds ≤ ch H-ε . By similar computations as previously and using (5.9), we get that t 0 (t -s) -1 2 -ε |(K n (s) -∆ h K n (s)) -(K(s) -∆ h K(s))| ds is dominated by c t 0 (t -s) 1 2 -ε ∞ η n n (1 -e -γh )e -γs µ(dγ)ds + h n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) . The first term being bounded by t 0 (t -s) 1 2 -ε ∞ 0 (1 -e -γh )e -γs µ(dγ)ds = t 0 (t -s) 1 2 -ε (K(s) -K(h + s))ds ≤ ch H-ε , Assumption 3.1 leads to (5.11). Proof of Theorem 4.1 Uniform boundedness : We start by showing the uniform boundedness of the unique continuous solutions (ψ n (•, a + ib)) n≥1 of (4.4). Proposition 5.4. For a fixed T > 0, there exists C > 0 such that sup n≥1 sup t∈[0,T ] |ψ n (t, a + ib)| ≤ C 1 + b 2 , for any a ∈ [0, 1] and b ∈ R. Proof. 
Let z = a + ib and start by noticing that (ψ n (•, z)) is non-positive because it solves the following linear Volterra equation with continuous coefficients χ = K n * f + ρν (z) -λ + ν 2 2 (ψ n (•, z)) χ , where f = 1 2 a 2 -a -(1 -ρ 2 )b 2 - 1 2 (ρb + νψ n (•, z)) 2 is continuous non-positive, see Theorem C.1. In the same way (ψ(•, z)) is also non-positive. Moreover, observe that ψ n (•, z) solves the following linear Volterra equation with continuous coefficients χ = K n * 1 2 (z 2 -z) + (ρνz -λ + ν 2 2 ψ n (•, z))χ , and ρνz -λ + ν 2 2 ψ n (•, z) ≤ ν -λ. Therefore, Corollary C.4 leads to sup t∈[0,T ] |ψ n (t, z)| ≤ 1 2 |z 2 -z| T 0 E n ν-λ (s)ds, where E n ν-λ denotes the canonical resolvent of K n with parameter ν -λ, see Appendix A.3. This resolvent converges in L 1 ([0, T ], R) because K n converges in L 1 ([0, T ], R) to K, see [16, Theorem 2.3.1]. Hence, ( T 0 E n ν-λ (s)ds) n≥1 is bounded, which ends the proof. End of the proof of Theorem 4.1 : Set z = a + ib and recall that ψ n (•, z) = K n * F (z, ψ n (•, z)); ψ(•, z) = K * F (z, ψ(•, z)). with F (z, x) = 1 2 z 2 -z + (ρνz -λ)x + ν 2 2 x 2 . Hence, for t ∈ [0, T ], ψ(t, z) -ψ n (t, z) = h n (t, z) + K * F (z, ψ(•, z)) -F (z, ψ n (•, z)) (t), with h n (•, z) = (K n -K) * F (z, ψ n (•, z)). |h n (t, a + ib)| ≤ C(1 + b 4 ) T 0 |K n (s) -K(s)|ds, (5.12) for any b ∈ R and a ∈ [0, 1]. Moreover notice that (ψ -ψ n -h n )(•, z) is solution of the following linear Volterra equation with continuous coefficients χ = K * ρνz -λ + ν 2 2 (ψ + ψ n )(•, z) (χ + h n (•, z)) , and remark that the real part of ρνz -λ + ν 2 2 (ψ + ψ n )(•, z) is dominated by ν -λ because (ψ(•, z)) and (ψ n (•, z)) are non-positive. An application of Corollary C.4 together with (5.12) ends the proof. Proof of Proposition 4.2 We consider for each n ≥ 1, (S n , V n ) defined by the multi-factor Heston model in Definition 3.2 with σ(x) = ν √ x. Tightness of ( • 0 V n s ds, • 0 V n s dW s , • 0 V n s dB s ) n≥1 : Because the process • 0 V n s ds is non- decreasing, it is enough to show that sup n≥1 E[ T 0 V n t dt] < ∞, (5.13) to obtain its tightness for the uniform topology. Recalling that sup t∈ [0,T ] E[V n t ] < ∞ from Proposition B.1 in the Appendix, we get E t 0 V n s dB s = 0, and then by Fubini theorem E[V n t ] = g n (t) + n i=1 c n i E[V n,i t ], with E[V n,i t ] = t 0 (-γ n i E[V n,i s ] -λE[V n s ])ds. Thus t → E[V n t ] solves the following linear Volterra equation χ(t) = t 0 K n (t -s) -λχ(s) + θ(s) + V 0 s -H-1 2 Γ(1/2 -H) ds, with K n given by (3.4). Theorem A.3 in the Appendix leads to E[V n t ] = t 0 E n λ (t -s) θ(s) + V 0 s -H-1 2 Γ( 1 2 -H) ds, and then by Fubini theorem again t 0 E[V n s ]ds = t 0 t-s 0 E n λ (u)du θ(s) + V 0 s -H-1 2 Γ( 1 2 -H) ds, where E n λ is the canonical resolvent of K n with parameter λ, defined in Appendix A.3. Because (K n ) n≥1 converges to the fractional kernel K in L 1 ([0, T ], R), we obtain the convergence of E n λ in L 1 ([0, T ], R) to the canonical resolvent of K with parameter λ, see [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 2.3.1]. In particular thanks to Corollary C.2 in the Appendix, t 0 E n λ (s)ds is uniformly bounded in t ∈ [0, T ] and n ≥ 1. This leads to (5.13) and then to the tightness of ( • 0 V n s ds, • 0 V n s dW s , • 0 V n s dB s ) n≥1 by [START_REF] Jacod | Limit theorems for stochastic processes[END_REF]. Convergence of (S n , • 0 V n s ds) n≥1 : We set M n,1 t = t 0 V n s dW s and M n,2 t = t 0 V n s dB s . 
Denote by (X, M 1 , M 2 ) a limit in law for the uniform topology of a subsequence of the tight family ( • 0 V n s ds, M n,1 , M n,2 ) n≥1 . An application of stochastic Fubini theorem, see [START_REF] Veraar | The stochastic Fubini theorem revisited[END_REF], yields t 0 V n s ds = t 0 t-s 0 (K n (u) -K(u))dudY n s + t 0 K(t -s)Y n s ds, t ∈ [0, T ], (5.14) where Y n t = t 0 (s -H-1 2 V 0 Γ(1/2-H) +θ(s)-λV n s )ds+νM n,2 t . Because (Y n ) n≥1 converges in law for the uniform topology to Y = (Y t ) t≤T given by Y t = t 0 (s -H-1 2 V 0 Γ( 1 2 -H) + θ(s))ds -λX t + νM 2 t , we also get the convergence of ( • 0 K(• -s)Y n s ds) n≥1 to • 0 K(• -s)Y s ds. Moreover, for any t ∈ [0, T ], t 0 t-s 0 (K n (u) -K(u))du s -H-1 2 V 0 Γ( 1 2 -H) + θ(s) -λV n s ds is bounded by t 0 |K n (s) -K(s)|ds t 0 (s -H-1 2 V 0 Γ( 1 2 -H) + θ(s))ds + λ t 0 V n s ds , which converges in law for the uniform topology to zero thanks to the convergence of ( • 0 V n s ds) n≥1 together with Proposition 3.3. Finally, E t 0 t-s 0 (K n (u) -K(u))dudM n,2 s 2 ≤ c T 0 (K n (s) -K(s)) 2 dsE t 0 V n s ds , which goes to zero thanks to (5.13) and Proposition 3.3. Hence, by passing to the limit in (5.14), we obtain X t = t 0 K(t -s)Y s ds, for any t ∈ [0, T ], almost surely. The processes being continuous, the equality holds on [0, T ]. Then, by the stochastic Fubini theorem, we deduce that X = • 0 V s ds, where V is a continuous process defined by V t = t 0 K(t -s)dY s = V 0 + t 0 K(t -s)(θ(s) -λV s )ds + ν t 0 K(t -s)dM 2 s . Furthermore because (M n,1 , M n,2 ) is a martingale with bracket • 0 V n s ds 1 ρ ρ 1 , [18, Theorem VI-6.26] implies that (M 1 , M 2 ) is a local martingale with the following bracket • 0 V s ds 1 ρ ρ 1 . By [23, Theorem V-3.9], there exists a two-dimensional Brownian motion ( W , B) with d W , B t = ρdt such that M 1 t = t 0 V s d W s , M 2 t = t 0 V s d B s , t ∈ [0, T ]. In particular V is solution of the fractional stochastic integral equation in Definition 2.1 with σ(x) = ν √ x. Because S n = exp(M n,1 -1 2 • 0 V n s ds), we deduce the convergence of (S n , • 0 V n s ds) n≥1 to the limit point (S, • 0 V s ds) that displays the rough-Heston dynamics of Definition 2.1. The uniqueness of such dynamics, see [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF], enables us to conclude that (S n , V n ) n≥1 admits a unique limit point and hence converges to the rough Heston dynamics. Proof of Proposition 4.3 We use the Lewis Fourier inversion method, see [START_REF] Lewis | A simple option formula for general jump-diffusion and other exponential lévy processes[END_REF], to write C n (k, T ) -C(k, T ) = S 0 e k 2 2π b∈R e -ibk b 2 + 1 4 L(T, 1 2 + ib) -L n (T, 1 2 + ib) db. Hence, |C n (k, T ) -C(k, T )| ≤ S 0 e k 2 2π b∈R 1 b 2 + 1 4 L(T, 1 2 + ib) -L n (T, 1 2 + ib) db. (5.15) Because L(T, z) and L n (T, z) satisfy respectively the formulas (4.1) and (4.3) with g and g n given by g(t) = t 0 K(t-s) V 0 s -H-1 2 Γ(1/2 -H) +θ(s) ds, g n (t) = t 0 K n (t-s) V 0 s -H-1 2 Γ(1/2 -H) +θ(s) ds, and ψ(•, z) and ψ n (•, z) solve respectively (4.2) and (4.4), we use the Fubini theorem to deduce that L(T, z) = exp T 0 ψ(T -s, z) V 0 s -H-1 2 Γ(1/2 -H) + θ(s) ds , (5.16) and L n (T, z) = exp T 0 ψ n (T -s, z) V 0 s -H-1 2 Γ(1/2 -H) + θ(s) ds , (5.17) with z = 1/2 + ib. 
Therefore, relying on the local Lipschitz property of the exponential function, it suffices to find an upper bound for (ψ n (•, z)) in order to get an error for the price of the call from (5.15). This is the object of the next paragraph. Upper bound of (ψ n (•, z)) : We denote by φ n η (•, b) the unique continuous function satisfying the following Riccati Volterra equation φ n η (•, b) = K n * -b + ηφ n η (•, b) + ν 2 2 φ n η (•, b) 2 , with b ≥ 0 and η, ν ∈ R. Proposition 5.5. Fix b 0 , t 0 ≥ 0 and η ∈ R. The functions b → φ n η (t 0 , b) and t → φ n η (t, b 0 ) are non-increasing on R + . Furthermore φ n η (t, b) ≤ 1 -1 + 2bν 2 ( t 0 E n η (s)ds) 2 ν 2 t 0 E n η (s)ds , t > 0, where E n η is the canonical resolvent of K n with parameter η defined in Appendix A. ∆ h φ n η (b 0 , t) = ∆ t K n * F (φ n η (•, b 0 )) (h) + K n * F (∆ h φ n η (•, b 0 )) (t) with F (b, x) = -b + ηx + ν 2 2 x 2 . Notice that t → -∆ t K n * F (φ n η (•, b 0 )) (h) ∈ G K , defined in Appendix C, thanks to Theorem C.1. φ n η (•, b) -∆ h φ n η (•, b ) being solution of the following linear Volterra integral equation with continuous coefficients, x(t) = -∆ t K n * F (b, φ n η (•, b 0 )) (h) + K n * η + ν 2 2 (φ n η (•, b) + ∆ h φ n η (•, b)) x ( φ n η (t, b) = t 0 E n η (t -s)(-b + ν 2 2 φ n η (s, b) 2 ) ≤ t 0 E n η (s)ds -b + ν 2 2 φ n η (t, b) 2 . We end the proof by solving this inequality of second order in φ n η (t, b) and using that φ n η is non-positive. Notice that t 0 E n η (s)ds > 0 for each t > 0, see Corollary C.2. Corollary 5.6. Fix a ∈ [0, 1]. We have, for any t ∈ (0, T ] and b ∈ R, sup n≥1 (ψ n (t, a + ib)) ≤ 1 -1 + (a -a 2 + (1 -ρ 2 )b 2 )ν 2 m(t) 2 ν 2 m(t) where m(t) = inf n≥1 t 0 E n ρνa-λ (s)ds > 0 for all t ∈ (0, T ] and E n η is the canonical resolvent of K n with parameter η defined in Appendix A. χ = K * 1 2 (ρb + ν (ψ n (•, a + ib))) 2 + ρνa -λ + ν 2 2 ( (ψ n (•, a + ib)) + φ η (•, r)) χ , we use Theorem C.1 together with Proposition 5.5 to get, for all t ∈ [0, T ] and b ∈ R, (ψ n (t, a + ib)) ≤ 1 -1 + 2rν 2 ( t 0 E n η (s)ds) 2 ν 2 t 0 E n η (s)ds . ( 5.18) Moreover for any t ∈ [0, T ], t 0 E n η (s)ds converges as n goes to infinity to t 0 E η (s)ds because K n converges to K in L 1 ([0, T ], R), see [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 2.3.1], where E η denotes the canonical resolvent of K with parameter η. Therefore, m(t) = inf n≥1 t 0 E n η (s)ds > 0, for all t ∈ (0, T ], because t 0 E η (s)ds > 0 and t 0 E n η (s)ds > 0 for all n ≥ 1, see Corollary C.2. Finally we end the proof by using (5.18) together with the fact that x → 1- √ 1+2rν 2 x 2 ν 2 x is non-increasing on (0, ∞). End of the proof of Proposition 4.3 : Assume that |ρ| < 1 and fix a = 1/2. By dominated convergence theorem, T 0 1 -1 + (a -a 2 + (1 -ρ 2 )b 2 )ν 2 m(T -s) 2 ν 2 m(T -s) (θ(s) + V 0 s -H-1 2 Γ( 1 2 -H) )ds is equivalent to -|b| 1 -ρ 2 ν T 0 (θ(s) + V 0 s -H-1 2 Γ( A.4. Let x, f ∈ L 1 loc (R + , R) such that x(t) ≤ (λK * x)(t) + f (t), t ≥ 0, a.e. Then, x(t) ≤ f (t) + (λE λ * f )(t), t ≥ 0, a.e. Note that the definition of the resolvent of the second kind and canonical resolvent can be extended for matrix-valued kernels. In that case, Theorem A.3 still holds. Remark A.5. 
The canonical resolvent of the fractional kernel K(t) = t H-1 2 Γ(H+1/2) with param- eter λ is given by t α-1 E α (-λt α ), where E α (x) = B Some existence results for stochastic Volterra equations We collect in this Appendix existence results for general stochastic Volterra equations as introduced in [START_REF] Jaber | Affine Volterra processes[END_REF]. We refer to [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF] for the proofs. We fix T > 0 and consider the ddimensional stochastic Volterra equation X t = g(t) + t 0 K(t -s)b(X s )ds + t 0 K(t -s)σ(X s )dB s , t ∈ [0, T ], (B.1) where b : R d → R d , σ : R d → R d×m are continuous functions with linear growth, K ∈ L 2 ([0, T ], R d×d ) is a kernel admitting a resolvent of the first kind L, g : [0, T ] → R d is a continuous function and B is a m-dimensional Brownian motion on a filtered probability space (Ω, F, F, P). In order to prove the weak existence of continuous solutions to (B.1), the following regularity assumption is needed. Assumption B.1. There exists γ > 0 and C > 0 such that for any t, h ≥ 0 with t + h ≤ T , |g(t + h) -g(t)| 2 + h 0 |K(s)| 2 ds + T -h 0 |K(h + s) -K(s)| 2 ds ≤ Ch 2γ . The following existence result can be found in [ E[|X t | p ] < ∞, p > 0, (B.2) and admits Hölder continuous paths on [0, T ] of any order strictly less than γ. In particular, for the fractional kernel, Proposition B.1 yields the following result. Corollary B.2. Fix H ∈ (0, 1/2) and θ : [0, T ] → R satisfying ∀ε > 0, ∃C ε > 0; ∀u ∈ (0, T ] |θ(u)| ≤ C ε u -1 2 -ε . The fractional stochastic integral equation X t = X 0 + 1 Γ(H + 1/2) t 0 (t -u) H-1 2 (θ(u) + b(X u ))du + 1 Γ(H + 1/2) t 0 (t -u) H-1 2 σ(X u )dB u , admits a weak continuous solution X = (X t ) t≤T for any X 0 ∈ R. Moreover X satisfies (B.2) and admits Hölder continuous paths on [0, T ] of any order strictly less than H. Proof. It is enough to notice that the fractional stochastic integral equation is a particular case of (B.1) with d = m = 1, K(t) = t H-1 2 Γ(H+1/2) the fractional kernel, which admits a resolvent of the first kind, see Section A.2, and g(t) = X 0 + 1 Γ(1/2 + H) t 0 (t -u) H-1/2 θ(u)du. As t → t 1/2+ε θ(t) is bounded on [0, T ], we may show that g is H -ε Hölder continuous for any ε > 0. Hence, Assumption B.1 is satisfied and the claimed result is directly obtained from Proposition B.1. We now establish the strong existence and uniqueness of (B.1) in the particular case of smooth kernels. This is done by extending the Yamada-Watanabe pathwise uniqueness proof in [START_REF] Yamada | On the uniqueness of solutions of stochastic differential equations[END_REF]. Proposition B.3. Fix m = d = 1 and assume that g is Hölder continuous, K ∈ C 1 ([0, T ], R) admitting a resolvent of the first kind and that there exists C > 0 and η ∈ [1/2, 1] such that for any x, y ∈ R, |b(x) -b(y)| ≤ C|x -y|, |σ(x) -σ(y)| ≤ C|x -y| η . Then, the stochastic Volterra equation (B.1) admits a unique strong continuous solution. Proof. We start by noticing that, K being smooth, it satisfies Assumption B.1. Hence, the existence of a weak continuous solution to (B.1) follows from Proposition B.1. It is therefore enough to show the pathwise uniqueness. 
We may proceed similarly to [START_REF] Yamada | On the uniqueness of solutions of stochastic differential equations[END_REF] by considering a 0 = 1, a k-1 > a k for k ≥ 1 with a k-1 a k x -2η dx = k and ϕ k ∈ C 2 (R, R) such that ϕ k (x) = ϕ k (-x), ϕ k (0) = 0 and for x > 0 • ϕ k (x) = 0 for x ≤ a k , ϕ k (x) = 1 for x ≥ a k-1 and ϕ k (x) ∈ [0, 1] for a k < x < a k-1 . • ϕ k (x) ∈ [0, 2 k x -2η ] for a k < x < a k-1 . Let X 1 and X 2 be two solutions of (B.1) driven by the same Brownian motion B. Notice that, thanks to the smoothness of K, X i -g are semimartingales and for i = 1, 2 d(X i t -g(t)) = K(0)dY i t + (K * dY i ) t dt, with Y i t = t 0 b(X i s )ds + t 0 σ(X i s )dB s . Using Itô's formula, we write ϕ k (X 2 t -X 1 t ) = I 1 t + I 2 t + I 3 t , where I 1 t = K(0) t 0 ϕ k (X 2 s -X 1 s )d(Y 1 s -Y 2 s ), I 2 t = t 0 ϕ k (X 2 s -X 1 s )(K * d(Y 1 -Y 2 )) s ds, I 3 t = K(0) 2 2 t 0 ϕ k (X 2 s -X 1 s )(σ(X 2 s ) -σ(X 1 s )) 2 ds. Recalling that sup t≤T E[(X i t ) 2 ] < ∞ for i = 1, 2 from Proposition B.1, we obtain that E[I 1 t ] ≤ E[K(0) t 0 |b(X 2 s ) -b(X 1 s )|ds] ≤ c t 0 E[|X 2 s -X 1 s |]ds, and E[I 2 t ] ≤ c t 0 E[(|K | * |b(X 2 ) -b(X 1 )|) s ]ds ≤ c t 0 E[|X 2 s -X 1 s |] E[I 3 t ] ≤ c k , which goes to zero when k is large. Moreover E[ϕ k (X 2 t -X 1 t )] converges to E[|X 2 t -X 1 t |] when k tends to infinity, thanks to the monotone convergence theorem. Thus, we pass to the limit and obtain E[|X 2 t -X 1 t |] ≤ c t 0 E[|X 2 s -X 1 s |]ds. Grönwall's lemma leads to E[|X 2 t -X 1 t |] = 0 yielding the claimed pathwise uniqueness. Under additional conditions on g and K one can obtain the existence of non-negative solutions to (B.1) in the case of d = m = 1. As in [2, Theorem 3.5], the following assumption is needed. Assumption B.2. We assume that K ∈ L 2 ([0, T ], R) is non-negative, non-increasing and continuous on (0, T ]. We also assume that its resolvent of the first kind L is non-negative and non-increasing in the sense that 0 ≤ L([s, s + t]) ≤ L([0, t]) for all s, t ≥ 0 with s + t ≤ T . admits a unique continuous solution χ. Furthermore if g ∈ G K and w is non-negative, then χ is non-negative and ∆ t 0 χ = g t 0 + K * (∆ t 0 z∆ t 0 χ + ∆ t 0 w) with g t 0 (t) = ∆ t 0 g(t) + (∆ t K * (zχ + w))(t 0 ) ∈ G K , for all for t 0 , t ≥ 0. Proof. The existence and uniqueness of such solution in χ ∈ L 1 loc (R + , R) is obtained from [2, Lemma C.1]. Because χ is solution of (C.1), it is enough to show the local boundedness of χ to get its continuity. This follows from Grönwall's Lemma A.4 applied on the following inequality |χ(t)| ≤ g ∞,T + (K * ( z ∞,T |χ|(.) + w ∞,T )) (t), for any t ∈ [0, T ] and for a fixed T > 0. We assume now that g ∈ G K and w is non-negative. The fact that g t 0 ∈ G K , for t 0 ≥ 0, is proved by adapting the computations of the proof of [1, Theorem 3.1] with ν = 0 provided that χ is non-negative. In order to establish the non-negativity of χ, we introduce, for each ε > 0, χ ε as the unique continuous solution of χ ε = g + K * (zχ ε + w + ε) . (C.2) It is enough to prove that χ ε is non-negative, for every ε > 0, and that (χ ε ) ε>0 converges uniformly on every compact to χ as ε goes to zero. Positivity of χ ε : It is easy to see that χ ε is non-negative on a neighborhood of zero because, for small t, χ ε (t) = g(t) + (z(0)g(0) + w(0) + ε) t 0 K(s)ds + o( t 0 K(s)ds), as χ, z and w are continuous functions. Hence, t 0 = inf{t > 0; χ ε (t) < 0} is positive. If we assume that t 0 < ∞, we get χ ε (t 0 ) = 0 by continuity of χ ε . 
χ ε being the solution of (C.2), we have ∆ t 0 χ ε = g t 0 ,ε + K * (∆ t 0 z∆ t 0 χ ε + ∆ t 0 w + ε), with g t 0 ,ε (t) = ∆ t 0 g(t)+(∆ t K * (zχ ε +w +ε))(t 0 ). Then, by using Lemma A.1 with F = ∆ t K, we obtain g t 0 ,ε (t) = ∆ t 0 g(t) -(d(∆ t K * L) * g)(t 0 ) -(∆ t K * L)(0)g(t 0 ) + (d(∆ t K * L) * χ ε )(t 0 ) + (∆ t K * L)(0)χ ε (t 0 ), which is continuous and non-negative, because g ∈ G K and ∆ t K * L is non-decreasing for any t ≥ 0, see Remark A.2. Hence, in the same way, ∆ t 0 χ ε is non-negative on a neighborhood of zero. Thus t 0 = ∞, which means that χ ε is non-negative. Uniform convergence of χ ε : We use the following inequality |χ -χ ε |(t) ≤ (K * ( z ∞,T |χ -χ ε | + ε)) (t), t ∈ [0, T ], together with the Gronwall Lemma A.4 to show the uniform convergence on [0, T ] of χ ε to χ as ε goes to zero. In particular, χ is also non-negative. Corollary C.2. Let K ∈ L 2 loc (R + , R) satisfying Assumption B.2 and define E λ as the canonical resolvent of K with parameter λ ∈ R -{0}. Then, t → t 0 E λ (s)ds is non-negative and non-decreasing on R + . Furthermore t 0 E λ (s)ds is positive, if K does not vanish on [0, t] Proof. The non-negativity of χ = • 0 E λ (s)ds is obtained from Theorem C.1 and from the fact that χ is solution of the following linear Volterra equation χ = K * (λχ + 1), by Theorem A.3. For fixed t 0 > 0, ∆ t 0 χ satisfies ∆ t 0 χ = g t 0 + K * (λ∆ t 0 χ + 1), with g t 0 (t) = ∆ t K * (λ∆ t 0 χ + 1) (t 0 ) ∈ G K , see Theorem C.1. It follows that ∆ t 0 χ -χ solves x = g t 0 + K * (λx). Hence, another application of Theorem C.1 yields that χ ≤ ∆ t 0 χ, proving that t → As done in the proof of Theorem C.1, ψ ε converges uniformly on every compact to ψ as ε goes to zero. Thus, it is enough to show that, for every ε > 0 and t ≥ 0, |h(t)| ≤ ψ ε (t). for small t, thanks to the continuity of z, w, h, φ h , φ ψε and ψ ε . In both cases, we obtain that |h| ≤ ψ ε on a neighborhood of t 0 . Therefore t 0 = ∞ and for any t ≥ 0 |h(t)| ≤ ψ ε (t). The following result is a direct consequence of Theorems C. Theorem 4 . 1 . 41 There exists a positive constant C such that, for any a ∈ [0, 1], b ∈ R and n ≥ 1, sup t∈[0,T ] |ψ n (t, a + ib) -ψ(t, a + ib)| ≤ C(1 + b 4 ) T 0 |K n (s) -K(s)|ds, where ψ(•, a + ib) (resp. ψ n (•, a + ib)) denotes the unique continuous solution of the Riccati Volterra equation (4.2) (resp. (4.4)).Relying on the L 1 -convergence of (K n ) n≥1 to K under Assumption 3.1, see Proposition 3.3, we have the uniform convergence of (ψ n (•, z)) n≥1 to ψ(•, z) on [0, T ]. Hence, Theorem 4.1 suggests a new numerical method for the computation of the fractional Riccati solution (4.2) Figure 1 : 1 Figure 1: The relative error ψ n (T,ib)-ψ(T,ib) ψ(T,ib) as a function of b under (4.6) and for different numbers of factors n with T = 1. Figure 2 : 2 Figure 2: Implied volatility σ n (k, T ) as a function of the log-moneyness k under (4.6) and for different numbers of factors n with T = 1. Figure 3 : 3 Figure 3: The relative error ψ n (T,ib)-ψ(T,ib) ψ(T,ib) as a function of b under (4.7) and for different numbers of factors n with T = 1. Figure 4 : 4 Figure 4: Implied volatility σ n (k, T ) as a function of the log-moneyness k under (4.7) and for different numbers of factors n with T = 1. Figure 5 : 5 Figure 5: The relative error ψ n (T,ib)-ψ(T,ib) ψ(T,ib) as a function of b under (4.8) and for different numbers of factors n with T = 1. 
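The relative errors plotted in these figures come from comparing the fractional Riccati solution ψ with its multi-factor counterparts ψ_n, in line with the bound of Theorem 4.1. The sketch below indicates how such a ψ can be obtained by a basic explicit quadrature scheme for a Riccati Volterra equation ψ = K * F(ψ) with the fractional kernel. The quadratic right-hand side F(ψ) = c0 + c1 ψ + c2 ψ² and the coefficient values are placeholders: the precise form of (4.2) is not reproduced in this excerpt, so this is only a structural illustration.

```python
import numpy as np
from scipy.special import gamma

def solve_riccati_volterra(c0, c1, c2, H=0.1, T=1.0, n_steps=1000):
    """Explicit left-point scheme for psi(t) = int_0^t K(t-s) F(psi(s)) ds with
    K(u) = u^(H-1/2)/Gamma(H+1/2) and a quadratic F(x) = c0 + c1*x + c2*x**2.
    Complex coefficients are allowed, as in characteristic-function computations."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    K = lambda u: u ** (H - 0.5) / gamma(H + 0.5)
    F = lambda x: c0 + c1 * x + c2 * x ** 2
    psi = np.zeros(n_steps + 1, dtype=complex)
    for i in range(1, n_steps + 1):
        weights = K(t[i] - t[:i])                 # quadrature weights for the convolution
        psi[i] = np.sum(weights * F(psi[:i])) * dt
    return t, psi

if __name__ == "__main__":
    # placeholder coefficients; in the rough Heston case they would be built from (a, rho, nu, lambda)
    t, psi = solve_riccati_volterra(c0=-0.5, c1=-0.3, c2=0.02)
    print(psi[-1])
```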
Figure 6 : 6 Figure 6: Implied volatility σ n (k, T ) as a function of the log-moneyness k under (4.8) and for different numbers of factors n with T = 1. is the canonical resolvent of |K n | 2 with parameter c, defined in Appendix A.3, and the last inequality follows from the fact that • 0 E n c (s)ds is non-decreasing by Corollary C.2. The convergence of |K n | 2 to |K| 2 in L 1 ([0, T ], R) implies the convergence of E n c to the canonical resolvent of |K| 2 with parameter c in L 1 ([0, T ], R), see [16, Theorem 2.3.1]. Thus, T 0 E n c (s)ds is uniformly bounded in n, yielding (5.1). Lemma 5 . 1 ( 51 Convergence of g n ). Define g n : [0, T ] → R and g : [0, T ] → R respectively by (3.1) and (3.3) such that θ : [0, T ] → R satisfies (2.1). Under assumption (3.1), we have for any t ∈ [0, T ] 5 2 5 -ε , yielding (5.10). Similarly, we obtain(5.11) by showing that t 0 Thanks to Proposition 5.4, we get the existence of a positive constant C such that sup n≥1 sup t∈[0,T ] 3 . 3 Proof. The claimed monotonicity of b → φ n η (t 0 , b) is directly obtained from Theorem C.1. Consider now h, b 0 > 0. It is easy to see that ∆ h φ n η (•, b 0 ) solves the following Volterra equation t), we deduce its non-negativity using again Theorem C.1. Thus, t ∈ R + → φ n η (t, b 0 ) is nonincreasing and consequently sup s∈[0,t] |φ η (s, b)| = |φ n η (t, b 0 )| as φ n η (0, b) = 0. Hence, Theorem A.3 leads to 3 . 3 Proof. Let r = a -a 2 + (1 -ρ 2 )b 2 and η = ρνa -λ. φ n η (•, r) -(ψ n (•, a + ib))being solution of the following linear Volterra equation with continuous coefficients k+1)) is the Mittag-Leffler function and α = H + 1/2 for H ∈ (0, 1/2). ds, because b is Lipschitz continuous and K is bounded on [0, T ]. Finally by definition of ϕ k and the η-Hölder continuity of σ, we have Theorem C. 1 . 1 Let K ∈ L 2 loc (R + , R) satisfying Assumption B.2 and g, z, w : R + → R be continuous functions. The linear Volterra equation χ = g + K * (zχ + w) (C.1) t 0 E 0 λ (s)ds is non-decreasing. We now provide a version of Theorem C.1 for complex valued solutions. Theorem C.3. Let z, w : R + → C be continuous functions and h 0 ∈ C. The following linear Volterra equation h = h 0 + K * (zh + w) admits unique continuous solution h : R + → C such that |h(t)| ≤ ψ(t), t ≥ 0, where ψ : R + → R is the unique continuous solution of ψ = |h 0 | + K * ( (z)ψ + |w|). Proof. The existence and uniqueness of a continuous solution is obtained in the same way as in the proof of Theorem C.1. Consider now, for each ε > 0, ψ ε the unique continuous solution of ψ ε = |h 0 | + K * ( (z)ψ + |w| + ε). 1 and C. 3 .Corollary C. 4 . 0 E 340 Let h 0 ∈ C and z, w : R + → C be continuous functions such that (z) ≤ λ for some λ ∈ R. We define h : R + → C as the unique continuous solution ofh = h 0 + K * (zh + w).Then, for any t ∈ [0, T ],|h(t)| ≤ |h 0 | + ( w ∞,T + λ|h 0 |) T 0 E λ (s)ds,where E λ is the canonical resolvent of K with parameter λ.Proof. From Theorem C.3, we obtain that |h| ≤ ψ 1 , where ψ 1 is the unique continuous solution ofψ 1 = |h 0 | + K * ( (z)ψ 1 + |w|).Moreover define ψ 2 as the unique continuous solution ofψ 2 = |h 0 | + K * (λψ 2 + w ∞,T ).Then,ψ 2 -ψ 1 solves χ = K * (λχ + f ), with f = (λ -(z))ψ 1 + w ∞,T -w, which is a non-negative function on [0, T ]. Theorem C.1 now yields |h| ≤ ψ 1 ≤ ψ 2 .Finally, the claimed bound follows by noticing that, for t ∈ [0, T ],ψ 2 (t) = |h 0 | + ( w ∞,T + λ|h 0 |) t λ (s)ds, by Theorem A.3 and that • 0 E λ (s)ds is non-decreasing by Corollary C.2. 
, z 2 ∈ C such that (z 1 ), (z 2 ) ≤ c, |e z 1 -e z 2 | ≤ e c |z 1 -z 2 |, Theorem A.3. Let f ∈ L 1 loc (R + , R). The integral equation x = f + λK * x admits a unique solution x ∈ L 1 loc (R + , R) given by x = f + λE λ * f.When K and λ are positive, E λ is also positive, see[START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF] Proposition 9.8.1]. In that case, we have a Grönwall type inequality given by [16, Lemma 9.8.2]. Lemma 1 2 -H) )ds, as b tends to infinity. Hence, thanks to Corollary 5.6, there exists C > 0 such that for any b ∈ R sup (ψ n (t, a + ib)) ≤ C(1 -|b|). (5.19) n≥1 Recalling that ∀z 1 we obtain |L n (a+ib, T )-L(a+ib, T )| ≤ e C(1-|b|) sup t∈[0,T ] |ψ n (t, a+ib)-ψ(t, a+ib)| 0 T (θ(s)+V 0 s -H-1 2 Γ( 1 2 -H) )ds, from (5.16), (5.17) and (5.19). We deduce Proposition 4.3 thanks to (5.15) and Theorem 4.1 together with the fact that b∈R b 4 +1 b 2 + 1 Proposition B.1. Under Assumption B.1, the stochastic Volterra equation (B.1) admits a weak continuous solution X = (X t ) t≤T . Moreover X satisfies sup t∈[0,T ] 1, Theorem A.1]. Theorem B.4 is used here with the smoothed kernel K n given by (3.4) together with b(x) = -λx and g defined as in(3.1) Note that Theorem B.4 is used here for the smoothed kernel K n , b(x) = -λx and g n defined by (4.9). In fact, V n 0 = 0 while V0 may be positive. e C(1-|b|) db < ∞. Acknowledgments We thank Bruno Bouchard, Christa Cuchiero, Philipp Harms and Mathieu Rosenbaum for many interesting discussions. Omar El Euch is thankful for the support of the Research Initiative "Modélisation des marchés actions, obligations et dérivés", financed by HSBC France, under the aegis of the Europlace Institute of Finance. Appendix A Stochastic convolutions and resolvents We recall in this Appendix the framework and notations introduced in [START_REF] Jaber | Affine Volterra processes[END_REF]. A.1 Convolution notation For a measurable function K on R + and a measure L on R + of locally bounded variation, the convolutions K * L and L * K are defined by L(ds)K(t -s) whenever these expressions are well-defined. If F is a function on R + , we write K * F = K * (F dt), that is We can show that L * F is almost everywhere well-defined and belongs to L p loc (R + , R), whenever Finally from Lemma 2.4 in [START_REF] Jaber | Affine Volterra processes[END_REF] together with the Kolmogorov continuity theorem, we can show that there exists a unique version of (K * dM t ) t≥0 that is continuous whenever b and σ are locally bounded. In this paper, we will always work with this continuous version. Note that the convolution notation could be easily extended for matrix-valued K and L. In this case, the associativity properties exposed above hold. A.2 Resolvent of the first kind We define the resolvent of the first kind of a d × d-matrix valued kernel K, as the R d×d -valued measure L on R + of locally bounded variation such that where id stands for the identity matrix, see [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Definition 5.5.1]. The resolvent of the first kind does not always exist. In the case of the fractional kernel Γ(H+1/2) the resolvent of the first kind exists and is given by for any H ∈ (0, 1/2). If K is non-negative, non-increasing and not identically equal to zero on R + , the existence of a resolvent of the first kind is guaranteed by [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 5.5.5]. 
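The defining property K * L = 1 of the resolvent of the first kind can be verified numerically for the fractional kernel, whose resolvent L(dt) = t^(-H-1/2)/Γ(1/2-H) dt is recalled above. In the sketch below the convolution integral is reduced, via the substitution s = tu, to a Beta integral that no longer depends on t; the values of H used are arbitrary and the computation is only a sanity check of the stated formulas.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, gamma

def K_star_L(H):
    """(K * L)(t) for K(u) = u^(H-1/2)/Gamma(H+1/2) and L(ds) = s^(-H-1/2)/Gamma(1/2-H) ds.
    After s = t*u the t-dependence cancels and the convolution becomes a Beta integral,
    so the returned value should equal 1 for every t."""
    # integrate u^(-H-1/2) * (1-u)^(H-1/2) over (0, 1) using quad's algebraic weight
    val, _ = quad(lambda u: 1.0, 0.0, 1.0, weight="alg", wvar=(-H - 0.5, H - 0.5))
    return val / (gamma(H + 0.5) * gamma(0.5 - H))

if __name__ == "__main__":
    for H in (0.05, 0.1, 0.3):
        closed_form = beta(0.5 - H, 0.5 + H) / (gamma(H + 0.5) * gamma(0.5 - H))  # equals 1
        print(H, K_star_L(H), closed_form)
```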
The following result shown in [2, Lemma 2.6], is stated here for d = 1 but is true for any dimension d ≥ 1. ) such that F * L is right-continuous and of locally bounded variation one has Here, df denotes the measure such that f (t) = f (0) + [0,t] df (s), for all t ≥ 0, for any right-continuous function of locally bounde variation f on R + . Remark A.2. The previous lemma will be used with In particular, ∆ h K * L is of locally bounded variation. A.3 Resolvent of the second kind We consider a kernel K ∈ L 1 loc (R + , R) and define the resolvent of the second kind of K as the unique function For λ ∈ R, we define the canonical resolvent of K with parameter λ as the unique solution This means that E λ = -R -λK /λ, when λ = 0 and E 0 = K. The existence and uniqueness of R K and E λ is ensured by [16, Theorem 2.3.1] together with the continuity of We recall [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 2.3.5] regarding the existence and uniqueness of a solution of linear Volterra integral equations in L 1 loc (R + , R). In [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF], the proof of [2, Theorem 3.5] is adapted to prove the existence of a non-negative solution for a wide class of admissible input curves g satisfying 6 We therefore define the following set of admissible input curves Remark B.5. Note that any locally square-integrable completely monotone kernel 7 that is not identically zero satisfies Assumption B.2, see [START_REF] Jaber | Affine Volterra processes[END_REF]Example 3.6]. In particular, this is the case for , with H ∈ (0, 1/2). • any weighted sum of exponentials K(t) = n i=1 c i e -γ i t such that c i , γ i ≥ 0 for all i ∈ {1, . . . , n} and c i > 0 for some i. Remark B.6. Theorem B.4 will be used with functions g of the following form where ξ is a non-negative measure of locally bounded variation and c is a non-negative constant. In that case, we may show that (B.3) is satisfied, under Assumption B.2. C Linear Volterra equation with continuous coefficients In this section, we consider K ∈ L 2 loc (R + , R) satisfying Assumption B.2 with T = ∞ and recall the definition of G K , that is We denote by . ∞,T the usual uniform norm on [0, T ], for each T > 0. 6 Under Assumption B.2 one can show that ∆ h K * L is non-increasing and right-continuous thanks to Remark A.2 so that the associated measure d(∆ h K * L) is well-defined. 7 A kernel K ∈ L 2 loc (R+, R) is said to be completely monotone, if it is infinitely differentiable on (0, ∞) such that (-1) j K (j) (t) ≥ 0 for any t > 0 and j ≥ 0. We start by showing the inequality in a neighborhood of zero. Because z, h, w and ψ ε are continuous, we get, taking h 0 = 0, for small t. Hence, |h| ≤ ψ ε on a neighborhood of zero. This result still holds when h 0 is not zero. Indeed in that case, it is easy to show that for t going to zero, and As |h 0 | is now positive, we conclude that |h| ≤ ψ ε on a neighborhood of zero by the Cauchy-Schwarz inequality. Hence, t 0 = inf{t > 0; ψ ε (t) < |h(t)|} is positive. If we assume that t 0 < ∞, we would get that |h(t 0 )| = ψ ε (t 0 ) by continuity of h and ψ ε . Moreover, An application of Lemma A.1 with F = ∆ t K for t > 0, yields Relying on the fact that d(∆ t K * L) is a non-negative measure and ∆ t K * L ≤ 1, by Remark A.2, together with the fact that |h(s)| ≤ ψ ε (s) for s ≤ t 0 , we get that |φ h (t)| ≤ φ ψε (t). We now notice that in the case h(t 0 ) = 0, we have
79,624
[ "996885", "1027310" ]
[ "60", "300340" ]
01761092
en
[ "chim", "phys" ]
2024/03/05 22:32:13
2018
https://hal.sorbonne-universite.fr/hal-01761092/file/RSDH.pdf
Cairedine Kalai, Julien Toulouse (*Electronic address: [email protected])

A general range-separated double-hybrid density-functional theory

A range-separated double-hybrid (RSDH) scheme which generalizes the usual range-separated hybrids and double hybrids is developed. This scheme consistently uses a two-parameter Coulomb-attenuating-method (CAM)-like decomposition of the electron-electron interaction for both exchange and correlation in order to combine Hartree-Fock exchange and second-order Møller-Plesset (MP2) correlation with a density functional. The RSDH scheme relies on an exact theory which is presented in some detail. Several semi-local approximations are developed for the short-range exchange-correlation density functional involved in this scheme. After finding optimal values for the two parameters of the CAM-like decomposition, the RSDH scheme is shown to have a relatively small basis dependence and to provide atomization energies, reaction barrier heights, and weak intermolecular interactions globally more accurate than or comparable to those of range-separated MP2 or standard MP2. The RSDH scheme represents a new family of double hybrids with minimal empiricism which could be useful for general chemical applications.

I. INTRODUCTION

Over the past two decades, density-functional theory (DFT) [1] within the Kohn-Sham (KS) scheme [2] has been a method of choice for studying ground-state properties of electronic systems. KS DFT is formally exact, but it involves the so-called exchange-correlation energy functional whose explicit form in terms of the electron density is still unknown. Hence, families of approximations to this quantity have been developed: semi-local approximations (the local-density approximation (LDA), generalized-gradient approximations (GGAs), and meta-GGAs), hybrid approximations, and approximations depending on virtual orbitals (see, e.g., Ref. 3 for a recent review). This last family of approximations includes approaches combining semi-local density-functional approximations (DFAs) with Hartree-Fock (HF) exchange and second-order Møller-Plesset (MP2) correlation, based on either a range separation or a linear separation of the electron-electron interaction. In the range-separated hybrid (RSH) variant, the Coulomb electron-electron interaction w ee (r 12 ) = 1/r 12 is decomposed as [4, 5]

w ee (r 12 ) = w lr,µ ee (r 12 ) + w sr,µ ee (r 12 ), (1)

where w lr,µ ee (r 12 ) = erf(µr 12 )/r 12 is a long-range interaction (written with the error function erf) and w sr,µ ee (r 12 ) = erfc(µr 12 )/r 12 is the complementary short-range interaction (written with the complementary error function erfc), the decomposition being controlled by the parameter µ (0 ≤ µ < ∞). HF exchange and MP2 correlation can then be used for the long-range part of the energy, while a semi-local exchange-correlation DFA is used for the complementary short-range part, resulting in a method that is denoted by RSH+MP2 [6]. Among the main advantages of such an approach are the explicit description of van der Waals dispersion interactions via the long-range MP2 part (see, e.g., Ref. 7) and the fast (exponential) convergence of the long-range MP2 correlation energy with respect to the size of the one-electron basis set [8].
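As a small numerical check of the decomposition in Eq. (1), the following sketch (an illustration added here, not part of the original text) evaluates the long-range and short-range interactions on a grid of interelectronic distances and verifies that they sum to the Coulomb interaction for any value of the range-separation parameter µ.

```python
import numpy as np
from scipy.special import erf, erfc

def w_lr(r12, mu):
    """Long-range interaction erf(mu*r12)/r12 (atomic units)."""
    return erf(mu * r12) / r12

def w_sr(r12, mu):
    """Complementary short-range interaction erfc(mu*r12)/r12."""
    return erfc(mu * r12) / r12

if __name__ == "__main__":
    r12 = np.linspace(0.1, 10.0, 200)
    for mu in (0.3, 0.5, 1.0):
        # the two pieces always recombine into the full Coulomb interaction 1/r12
        assert np.allclose(w_lr(r12, mu) + w_sr(r12, mu), 1.0 / r12)
    print("erf/erfc split sums to the Coulomb interaction for all tested mu")
```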
On the other hand, the short-range exchangecorrelation DFAs used still exhibit significant errors, such as self-interaction errors [9], limiting the accuracy for the calculations of atomization energies or non-covalent electrostatic interactions for example. Similarly, the double-hybrid (DH) variant [10] (see Ref. 11 for a review) for combining MP2 and a semi-local DFA can be considered as corresponding to a linear separation of the Coulomb electronelectron interaction [12] w ee (r 12 ) = λw ee (r 12 ) + (1 -λ)w ee (r 12 ), (2) where λ is a parameter (0 ≤ λ ≤ 1). If HF exchange and MP2 correlation is used for the part of the energy associated with the interaction λw ee (r 12 ) and a semi-local exchange-correlation DFA is used for the complementary part, then a one-parameter version of the DH approximations is obtained [12]. One of the main advantages of the DH approximations is their quite efficient reduction of the self-interaction error [13] thanks to their large fraction of HF exchange (λ ≈ 0.5 or more). On the other hand, they inherit (a fraction of) the slow (polynomial) basis convergence of standard MP2 [14], and they are insufficiently accurate for the description of van der Waals dispersion interactions and need the addition of semiempirical dispersion corrections [15]. In this work, we consider range-separated double-hybrid (RSDH) [START_REF]The term "range-separated double hybrid (RSDH)" has already been used in Ref. 92[END_REF] approximations which combine the two above-mentioned approaches, based on the following decomposition of the Coulomb electron-electron interaction w ee (r 12 ) = w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ) +(1 -λ)w sr,µ ee (r 12 ), where, again, the energy corresponding to the first part of the interaction (in square brackets) is calculated with HF exchange and MP2 correlation, and the complementary part is treated by a semi-local exchange-correlation DFA. The expected features of such an approach are the explicit description of van der Waals dispersion interactions through the long-range part, and reduced self-interaction errors in the short-range part (and thus improved calculations of properties such as atomization energies). The decomposition of Eq. ( 3) is in fact a special case of the decomposition used in the Coulombattenuating method (CAM) [START_REF] Yanai | [END_REF] w ee (r 12 ) = (α + β)w lr,µ ee (r 12 ) + αw sr,µ ee (r 12 ) + (1 -α -β)w lr,µ ee (r 12 ) + (1 -α)w sr,µ ee (r 12 ) , (4) with the parameters α + β = 1 and α = λ. The choice α+β = 1 is known to be appropriate for Rydberg and charge-transfer excitation energies [18] and for reaction-barrier heights [19]. We also expect it to be appropriate for the description of longrange van der Waals dispersion interactions. It should be noted that the CAM decomposition has been introduced in Ref. 17 at the exchange level only, i.e. for combining HF exchange with a semilocal exchange DFA without modifying the semilocal correlation DFA. Only recently, Cornaton and Fromager [20] (see also Ref. 21) pointed out the possibility of a CAM double-hybrid approximation but remarked that the inclusion of a fraction of short-range electron-electron interaction in the MP2 part would limit the basis convergence, and preferred to develop an alternative approach which uses only the perturbation expansion of a longrange interacting wave function. Despite the expected slower basis convergence of DH approximations based on the decomposition in Eq. (3) [or in Eq. 
( 4)] in comparison to the RSH+MP2 method based on the decomposition in Eq. ( 1), we still believe it worthwhile to explore this kind of DH approximations in light of the above-mentioned expected advantages. In fact, we will show that the basis convergence of the RSDH approximations is relatively fast, owing to the inclusion of a modest fraction of short-range MP2 correlation. The decomposition in Eq. ( 3) has been used several times at the exchange level [22][23][24][25][26][27][28][29][30][31][32][33]. A few DH approximations including either long-range exchange or long-range correlation terms have been proposed. The ωB97X-2 approximation [34] adds a full-range MP2 correlation term to a hybrid approximation including long-range HF exchange. The B2-P3LYP approximation [35] and the lrc-XYG3 approximation [36] add a long-range MP2 correlation correction to a standard DH approximation including full-range HF exchange. Only in Ref. 37 the decomposition in Eq. ( 3) was consistently used at the exchange and correlation level, combining a pair coupled-cluster doubles approximation with a semi-local exchangecorrelation DFA, in the goal of describing static correlation. However, the formulation of the exact theory based on the decomposition in Eq. ( 3), as well as the performance of the MP2 and semi-local DFAs in this context, have not been explored. This is what we undertake in the present work. The paper is organized as follows. In Section II, the theory underlying the RSDH approximations is presented, and approximations for the corresponding short-range correlation density functional are developed. Computational details are given in Section III. In Section IV, we give and discuss the results, concerning the optimization of the parameters µ and λ on small sets of atomization energies (AE6 set) and reaction barrier heights (BH6 set), the study of the basis convergence, and the tests on large sets of atomization energies (AE49 set), reaction barrier heights (DBH24 set), and weak intermolecular interactions (S22 set). Section V contains conclusions and future work prospects. Finally, the Appendix contains the derivation of the uniform coordinate scaling relation and the Coulomb/high-density and shortrange/low-density limits of the short-range correlation density functional involved in this work. Unless otherwise specified, Hartree atomic units are tacitly assumed throughout this work. II. RANGE-SEPARATED DOUBLE-HYBRID DENSITY-FUNCTIONAL THEORY A. Exact theory The derivation of the RSDH density-functional theory starts from the universal density functional [38], F [n] = min Ψ→n Ψ| T + Ŵee |Ψ , ( 5 ) where T is the kinetic-energy operator, Ŵee the Coulomb electron-electron repulsion operator, and the minimization is done over normalized antisymmetric multideterminant wave functions Ψ giving a fixed density n. The universal density functional is then decomposed as F [n] = F µ,λ [n] + Ēsr,µ,λ Hxc [n], (6) where F µ,λ [n] is defined as F µ,λ [n] = min Ψ→n Ψ| T + Ŵ lr,µ ee + λ Ŵ sr,µ ee |Ψ . (7) In Eq. ( 7), Ŵ lr,µ ee is the long-range electron-electron repulsion operator and λ Ŵ sr,µ ee is the short-range electron-electron repulsion operator scaled by the constant λ, with expressions: Ŵ lr,µ ee = 1 2 w lr,µ ee (r 12 )n 2 (r 1 , r 2 )dr 1 dr 2 , (8) Ŵ sr,µ ee = 1 2 w sr,µ ee (r 12 )n 2 (r 1 , r 2 )dr 1 dr 2 , (9) where n2 (r 1 , r 2 ) = n(r 1 )n(r 2 ) -δ(r 1r 2 )n(r 1 ) is the pair-density operator, written with the density operator n(r). 
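The wave-function part of the theory is built on the modified interaction w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ) appearing in Eq. (7). The short sketch below (added here for illustration) constructs this two-parameter interaction and checks the limiting cases discussed in the text: λ = 1 or µ → ∞ recovers the full Coulomb interaction, while λ = 0 leaves only the long-range part.

```python
import numpy as np
from scipy.special import erf, erfc

def w_rsdh(r12, mu, lam):
    """Two-parameter interaction w_ee^{lr,mu}(r12) + lam * w_ee^{sr,mu}(r12)."""
    return (erf(mu * r12) + lam * erfc(mu * r12)) / r12

if __name__ == "__main__":
    r12 = np.linspace(0.1, 8.0, 100)
    mu = 0.5
    # lam = 1 gives back the Coulomb interaction 1/r12 ...
    assert np.allclose(w_rsdh(r12, mu, 1.0), 1.0 / r12)
    # ... and so does a very large mu, for any lam
    assert np.allclose(w_rsdh(r12, 1e6, 0.3), 1.0 / r12)
    # lam = 0 keeps only the long-range erf part (the RSH+MP2 limit)
    assert np.allclose(w_rsdh(r12, mu, 0.0), erf(mu * r12) / r12)
    print("limiting cases of the two-parameter RSDH interaction verified")
```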
Equation ( 6) defines the complement short-range Hartree-exchange-correlation density functional Ēsr,µ,λ Hxc [n] depending on the two parameters µ and λ. It can itself be decomposed as Ēsr,µ,λ Hxc [n] = E sr,µ,λ H [n] + Ēsr,µ,λ xc [n], (10) where E sr,µ,λ H [n] is the short-range Hartree contribution, E sr,µ,λ H [n] = (1 -λ) × 1 2 w sr,µ ee (r 12 )n(r 1 )n(r 2 )dr 1 dr 2 , (11) and Ēsr,µ,λ xc [n] is the short-range exchangecorrelation contribution. The exact ground-state electronic energy of a N -electron system in the external nuclei-electron potential v ne (r) can be expressed as E = min n→N F [n] + v ne (r)n(r)dr = min n→N F µ,λ [n] + Ēsr,µ,λ Hxc [n] + v ne (r)n(r)dr = min Ψ→N Ψ| T + Ŵ lr,µ ee + λ Ŵ sr,µ ee + Vne |Ψ + Ēsr,µ,λ Hxc [n Ψ ] , (12) where n → N refers to N -representable densities, Ψ → N refers to N -electron normalized antisymmetric multideterminant wave functions, and n Ψ denotes the density coming from Ψ, i.e. n Ψ (r) = Ψ| n(r) |Ψ . In Eq. ( 12), the last line was obtained by using the expression of F µ,λ [n] in Eq. ( 7), introducing the nuclei-electron potential operator Vne = v ne (r)n(r)dr, and recomposing the two-step minimization into a single one, i.e. min n→N min Ψ→n = min Ψ→N . The minimizing wave function Ψ µ,λ in Eq. ( 12) satisfies the corresponding Euler-Lagrange equation, leading to the Schrödinger-like equation T + Ŵ lr,µ ee + λ Ŵ sr,µ ee + Vne + V sr,µ,λ Hxc [n Ψ µ,λ ] |Ψ µ,λ = E µ,λ |Ψ µ,λ , (13) where E µ,λ is the Lagrange multiplier associated with the normalization constraint of the wave function. In Eq. ( 13), V sr,µ,λ Hxc [n] = v sr,µ,λ Hxc (r)n(r)dr is the complement short-range Hartree-exchangecorrelation potential operator with v sr,µ,λ Hxc (r) = δ Ēsr,µ,λ Hxc [n]/δn(r). Equation ( 13) defines an effective Hamiltonian Ĥµ,λ = T + Vne + Ŵ lr,µ ee +λ Ŵ sr,µ ee + V sr,µ,λ Hxc [n Ψ µ,λ ] that must be solved iteratively for its ground-state multideterminant wave function Ψ µ,λ which gives the exact ground-state density and the exact ground-state energy via Eq. ( 12), independently of µ and λ. We have therefore defined an exact formalism combining a wave-function calculation with a density functional. This formalism encompasses several important special cases: • µ = 0 and λ = 0. In Eq. ( 12), the electron-electron operator vanishes, Ŵ lr,µ=0 ee + 0 × Ŵ sr,µ=0 ee = 0, and the density functional reduces to the KS Hartree-exchange-correlation density functional, Ēsr,µ=0,λ=0 Hxc [n] = E Hxc [n], so that we recover standard KS DFT E = min Φ→N Φ| T + Vne |Φ + E Hxc [n Φ ] , ( 14 ) where Φ is a single-determinant wave function. • µ → ∞ or λ = 1. In Eq. ( 12), the electronelectron operator reduces to the Coulomb interaction Ŵ lr,µ→∞ ee + λ Ŵ sr,µ→∞ ee = Ŵee or Ŵ lr,µ ee + 1 × Ŵ sr,µ ee = Ŵee , and the density functional vanishes, Ēsr,µ→∞,λ Hxc [n] = 0 or Ēsr,µ,λ=1 Hxc [n] = 0, so that we recover standard wave-function theory E = min Ψ→N Ψ| T + Ŵee + Vne |Ψ . ( 15 ) • 0 < µ < ∞ and λ = 0. In Eq. ( 12), the electron-electron operator reduces to the longrange interaction Ŵ lr,µ ee + 0 × Ŵ sr,µ ee = Ŵ lr,µ ee , and the density functional reduces to the usual shortrange density functional, Ēsr,µ,λ=0 Hxc [n] = Ēsr,µ Hxc [n], so that we recover range-separated DFT [START_REF] Savin | Recent Developments of Modern Density Functional Theory[END_REF][START_REF] Toulouse | [END_REF] E = min Ψ→N Ψ| T + Ŵ lr,µ ee + Vne |Ψ + Ēsr,µ Hxc [n Ψ ] . (16) • µ = 0 and 0 < λ < 1. In Eq. 
( 12), the electronelectron operator reduces to the scaled Coulomb interaction Ŵ lr,µ=0 ee + λ Ŵ sr,µ=0 ee = λ Ŵee , and the density functional reduces to the λ-complement density functional, Ēsr,µ=0,λ Hxc [n] = Ēλ Hxc [n], so that we recover the multideterminant extension of KS DFT based on the linear decomposition of the electron-electron interaction [12,39] E = min Ψ→N Ψ| T + λ Ŵee + Vne |Ψ + Ēλ Hxc [n Ψ ] . (17) B. Single-determinant approximation As a first step, we introduce a single-determinant approximation in Eq. ( 12), E µ,λ 0 = min Φ→N Φ| T + Ŵ lr,µ ee + λ Ŵ sr,µ ee + Vne |Φ + Ēsr,µ,λ Hxc [n Φ ] , (18) where the search is over N -electron normalized single-determinant wave functions. The minimizing single determinant Φ µ,λ is given by the HF-or KS-like equation T + Vne + V lr,µ Hx,HF [Φ µ,λ ] + λ V sr,µ Hx,HF [Φ µ,λ ] + V sr,µ,λ Hxc [n Φ µ,λ ] |Φ µ,λ = E µ,λ 0 |Φ µ,λ , (19) where V lr,µ Hx,HF [Φ µ,λ ] and V sr,µ Hx,HF [Φ µ,λ ] are the longrange and short-range HF potential operators constructed with the single determinant Φ µ,λ , and E µ,λ 0 is the Lagrange multiplier associated with the normalization constraint. Equation ( 19) must be solved self-consistently for its single-determinant ground-state wave function Φ µ,λ . Note that, due to the single-determinant approximation, the density n Φ µ,λ is not the exact ground-state density and the energy in Eq. ( 18) is not the exact ground-state energy and depends on the parameters µ and λ. It can be rewritten in the form E µ,λ 0 = Φ µ,λ | T + Vne |Φ µ,λ + E H [n Φ µ,λ ] +E lr,µ x,HF [Φ µ,λ ] + λE sr,µ x,HF [Φ µ,λ ] + Ēsr,µ,λ xc [n Φ µ,λ ],( 20 ) where E H [n] = (1/2) w ee (r 12 )n(r 1 )n(r 2 )dr 1 dr 2 is the standard Hartree energy with the Coulomb electron-electron interaction, and E lr,µ x,HF and E sr,µ x,HF are the long-range and short-range HF exchange energies. For µ = 0 and λ = 0, we recover standard KS DFT, while for µ → ∞ or λ = 1 we recover standard HF theory. For intermediate values of µ and λ, this scheme is very similar to the approximations of Refs. 22-33, except that the part of correlation associated with the interaction w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ) is missing in Eq. (20). The addition of this correlation is done in a second step with MP2 perturbation theory. C. Second-order Møller-Plesset perturbation theory A rigorous non-linear Rayleigh-Schrödinger perturbation theory starting from the singledeterminant reference of Section II B can be developed, similarly to what was done for the RSH+MP2 method in Refs. 6, 40, 41 and for the one-parameter DH approximations in Ref. 12. This is done by introducing a perturbation strength parameter ǫ and defining the energy expression: E µ,λ,ǫ = min Ψ→N Ψ| T + Vne + V lr,µ Hx,HF [Φ µ,λ ] +λ V sr,µ Hx,HF [Φ µ,λ ] + ǫ Ŵµ,λ |Ψ + Ēsr,µ,λ Hxc [n Ψ ] ,( 21 ) where the search is over N -electron normalized antisymmetric multideterminant wave functions, and Ŵµ,λ is a Møller-Plesset-type perturbation operator Ŵµ,λ = Ŵ lr,µ ee + λ Ŵ sr,µ ee -V lr,µ Hx,HF [Φ µ,λ ] -λ V sr,µ Hx,HF [Φ µ,λ ].( 22 ) The minimizing wave function Ψ µ,λ,ǫ in Eq. ( 21) is given by the corresponding Euler-Lagrange equation: T + Vne + V lr,µ Hx,HF [Φ µ,λ ] + λ V sr,µ Hx,HF [Φ µ,λ ] + ǫ Ŵµ,λ + V sr,µ,λ Hxc [n Ψ µ,λ,ǫ ] |Ψ µ,λ,ǫ = E µ,λ,ǫ |Ψ µ,λ,ǫ . (23) For ǫ = 0, Eq. ( 23) reduces to the singledeterminant reference of Eq. ( 19), i.e. Ψ µ,λ,ǫ=0 = Φ µ,λ and E µ,λ,ǫ=0 = E µ,λ 0 . For ǫ = 1, Eq. ( 23) reduces to Eq. ( 13), i.e. Ψ µ,λ,ǫ=1 = Ψ µ,λ and E µ,λ,ǫ=1 = E µ,λ , and Eq. ( 21) reduces to Eq. ( 12), i.e. 
we recover the physical energy E µ,λ,ǫ=1 = E, independently of µ and λ. The perturbation theory is then obtained by expanding these quantities in (k) . Following the same steps as in Ref. 6, we find the zeroth-order energy, ǫ around ǫ = 0: E µ,λ,ǫ = ∞ k=0 ǫ k E µ,λ,(k) , Ψ µ,λ,ǫ = ∞ k=0 ǫ k Ψ µ,λ,(k) , and E µ,λ,ǫ = ∞ k=0 ǫ k E µ,λ, E µ,λ,(0) = Φ µ,λ | T + Vne + V lr,µ Hx,HF [Φ µ,λ ] +λ V sr,µ Hx,HF [Φ µ,λ ] |Φ µ,λ + Ēsr,µ,λ Hxc [n Φ µ,λ ], (24) and the first-order energy correction, E µ,λ,(1) = Φ µ,λ | Ŵµ,λ |Φ µ,λ , (25) so that the zeroth+first order energy gives back the energy of the single-determinant reference in Eq. ( 20), E µ,λ,(0) + E µ,λ,(1) = E µ,λ 0 . (26) The second-order energy correction involves only double-excited determinants Φ µ,λ ij→ab (of energy E µ,λ 0,ij→ab ) and takes the form a MP2-like correlation energy, assuming a non-degenerate ground state in Eq. ( 19), E µ,λ,(2) = E µ,λ c,MP2 = - occ i<j vir a<b Φ µ,λ ij→ab | Ŵµ,λ |Φ µ,λ 2 E µ,λ 0,ij→ab -E µ,λ 0 = - occ i<j vir a<b ij| ŵlr,µ ee + λ ŵsr,µ ee |ab -ij| ŵlr,µ ee + λ ŵsr,µ ee |ba 2 ε a + ε b -ε i -ε j , (27) where i and j refer to occupied spin orbitals and a and b refer to virtual spin orbitals obtained from Eq. ( 19), ε k are the associated orbital energies, and ij| ŵlr,µ ee + λ ŵsr,µ ee |ab are the twoelectron integrals corresponding to the interaction w lr,µ ee (r 12 )+λw sr,µ ee (r 12 ). Note that the orbitals and orbital energies implicitly depend on µ and λ. Just like in standard Møller-Plesset perturbation theory, there is a Brillouin theorem making the singleexcitation term vanish (see Ref. 6). Also, contrary to the approach of Refs. 20, 21, the second-order energy correction does not involve any contribution from the second-order correction to the density. The total RSDH energy is finally E µ,λ RSDH = E µ,λ 0 + E µ,λ c,MP2 . (28) It is instructive to decompose the correlation energy in Eq. ( 27) as E µ,λ c,MP2 = E lr,µ c,MP2 + λE lr-sr,µ c,MP2 + λ 2 E sr,µ c,MP2 , (29) with a pure long-range contribution, E lr,µ c,MP2 = - occ i<j vir a<b ij| ŵlr,µ ee |ab -ij| ŵlr,µ ee |ba 2 ε a + ε b -ε i -ε j , (30) a pure short-range contribution, E sr,µ c,MP2 = - occ i<j vir a<b | ij| ŵsr,µ ee |ab -ij| ŵsr,µ ee |ba | 2 ε a + ε b -ε i -ε j , (31) and a mixed long-range/short-range contribution, E lr-sr,µ c,MP2 = - occ i<j vir a<b ij| ŵlr,µ ee |ab -ij| ŵlr,µ ee |ba ( ab| ŵsr,µ ee |ij -ba| ŵsr,µ ee |ij ) ε a + ε b -ε i -ε j + c.c., (32) where c.c. stands for the complex conjugate. The exchange-correlation energy in the RSDH approximation is thus E µ,λ xc,RSDH = E lr,µ x,HF + λE sr,µ x,HF + E lr,µ c,MP2 +λE lr-sr,µ c,MP2 + λ 2 E sr,µ c,MP2 + Ēsr,µ,λ xc [n]. (33) It remains to develop approximations for the complement short-range exchange-correlation density functional Ēsr,µ,λ xc [n], which is done in Section II D. D. Complement short-range exchange-correlation density functional Decomposition into exchange and correlation The complement short-range exchangecorrelation density functional Ēsr,µ,λ xc [n] can be decomposed into exchange and correlation contributions, Ēsr,µ,λ xc [n] = E sr,µ,λ x [n] + Ēsr,µ,λ c [n], (34) where the exchange part is defined with the KS single determinant Φ[n] and is linear with respect to λ, E sr,µ,λ x [n] = Φ[n]| (1 -λ) Ŵ sr,µ ee |Φ[n] -E sr,µ,λ H [n] = (1 -λ)E sr,µ x [n], (35) where E sr,µ x [n] = E sr,µ,λ=0 x [n] is the usual shortrange exchange density functional, as already introduced, e.g., in Ref. 5). 
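Equation (27) has the same structure as a standard spin-orbital MP2 correlation energy, the only difference being that the two-electron integrals are taken over the modified interaction w lr,µ ee + λw sr,µ ee. The sketch below spells out this double-excitation sum for generic arrays of integrals and orbital energies; the randomly generated integrals are mere placeholders, since realistic ⟨ij|ŵ|ab⟩ values require a quantum-chemistry program such as the MOLPRO implementation described in Section III.

```python
import numpy as np

def mp2_like_energy(eri_lr, eri_sr, eps_occ, eps_vir, lam):
    """E_c^{mu,lambda} of Eq. (27): sum over i<j, a<b of
    -|<ij|w_lr + lam*w_sr|ab> - <ij|w_lr + lam*w_sr|ba>|^2 / (e_a + e_b - e_i - e_j).
    eri_lr, eri_sr: arrays of shape (no, no, nv, nv) holding <ij|w|ab> (physicists' notation)."""
    no, nv = len(eps_occ), len(eps_vir)
    w = eri_lr + lam * eri_sr
    e = 0.0
    for i in range(no):
        for j in range(i + 1, no):
            for a in range(nv):
                for b in range(a + 1, nv):
                    num = abs(w[i, j, a, b] - w[i, j, b, a]) ** 2
                    den = eps_vir[a] + eps_vir[b] - eps_occ[i] - eps_occ[j]
                    e -= num / den
    return e

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    no, nv = 4, 6
    # placeholder integrals and orbital energies (a real calculation would supply these)
    eri_lr = rng.normal(scale=0.05, size=(no, no, nv, nv))
    eri_sr = rng.normal(scale=0.05, size=(no, no, nv, nv))
    eps_occ = np.sort(rng.uniform(-1.5, -0.5, no))
    eps_vir = np.sort(rng.uniform(0.2, 1.5, nv))
    print(mp2_like_energy(eri_lr, eri_sr, eps_occ, eps_vir, lam=0.6))
```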
Several (semi-)local approximations have been proposed for E sr,µ x [n] (see, e.g., Refs. [START_REF] Savin | Recent Developments of Modern Density Functional Theory[END_REF][START_REF] Toulouse | [END_REF][42][43][44][45][46][47][48][49]. By contrast, the complement short-range correlation density functional Ēsr,µ,λ c [n] cannot be exactly expressed in terms of the short-range correlation density functional Ēsr,µ c [n] = Ēsr,µ,λ=0 c [n] of Ref. 5 for which several (semi-)local approximations have been proposed [START_REF] Toulouse | [END_REF][45][46][47][48][49][50]. Note that in the approach of Ref. 20 the complement density functional was defined using the pure long-range interacting wave function Ψ µ = Ψ µ,λ=1 and it was possible, using uniform coordinate scaling relations, to find an exact expression for it in terms of previously studied density functionals. This is not the case in the present approach because the complement density functional is defined using the wave function Ψ µ,λ obtained with both long-range and short-range interactions. As explained in the Appendix, uniform coordinate scaling relations do not allow one to obtain an exact expression for Ēsr,µ,λ c [n] in terms of previously studied density functionals. Therefore, the difficulty lies in developing approximations for Ēsr,µ,λ c [n]. For this, we first give the exact expression of Ēsr,µ,λ c [n] in the Coulomb limit µ → 0 (and the related high-density limit) and in the shortrange limit µ → ∞ (and the related low-density limit). Expression of Ēsr,µ,λ c [n] in the Coulomb limit µ → 0 and in the high-density limit The complement short-range correlation density functional Ēsr,µ,λ c [n] can be written as Ēsr,µ,λ c [n] = E c [n] -E µ,λ c [n], (36) where E c [n] is the standard KS correlation density functional and E µ,λ c [n] is the correlation density functional associated with the interaction w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ). For µ = 0, the density functional E µ=0,λ c [n] = E λ c [n] corresponds to the correlation functional associated with the scaled Coulomb interaction λw ee (r 12 ), which can be exactly expressed as 51,[START_REF] Levy | Density Functional Theory[END_REF] where n 1/λ (r) = (1/λ 3 )n(r/λ) is the density with coordinates uniformly scaled by 1/λ. Therefore, for µ = 0, the complement short-range correlation density functional is E λ c [n] = λ 2 E c [n 1/λ ] [ Ēsr,µ=0,λ c [n] = E c [n] -λ 2 E c [n 1/λ ], (37) which is the correlation functional used in the density-scaled one-parameter double-hybrid (DS1DH) scheme of Sharkas et al. [12]. For a KS system with a non-degenerate ground state, we have in the λ → 0 limit: E c [n 1/λ ] = E GL2 c [n] + O(λ) where E GL2 c [n] is the second-order Görling-Levy (GL2) correlation energy [START_REF] Görling | [END_REF]. Therefore, in this case, Ēsr,µ=0,λ c [n] has a quadratic dependence in λ near λ = 0. In practice with GGA functionals, it has been found that the density scaling in Eq. ( 37) can sometimes be advantageously neglected, i.e. E c [n 1/λ ] ≈ E c [n] [12, 39], giving Ēsr,µ=0,λ c [n] ≈ (1 -λ 2 )E c [n]. ( 38 ) Even if we do not plan to apply the RSDH scheme with µ = 0, the condition in Eq. ( 37) is in fact relevant for an arbitrary value of µ in the high-density limit, i.e. n γ (r) = γ 3 n(γr) with γ → ∞, since in this limit the short-range interaction becomes equivalent to the Coulomb interaction in the complement short-range correlation density functional: lim γ→∞ Ēsr,µ,λ c [n γ ] = lim γ→∞ Ēsr,µ=0,λ c [n γ ] (see Appendix). 
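The scaled density n 1/λ (r) = (1/λ³)n(r/λ) entering Eq. (37) preserves the electron number, which is easily checked on a radial grid. The sketch below performs this check for a hydrogen-like model density; the density is an arbitrary example chosen here and the check is independent of any particular exchange-correlation functional.

```python
import numpy as np

def radial_norm(n_of_r, r):
    """Electron number N = 4*pi * int n(r) r^2 dr for a spherical density."""
    return 4.0 * np.pi * np.trapz(n_of_r * r ** 2, r)

if __name__ == "__main__":
    r = np.linspace(1e-4, 40.0, 20000)
    n = np.exp(-2.0 * r) / np.pi                 # hydrogen-like 1s density, N = 1
    for lam in (0.3, 0.6, 0.9):
        # n_{1/lambda}(r) = (1/lambda^3) n(r/lambda): uniform coordinate scaling
        n_scaled = (1.0 / lam ** 3) * np.interp(r / lam, r, n)
        print(lam, radial_norm(n, r), radial_norm(n_scaled, r))   # both close to 1
```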
In fact, for a KS system with a non-degenerate ground state, the approximate condition in Eq. ( 38) is sufficient to recover the exact high-density limit for an arbitrary value of µ which is (see Appendix) lim γ→∞ Ēsr,µ,λ c [n γ ] = (1 -λ 2 )E GL2 c [n]. ( 39 ) 3. Expression of Ēsr,µ,λ c [n] in the short-range limit µ → ∞ and the low-density limit The leading term in the asymptotic expansion of Ēsr,µ,λ c [n] as µ → ∞ is (see Appendix) Ēsr,µ,λ c [n] = (1 -λ) π 2µ 2 n 2,c [n](r, r)dr + O 1 µ 3 , (40) where n 2,c [n](r, r) is the correlation part of the on-top pair density for the Coulomb interaction. We thus see that, for µ → ∞, Ēsr,µ,λ c [n] is linear with respect to λ. In fact, since the asymptotic expansion of the usual short-range correlation functional is Ēsr,µ c [n] = π/(2µ 2 ) n 2,c [n](r, r)dr + O(1/µ 3 ) [5], we can write for µ → ∞, Ēsr,µ,λ c [n] = (1 -λ) Ēsr,µ c [n] + O 1 µ 3 . ( 41 ) The low-density limit, i.e. n γ (r) = γ 3 n(γr) with γ → 0, is closely related to the limit µ → ∞ (see Appendix) Ēsr,µ,λ c [n γ ] ∼ γ→0 γ 3 (1 -λ)π 2µ 2 n 1/γ 2,c [n](r, r)dr ∼ γ→0 γ 3 (1 -λ)π 4µ 2 -n(r) 2 + m(r) 2 dr, (42) in which appears n 1/γ 2,c [n](r, r), the on-top pair density for the scaled Coulomb interaction (1/γ)w ee (r 12 ), and its strong-interaction limit lim γ→0 n [54] where m(r) is the spin magnetization. Thus, in the low-density limit, contrary to the usual KS correlation functional E c [n] which goes to zero linearly in γ [51], times the complicated nonlocal strictlycorrelated electron functional [55], the complement short-range correlation functional Ēsr,µ,λ c 1/γ 2,c [n](r, r) = -n(r) 2 /2 + m(r) 2 /2 [n] goes to zero like γ 3 and becomes a simple local functional of n(r) and m(r). We now propose several simple approximations for Ēsr,µ,λ c [n]. On the one hand, Eq. ( 38) suggests the approximation Ēsr,µ,λ c,approx1 [n] = (1 -λ 2 ) Ēsr,µ c [n], (43) which is correctly quadratic in λ at µ = 0 but is not linear in λ for µ → ∞. On the other hand, Eq. ( 41) suggests the approximation Ēsr,µ,λ c,approx2 [n] = (1 -λ) Ēsr,µ c [n], (44) which is correctly linear in λ for µ → ∞ but not quadratic in λ at µ = 0. However, it is possible to impose simultaneously the two limiting behaviors for µ = 0 and µ → ∞ with the following approximation Ēsr,µ,λ c,approx3 [n] = Ēsr,µ c [n] -λ 2 Ēsr,µ √ λ c [n], (45) which reduces to Eq. ( 38) for µ = 0 and satisfies Eq. ( 40) for µ → ∞. Another possibility, proposed in Ref. 37, is Ēsr,µ,λ c,approx4 [n] = Ēsr,µ c [n] -λ 2 Ēsr,µ/λ c [n 1/λ ], (46) which correctly reduces to Eq. ( 37) for µ = 0. For µ → ∞, its asymptotic expansion is Ēsr,µ,λ c,approx4 [n] = π 2µ 2 n 2,c [n](r, r)dr -λ 4 π 2µ 2 n 2,c [n 1/λ ](r, r)dr + O 1 µ 3 , (47) i.e. it does not satisfy Eq. (40). Contrary to what was suggested in Ref. 37, Eq. ( 46) is not exact but only an approximation. However, using the scaling relation on the system-averaged on-top pair density [54] n 2,c [n γ ](r, r)dr = γ 3 n 1/γ 2,c [n](r, r)dr, (48) it can be seen that, in the low-density limit γ → 0, Eq. ( 47) correctly reduces to Eq. ( 42). In Ref. 37, the authors propose to neglect the scaling of the density in Eq. ( 46) leading to Ēsr,µ,λ c,approx5 [n] = Ēsr,µ c [n] -λ 2 Ēsr,µ/λ c [n], (49) which reduces to Eq. ( 38) for µ = 0, but which has also a wrong λ-dependence for large µ Ēsr,µ,λ c,approx5 [n] = (1 -λ 4 ) π 2µ 2 n 2,c [n](r, r)dr +O 1 µ 3 , (50) and does not anymore satisfy the low-density limit. Another strategy is to start from the decomposition of the MP2-like correlation energy in Eq. 
( 29) which suggests the following approximation for the complement short-range correlation functional Ēsr,µ,λ c,approx6 [n] = (1 -λ)E lr-sr,µ c [n] +(1 -λ 2 )E sr,µ c [n], (51) where E lr-sr,µ c [n] = Ēsr,µ c [n]-E sr,µ c [n] is the mixed long-range/short-range correlation functional [56,57] and E sr,µ c [n] is the pure short-range correlation functional associated with the short-range interaction w sr,µ ee (r 12 ) [56,57]. An LDA functional has been constructed for E sr,µ c [n] [58]. Since 40) or (41). One can also enforce the exact condition at µ = 0, Eq. (38), by introducing a scaling of the density Ēsr,µ,λ c,approx7 [n] = (1 -λ)E lr-sr,µ c [n] + E sr,µ c [n] -λ 2 E sr,µ/λ c [n 1/λ ]. ( 52 ) 5. Assessment of the approximations for Ēsr,µ,λ c [n] on the uniform-electron gas We now test the approximations for the complement short-range correlation functional Ēsr,µ,λ c [n] introduced in Sec. II D 4 on the spin-unpolarized uniform-electron gas. As a reference, for several values of the Wigner-Seitz radius r s = (3/(4πn)) 1/3 and the parameters µ and λ, we have calculated the complement shortrange correlation energy per particle as εsr,µ,λ c,unif (r s ) = ε c,unif (r s ) -ε µ,λ c,unif (r s ), (53) where ε c,unif (r s ) is the correlation energy per particle of the uniform-electron gas with the Coulomb electron-electron w ee (r 12 ) and ε µ,λ c,unif (r s ) is the correlation energy per particle of an uniform-electron gas with the modified electron-electron w lr,µ ee (r 12 )+ λw sr,µ ee (r 12 ). We used what is known today as the direct random-phase approximation + secondorder screened exchange (dRPA+SOSEX) method (an approximation to coupled-cluster doubles) [59,60] as introduced for the uniform-electron gas by Freeman [61] and extended for modified electronelectron interactions in Refs. 4, 45, and which is known to give reasonably accurate correlation energies per particle of the spin-unpolarized electron gas (error less than 1 millihartree for r s < 10). We note that these calculations would allow us to construct a complement short-range LDA correlation functional, but we refrain from doing that since we prefer to avoid having to do a complicated fit of εsr,µ,λ c (r s ) with respect to r s , µ, and λ. Moreover, this would only give a spin-independent LDA functional. We thus use these uniform-electron gas calculations only to test the approximations of Sec. II D 4. For several values of r s , µ, and λ, we have calculated the complement short-range correlation energy per particle corresponding to the approximations 1 to 7 using the LDA approximation for Ēsr,µ c [n] from Ref. 50 (for approximations 1 to 7), as well as the LDA approximation for E sr,µ c [n] from Ref. 58 (for approximations 6 and 7), and the errors with respect to the dRPA+SOSEX results are reported in Fig. 1. Note that the accuracy of the dRPA+SOSEX reference decreases as r s increases, the error being of the order of 1 millihartree for r s = 10, which explains why the curves on the third graph of Fig. 1 appear shifted with respect to zero at large r s . By construction, all the approximations become exact for λ = 0 (and trivially for λ = 1 or in the µ → ∞ limit since the complement short-range correlation energy goes to zero in these cases). For intermediate values of λ and finite values of µ, all the approximations, except approximation 2, tend to give too negative correlation energies. 
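To make the λ-dependence of approximations 1 to 5 [Eqs. (43)-(46) and (49)] fully explicit, the sketch below writes them as wrappers around a user-supplied complement short-range correlation energy Ē sr,µ c; approximations 6 and 7 are omitted since they additionally require the pure short-range functional E sr,µ c. The toy functional used in the example is a crude stand-in introduced here (it is not the short-range PBE or LDA functional of the text), and the density scaling of approximation 4 is represented only schematically by an extra scale argument.

```python
import numpy as np

def approx1(esr_c, mu, lam):
    """Eq. (43): (1 - lam^2) * Esr_c(mu)."""
    return (1.0 - lam ** 2) * esr_c(mu)

def approx2(esr_c, mu, lam):
    """Eq. (44): (1 - lam) * Esr_c(mu)."""
    return (1.0 - lam) * esr_c(mu)

def approx3(esr_c, mu, lam):
    """Eq. (45): Esr_c(mu) - lam^2 * Esr_c(mu*sqrt(lam))."""
    return esr_c(mu) - lam ** 2 * esr_c(mu * np.sqrt(lam))

def approx4(esr_c, mu, lam):
    """Eq. (46): Esr_c(mu) - lam^2 * Esr_c(mu/lam) evaluated at the scaled density n_{1/lam}."""
    return esr_c(mu) - lam ** 2 * esr_c(mu / lam, scale=1.0 / lam)

def approx5(esr_c, mu, lam):
    """Eq. (49): as Eq. (46) but neglecting the density scaling."""
    return esr_c(mu) - lam ** 2 * esr_c(mu / lam)

if __name__ == "__main__":
    # toy stand-in for Esr_c[n_scale](mu): decays with mu, grows with the scale factor
    toy = lambda mu, scale=1.0: -0.05 * scale / (1.0 + (mu / scale) ** 2) ** 1.5
    for lam in (0.2, 0.5, 0.8, 1.0):        # all approximations vanish at lam = 1, as required
        vals = [round(f(toy, 0.5, lam), 5) for f in (approx1, approx2, approx3, approx4, approx5)]
        print(lam, vals)
```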
As it could have been expected, approximation 2, which is the only one incorrectly linear in λ at µ = 0, gives quite a large error (of the order of 0.01 hartree or more) for small µ, intermediate λ, and small r s (it in fact diverges in the high-density limit r s → 0), but the error goes rapidly to zero as µ increases, reflecting the fact that this approximation has the correct leading term of the asymptotic expansion for µ → ∞. On the contrary, approximation 1, being quadratic in λ, gives a smaller error (less than 0.005 hartree) for small µ but the error goes slower to zero as µ increases. Approximation 3 combines the advantages of approximations 1 and 2: it gives a small error for small µ which goes rapidly to zero as µ increases. Approximation 4, which contains the scaling of the den- Error on the complement short-range correlation energy per particle (hartree) FIG. 1: Error on the complement short-range correlation energy per particle εsr,µ,λ c,unif (rs) of the uniformelectron gas obtained with approximations 1 to 7 of Sec. II D 4 with respect to the dRPA+SOSEX results. sity, is exact for µ = 0, and gives a small error (at most about 0.003 hartree) for intermediate values of µ, but the error does not go rapidly to zero as µ increases. Again, this reflects the fact that this approximation does not give the correct leading term of the asymptotic expansion for µ → ∞ for arbitrary λ and r s . This confirms that Eq. ( 46) does not give the exact complement short-range correlation functional, contrary to what was thought in Ref. 37. A nice feature however of approximation 4 is that it becomes exact in the high-density limit r s → 0 of the uniform-electron gas (the scaling of the density at µ = 0 is needed to obtain the correct high-density limit in this zero-gap system). Approximation 5, obtained from approximation 4 by neglecting the scaling of the density in the correlation functional, and used in Ref. 37, gives quite large errors for the uniform-electron gas, approaching 0.01 hartree. Approximations 6 and 7 are quite good. They both have the correct leading term of the asymptotic expansion for µ → ∞, but approximation 7 has the additional advantage of having also the correct µ → 0 or r s → 0 limit. Approximation 7 is our best approximation, with a maximal error of about 1 millihartree. Unfortunately, approximations 6 and 7 involve the pure short-range correlation functional E sr,µ c [n], for which we currently have only a spinunpolarized LDA approximation [58]. For this reason, we do not consider these approximations in the following for molecular calculations. We will limit ourselves to approximations 1 to 5 which only involve the complement short-range correlation functional Ēsr,µ c [n], for which we have spindependent GGAs [START_REF] Toulouse | [END_REF][46][47][48][49]. III. COMPUTATIONAL DETAILS The RSDH scheme has been implemented in a development version of the MOLPRO 2015 program [START_REF] Werner | version 2015.1, a package of ab initio programs[END_REF]. The calculation is done in two steps: first a self-consistent-field calculation is perform according to Eqs. ( 18)-( 20), and then the MP2-like correlation energy in Eq. ( 27) is evaluated with the previously calculated orbitals. The λ-dependent complement short-range exchange functional is calculated according to Eq. ( 35) and the approximations 1 to 5 [see Eqs. 
( 43)-( 49)] have been implemented for the complement short-range correlation functional, using the short-range Perdew-Becke-Ernzerhof (PBE) exchange and correlation functionals of Ref. 48 for E sr,µ x [n] and Ēsr,µ c [n]. The RSDH scheme was applied on the AE6 and BH6 sets [START_REF] Lynch | [END_REF], as a first assessment of the approximations on molecular systems and in order to determine the optimal parameters µ and λ. The AE6 set is a small representative benchmark of six atomization energies consisting of SiH [64] at the geometries optimized by quadratic configuration interaction with single and double excitations with the modified Gaussian-3 basis set (QCISD/MG3) [65]. The reference values for the atomization energies and barrier heights are the non-relativistic FC-CCSD(T)/cc-pVQZ-F12 values of Refs. 66, 67. For each approximation, we have first varied µ and λ between 0 and 1 by steps of 0.1 to optimize the parameters on each set. We have then refined the search by steps of 0.02 to find the common optimal parameters on the two sets combined. RSDH scheme was then tested on the AE49 set of 49 atomization energies [68] (consisting of the G2-1 set [69,70] stripped of the six molecules containing Li, Be, and Na [71]) and on the DBH24/08 set [72,73] of 24 forward and reverse reaction barrier heights. These calculations were performed with the cc-pVQZ basis set, with MP2(full)/6-31G* geometries for the AE49 set, and with the aug-cc-pVQZ basis set [74] with QCISD/MG3 geometries for the DBH24/08 set. The reference values for the AE49 set are the non-relativistic FC-CCSD(T)/cc-pVQZ-F12 values of Ref. 75, and the reference values for the DBH24/08 set are the zeropoint exclusive values from Ref. 73. Finally, the RSDH scheme was tested on the S22 set of 22 weakly interacting molecular complexes [77]. These calculations were performed with the aug-cc-pVDZ and aug-cc-pVTZ basis sets and the counterpoise correction, using the geometries from Ref. 77 and the complete-basis-set (CBS)-extrapolated CCSD(T) reference interaction energies from Ref. 78. The local MP2 approach [79] is used on the largest systems in the S22 set. Core electrons are kept frozen in all our MP2 calculations. Spin-restricted calculations are performed for all the closed-shell systems, and spinunrestricted calculations for all the open-shell systems. As statistical measures of goodness of the different methods, we compute mean absolute errors (MAEs), mean errors (MEs), root mean square deviations (RMSDs), mean absolute percentage errors (MA%E), and maximal and minimal errors. IV. RESULTS AND DISCUSSION A. Optimization of the parameters on the AE6 and BH6 sets We start by applying the RSDH scheme on the small AE6 and BH6 sets, and determining optimal values for the parameters µ and λ. Figure 2 shows the MAEs for these two sets obtained with approximations 1 to 5 of Sec. II D 4 as a function of λ for µ = 0.5 and µ = 0.6. We choose to show plots for only these two values of µ, since they are close the optimal value of µ for RSH+MP2 [12,80] and also for RSDH with all the approximations except approximation 2. This last approximation is anyhow of little value for thermochemistry since it gives large MAEs on the AE6 set for intermediate values of λ, which must be related to the incorrect linear dependence in λ of this approximation in the limit µ → 0 or the high-density limit. We thus only discuss next the other four approximations. Let us start by analyzing the results for the AE6 set. 
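The parameter optimization just described amounts to a grid search: for each (µ, λ) pair the errors on the AE6 and BH6 sets are evaluated and the pair minimizing the combined MAE is retained. The sketch below shows the error statistics used in this work (ME, MAE, RMSD, MA%E) together with such a scan; the compute_errors callable and the dummy error model are placeholders standing in for actual RSDH calculations.

```python
import numpy as np

def stats(calc, ref):
    """ME, MAE, RMSD and MA%E between computed and reference values."""
    calc, ref = np.asarray(calc, float), np.asarray(ref, float)
    err = calc - ref
    return {"ME": err.mean(),
            "MAE": np.abs(err).mean(),
            "RMSD": np.sqrt((err ** 2).mean()),
            "MA%E": 100.0 * np.abs(err / ref).mean()}

def grid_search(compute_errors, mus, lams):
    """Return the (mu, lam) pair minimizing the MAE returned by compute_errors(mu, lam)."""
    best = None
    for mu in mus:
        for lam in lams:
            mae = np.abs(compute_errors(mu, lam)).mean()
            if best is None or mae < best[2]:
                best = (mu, lam, mae)
    return best

if __name__ == "__main__":
    # dummy error model with a minimum near (0.5, 0.6), for illustration only
    dummy = lambda mu, lam: np.array([3.0 * (mu - 0.5) ** 2 + 2.0 * (lam - 0.6) ** 2 + 0.1])
    coarse = grid_search(dummy, np.arange(0.0, 1.01, 0.1), np.arange(0.0, 1.01, 0.1))
    print("coarse optimum (mu, lam, MAE):", coarse)
    print(stats([152.1, 98.4], [150.0, 100.0]))
```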
For the approximations 1, 3, 4, and 5, we can always find an intermediate value of λ giving a smaller MAE than the two limiting cases λ = 0 (corresponding to RSH+MP2) and λ = 1 (corresponding to standard MP2). Among these four approximations, the approximations 1 and 5 are the least effective to reduce the MAE in comparison to RSH+MP2 and MP2, which may be connected to the fact that these two approximations are both incorrect in the low-density limit. The approximations 3 and 4, which become identical in the high-and low-density limits (for systems with non-zero gaps), are the two best approximations, giving minimal MAEs of 2.2 and 2.3 kcal/mol, respectively, at the optimal parameter values (µ, λ) = (0.5, 0.6) and (0.6, 0.65), respectively. Let us consider now the results for the BH6 set. Each MAE curve displays a marked minimum at an intermediate value of λ, at which the corresponding approximation is globally more accurate than both RSH+MP2 and MP2. All the approximations perform rather similarly, giving minimal MAEs of about 1 kcal/mol. In fact, for µ = 0.5 and µ = 0.6, the approximations 3 and 4 give essentially identical MAEs for all λ. The optimal parameter values for these two approximations are (µ, λ) = (0.5, 0.5), i.e. relatively close to the optimal values found for the AE6 set. For each of our two best approximations 3 and 4, we also determine optimal values of µ and λ that minimize the total MAE of the combined AE6 + BH6 set, and which could be used for general chemical applications. For the approximation 3, the optimal parameter values are (µ, λ) = (0.46, 0.58), giving a total MAE of 1.68 kcal/mol. For the approximation 4, the optimal parameter values are (µ, λ) = (0.62, 0.60), giving a total MAE of 1.98 kcal/mol. In the following, we further assess the approximations 3 and 4 with these optimal parameters. B. Assessment on the AE49 and DBH24/08 sets of atomization energies and reaction barrier heights We assess now the RSDH scheme with the approximations 3 and 4, evaluated with the previously determined optimal parameters (µ, λ), on the larger AE49 and DBH24/08 sets of atomization energies and reaction barrier heights. The results are reported in Tables I and II, and compared with other methods corresponding to limiting cases of the RSDH scheme: DS1DH [12] (with the PBE exchange-correlation functional [76]) corresponding to the µ = 0 limit of the RSDH scheme with TABLE I: Atomization energies (in kcal/mol) of the AE49 set calculated by DS1DH (with the PBE exchangecorrelation functional [76]), RSH+MP2, RSDH with approximations 3 and 4 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48), and MP2. The calculations were carried out using the cc-pVQZ basis set at MP2(full)/6-31G* geometries and with parameters (µ, λ) optimized on the AE6+BH6 combined set. The reference values are the non-relativistic FC-CCSD(T)/cc-pVQZ-F12 values of Ref. 75 approximation 4, RSH+MP2 [6] corresponding to the λ = 0 limit of the RSDH scheme, and standard MP2 corresponding to the µ → ∞ or λ = 1 limit of the RSDH scheme. On the AE49 set, the two RSDH approximations (3 and 4) give very similar results. With a MAE of 4.3 kcal/mol and a RMSD of about 5.1 kcal/mol, they provide an overall improvement over both RSH+MP2 and standard MP2 which give MAEs larger by about 1 kcal/mol and RMSDs larger by about 2 kcal/mol. It turns out that the DS1DH approximation gives a smaller MAE of 3.2 kcal/mol than the two RSDH approximations, but a similar RMSD of 5.0 kcal/mol. 
On the DBH24/08 set, the two RSDH approximations give less similar but still comparable results with MAEs of 1.9 and 2.7 kcal/mol for approximations 3 and 4, respectively. This is a big improvement over standard MP2 which gives a MAE of 6.2 kcal/mol, but similar to the accuracy of RSH+MP2 which gives a MAE of 2.0 kcal/mol. Again, the smallest MAE of 1.5 kcal/mol is obtained with the the DS1DH approximation. The fact that the DS1DH approximation ap- pears to be globally more accurate that the RSDH approximations on these larger sets but not on the small AE6 and BH6 sets points to a limited representativeness of the latter small sets, and suggests that there may be room for improvement by optimizing the parameters on larger sets. C. Assessment of the basis convergence We study now the basis convergence of the RSDH scheme. Figure 3 shows the convergence of the total energy of He, Ne, N 2 , and H 2 O with respect to the cardinal number X for a series of Dunning basis sets cc-pVXZ (X = 2, 3, 4, 5), calculated with MP2, RSH+MP2, and RSDH with approximations 3 and 4 (with the parameters (µ, λ) optimized on the AE6+BH6 combined set). The results for MP2 and RSH+MP2 are in agreement with what is already known. MP2 has a slow basis convergence, with the error on the total energy decreasing as a third-power law, ∆E MP2 ∼ A X -3 [81,82], due to the difficulty of describing the short-range part of the correlation hole near the electron-electron cusp. RSH+MP2 has a fast basis convergence, with the error decreasing as an exponential law, ∆E RSH+MP2 ∼ B e -βX [8], since it involves only the long-range MP2 correlation energy. Unsurprisingly, the RSDH scheme displays a ba-sis convergence which is intermediate between that of MP2 and RSH+MP2. What should be remarked is that, for a given basis, the RSDH basis error is closer to the RSH+MP2 basis error than to the MP2 basis error. The basis dependence of RSDH is thus only moderately affected by the presence of short-range MP2 correlation. This can be understood by the fact that RSDH contains only a modest fraction λ 2 ≈ 0.35 of the pure short-range MP2 correlation energy E sr,µ c,MP2 [see Eq. ( 29)], which should have a third-power-law convergence, while the pure long-range correlation energy E lr,µ c,MP2 and the mixed long-range/short-range correlation energy E lr-sr,µ c,MP2 both should have an exponential-law convergence. We thus expect the RSDH error to decrease as ∆E RSDH ∼ λ 2 A X -3 + B e -βX , with constants A, B, β a priori different from the ones introduced for MP2 and RSH+MP2. The results of Figure 3 are in fact in agreement with such a basis dependence with similar constants A, B, β for MP2, RSH+MP2, and RSDH. D. Assessment on the S22 set of intermolecular interactions We finally test the RSDH scheme on weak intermolecular interactions. Table III reports the interaction energies for the 22 molecular dimers of the S22 set calculated by RSH+MP2, RSDH (with approximations 3 and 4), and MP2, using the augcc-pVDZ and aug-cc-pVTZ basis sets. We also report DS1DH results, but since this method is quite inaccurate for dispersion interactions we only did calculations with the aug-cc-pVDZ basis set for a rough comparison. Again, the basis dependence of RSDH is intermediate between the small basis dependence of RSH+MP2 and the larger basis dependence of standard MP2. The basis convergence study in Section IV C suggests that the RSDH results with the aug-cc-pVTZ basis set are not far from the CBS limit. 
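The composite convergence law ∆E_RSDH ∼ λ²A X⁻³ + B e^(-βX) put forward in Section IV C can be fitted to a sequence of basis-set errors, as sketched below with scipy; the energy values used are synthetic numbers chosen here for illustration, and λ is fixed at the optimized value 0.58 only as an example.

```python
import numpy as np
from scipy.optimize import curve_fit

def rsdh_basis_error(X, A, B, beta, lam=0.58):
    """Composite convergence law: lam^2 * A * X**-3 + B * exp(-beta * X)."""
    return lam ** 2 * A * X ** -3.0 + B * np.exp(-beta * X)

if __name__ == "__main__":
    X = np.array([2.0, 3.0, 4.0, 5.0])            # cardinal numbers of cc-pVXZ basis sets
    dE = np.array([0.120, 0.035, 0.013, 0.006])   # synthetic basis errors (hartree), illustration only
    (A, B, beta), _ = curve_fit(rsdh_basis_error, X, dE, p0=(0.5, 1.0, 1.0), maxfev=10000)
    print("fitted A, B, beta:", A, B, beta)
    print("extrapolated residual error at X = 6:", rsdh_basis_error(6.0, A, B, beta))
```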
The two approximations (3 and 4) used in the RSDH scheme give overall similar results, which may be rationalized by the fact that low-density regions primarily contribute to these intermolecular interaction energies and that approximations 3 and 4 become identical in the low-density limit. For hydrogen-bonded complexes, RSDH with the aug-cc-pVTZ basis set gives a MA%E of about 3-4%, similar to standard MP2 but a clear improvement over RSH+MP2, which tends to give too negative interaction energies. Presumably, this is so because the explicit wave-function treatment of the short-range interaction λ w_ee^{sr,µ}(r_12) makes RSDH describe the short-range component of the intermolecular interaction accurately, while still correctly describing the long-range component. For complexes with a predominant dispersion contribution, RSDH with the aug-cc-pVTZ basis set gives too negative interaction energies by about 30%, similar to both MP2 and RSH+MP2. Notably, DS1DH gives much too negative interaction energies for the largest and most polarizable systems, leading to a MA%E of more than 100% with the aug-cc-pVDZ basis set. This can be explained by the fact that the reduced amount of HF exchange at long range in DS1DH leads to smaller HOMO-LUMO gaps in these systems in comparison with RSH+MP2 and RSDH, causing an overly large MP2 contribution. For mixed complexes, RSDH with the aug-cc-pVTZ basis set gives a MA%E of about 14-15%, which is a bit worse than MP2 but slightly better than RSH+MP2. Again, DS1DH tends to give significantly too negative interaction energies for the largest dimers. Overall, for weak intermolecular interactions, RSDH thus provides a big improvement over DS1DH, a small improvement over RSH+MP2, and is quite similar to standard MP2.

TABLE III: Interaction energies (in kcal/mol) for the complexes of the S22 set calculated by DS1DH (with the PBE exchange-correlation functional [76]), RSH+MP2, RSDH with approximations 3 and 4 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48), and MP2. The parameters (µ, λ) used are the ones optimized on the AE6+BH6 combined set, except for the RSH+MP2 values, which are taken from Ref. 83 in which µ = 0.50 was used. The basis sets used are aVDZ and aVTZ, which refer to aug-cc-pVDZ and aug-cc-pVTZ, respectively, and the counterpoise correction is applied. The values in italics were obtained using the local MP2 approach, the ones with an asterisk (*) were obtained in Ref. 84 with the density-fitting approximation, and the ones with a dagger (†) were obtained with the approximation E_aVTZ(RSDH approx4) ≈ E_aVDZ(RSDH approx4) + E_aVTZ(RSDH approx3) − E_aVDZ(RSDH approx3). The geometries of the complexes are taken from Ref. 77 and the reference interaction energies are taken as the CCSD(T)/CBS estimates of Ref. 78.

V. CONCLUSION

We have studied a wave-function/DFT hybrid approach based on a CAM-like decomposition of the electron-electron interaction in which a correlated wave-function calculation associated with the two-parameter interaction w_ee^{lr,µ}(r_12) + λ w_ee^{sr,µ}(r_12) is combined with a complement short-range density functional. Specifically, we considered the case of MP2 perturbation theory for the wave-function part and obtained a scheme that we named RSDH. This RSDH scheme is a generalization of the usual one-parameter DHs (corresponding to the special case µ = 0) and of the range-separated MP2/DFT hybrid known as RSH+MP2 (corresponding to the special case λ = 0). It allows one to have both 100% HF exchange and MP2 correlation at long interelectronic distances and fractions of HF exchange and MP2 correlation at short interelectronic distances.
We have also proposed a number of approximations for the complement short-range exchange-correlation density functional, based on the limits µ = 0 and µ → ∞, and showed their relevance for the uniform electron gas with the corresponding electron-electron interaction, in particular in the high- and low-density limits. The RSDH scheme with complement short-range DFAs constructed from a short-range version of the PBE functional has then been applied to small sets of atomization energies (AE6 set) and reaction barrier heights (BH6 set) in order to find optimal values for the parameters µ and λ. It turns out that the optimal values of these parameters for RSDH, µ ≈ 0.5-0.6 and λ ≈ 0.6, are very similar to the usual optimal values found separately for RSH+MP2 and one-parameter DHs. With these values of the parameters, RSDH has a relatively fast convergence with respect to the size of the one-electron basis, which can be explained by the fact that it contains only a modest fraction λ² ≈ 0.35 of pure short-range MP2 correlation. We have tested the RSDH scheme with the two best complement short-range DFAs (referred to as approximations 3 and 4) on large sets of atomization energies (AE49 set), reaction barrier heights (DBH24 set), and weak intermolecular interactions (S22 set). The results show that the RSDH scheme is either globally more accurate than or comparable to RSH+MP2 and standard MP2. If we had to recommend a computational method for general chemical applications among the methods tested in this work, it would be RSDH with approximation 3 and parameters (µ, λ) = (0.46, 0.58). There is, however, much room for improvement and extension. The parameters µ and λ could be optimized on larger training sets. More accurate complement short-range DFAs should be constructed. The MP2 correlation term could be replaced by random-phase approximations, which would more accurately describe dispersion interactions [59,83], or by multireference perturbation theory [85], which would capture static correlation effects. The RSDH scheme could be extended to linear-response theory for calculating excitation energies or molecular properties, e.g. by generalizing the methods of Refs. 86-89.

4. Approximations for Ē_c^{sr,µ,λ}[n]

Since E_c^{lr-sr,µ=0}[n] = 0 and E_c^{sr,µ=0}[n] = E_c[n], the approximation in Eq. (51) reduces to Eq. (38) for µ = 0. For µ → ∞, since E_c^{sr,µ}[n] decays faster than 1/µ², i.e. E_c^{sr,µ}[n] = O(1/µ³) [58], E_c^{lr-sr,µ}[n] and Ē_c^{sr,µ}[n] have the same leading term in the large-µ expansion, i.e. E_c^{lr-sr,µ}[n] = Ē_c^{sr,µ}[n] + O(1/µ³), and thus the approximation in Eq. (51) satisfies Eq. (…).

FIG. 2: MAEs for the AE6 and BH6 sets obtained with the RSDH scheme using approximations 1 to 5 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48) as a function of λ for µ = 0.5 and µ = 0.6. The basis set used is cc-pVQZ.

… SiH4, S2, SiO, C3H4 (propyne), C2H2O2 (glyoxal), and C4H8 (cyclobutane). The BH6 set is a small representative benchmark of forward and reverse hydrogen transfer barrier heights of three reactions, OH + CH4 → CH3 + H2O, H + OH → O + H2, and H + H2S → HS + H2. All the calculations for the AE6 and BH6 sets were performed with the Dunning cc-pVQZ basis set.
Molecule DS1DH RSH+MP2 RSDH approx3 RSDH approx4 MP2 Reference (µ,λ) = (0,0.70) (0.58,0) (0.46,0.58) (0.62,0.60) CH 81.13 78.38 79.93 79.58 79.68 83.87 CH 2 ( 3 B 1 ) 190.68 190.19 190.42 190.32 188.70 189.74 CH 2 ( 1 A 1 ) CH 3 175.20 305.32 170.26 302.91 173.36 304.43 173.24 304.34 174.45 180.62 303.36 306.59 CH 4 NH NH 2 NH 3 415.79 81.39 179.12 293.24 410.84 81.09 177.12 288.76 414.23 80.28 177.03 290.33 414.45 79.49 176.22 290.02 414.83 418.87 78.57 82.79 176.65 181.96 293.11 297.07 OH OH 2 FH 105.26 229.78 140.11 104.49 225.48 137.20 104.02 226.96 138.16 103.81 227.21 138.35 105.78 106.96 233.83 232.56 144.17 141.51 SiH 2 ( 1 A 1 ) 146.66 143.21 146.22 146.15 145.90 153.68 SiH 2 ( 3 B 1 ) 130.48 133.05 130.56 130.39 128.93 133.26 SiH 3 SiH 4 PH 2 222.07 315.08 148.62 220.05 311.89 146.37 222.39 315.80 147.60 222.21 315.81 146.98 220.51 228.08 314.27 324.59 144.95 153.97 PH 3 SH 2 233.35 178.66 229.18 174.18 232.06 177.29 231.69 177.66 230.24 241.47 178.55 183.30 ClH HCCH 105.39 406.75 101.63 399.05 104.43 403.84 104.90 405.17 106.53 107.20 409.58 402.76 H 2 CCH 2 H 3 CCH 3 CN 561.56 708.35 178.63 554.55 701.94 172.93 559.03 706.59 174.03 559.63 706.98 172.54 561.38 561.34 707.15 710.20 168.84 180.06 HCN CO HCO 315.27 262.93 283.19 305.21 254.60 277.00 310.17 258.48 278.81 310.79 258.64 278.50 319.26 311.52 269.29 258.88 285.79 278.28 H 2 CO H 3 COH N 2 H 2 NNH 2 375.40 510.48 229.78 433.52 367.85 505.00 218.09 428.92 370.90 507.31 223.07 428.93 370.96 507.41 222.86 427.80 379.19 373.21 513.32 511.83 234.80 227.44 432.46 436.70 NO O 2 HOOH 156.24 126.21 267.02 151.17 119.73 259.77 151.16 120.29 260.59 149.94 119.49 260.11 156.94 152.19 128.55 120.54 272.51 268.65 F 2 CO 2 Si 2 38.66 400.49 71.64 31.64 390.46 67.21 32.30 393.18 69.78 31.17 393.30 70.72 42.18 409.33 388.59 38.75 70.56 73.41 P 2 S 2 112.91 104.29 107.27 100.90 111.26 102.71 112.87 103.56 113.59 115.95 103.67 103.11 Cl 2 SiO 58.97 192.77 54.19 185.82 57.31 189.17 57.94 189.93 60.43 200.09 192.36 59.07 SC SO 172.35 127.74 163.07 122.46 164.01 123.89 170.43 123.77 175.16 170.98 129.29 125.80 ClO 62.96 60.81 59.82 58.55 59.69 64.53 ClF Si 2 H 6 62.43 521.08 57.94 517.07 58.98 522.83 58.50 522.88 65.20 519.17 535.47 62.57 CH 3 Cl 393.93 388.23 392.44 393.14 394.57 394.52 CH 3 SH 470.26 463.90 468.46 469.10 469.94 473.49 HOCl SO 2 164.83 260.22 158.46 244.46 160.70 250.91 160.75 251.27 168.50 165.79 270.72 259.77 MAE 3.19 5.49 4.31 4.31 5.37 ME RMSD Min error Max error -1.18 4.98 -14.39 11.90 -6.32 7.41 -18.40 1.87 -3.98 5.13 -12.64 4.59 -3.97 5.18 -12.59 4.71 -0.24 6.75 -16.30 20.74 TABLE II : II Forward (F) and reverse (R) reaction barrier heights (in kcal/mol) of the DBH24/08 set calculated by DS1DH (with the PBE exchange-correlation functional[76]), RSH+MP2, RSDH with approximations 3 and 4 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref.48), and MP2. The calculations were carried out using the aug-cc-pVQZ basis set at QCISD/MG3 geometries and with parameters (µ, λ) optimized on the AE6+BH6 combined set. The reference values are taken from Ref.73. 
CH 3 Cl → ClCH 3 • • • Cl -12.45/12.45 15.40/15.40 14.36/14.36 9.90/9.90 14.64/14.64 13.41/13.41 F -• • • CH 3 Cl → FCH 3 • • • Cl - Reaction DS1DH RSH+MP2 RSDH approx3 RSDH approx4 MP2 Reference (µ,λ) = (0,0.70) (0.58,0) (0.46,0.58) (0.62,0.60) F/R F/R F/R F/R F/R F/R Heavy-atom transfer H + N 2 O → OH + N 2 21.64/75.80 19.34/77.14 22.76/80.39 25.01/82.80 35.94/89.26 17.13/82.47 H + ClH → HCl + H 18.51/18.51 19.77/19.77 20.23/20.23 20.99/20.99 22.79/22.79 18.00/18.00 CH 3 + FCl → CH 3 F + Cl 7.54/60.77 8.21/63.59 9.79/64.81 11.25/66.64 19.74/74.29 6.75/60.00 Nucleophilic substitution Cl -• • • 2.83/27.93 4.72/31.46 OH -+ CH 3 F → HOCH 3 + F --3.11/16.76 -1.59/21.56 3.99/30.52 -1.92/19.23 4.27/30.67 -1.53/19.56 4.59/28.88 3.44/29.42 -1.75/17.86 -2.44/17.66 Unimolecular and association H + N 2 → HN 2 16.36/10.27 14.03/13.09 17.00/11.57 18.75/11.45 27.60/8.06 14.36/10.61 H + C 2 H 4 → CH 3 CH 2 4.15/44.13 2.70/45.76 4.34/45.49 5.40/45.89 9.32/46.54 1.72/41.75 HCN → HNC 49.13/33.01 48.52/34.81 49.07/33.59 50.05/33.95 34.46/52.09 48.07/32.82 Hydrogen transfer OH + CH 4 → CH 3 + H 2 O 4.54/19.33 6.03/19.75 6.53/20.38 7.33/21.35 7.66/25.01 6.70/19.60 H + OH → O + H 2 12.22/11.02 13.44/10.00 13.49/12.64 14.47/13.94 17.56/15.58 10.70/13.10 H + H 2 S → H 2 + HS 4.04/15.09 4.73/15.35 5.00/15.81 5.46/15.96 6.42/16.36 3.60/17.30 MAE ME RMSD 1.52 -0.09 2.09 2.01 1.06 2.36 1.85 1.50 2.30 2.65 1.95 3.26 6.17 4.70 8.56 Min error Max error -6.67 4.51 -5.33 4.01 -2.08 5.63 -3.51 7.88 -13.61 19.61 TABLE III : III Interaction energies (in kcal/mol) for the complexes of the S22 set calculated by DS1DH (with the PBE exchange-correlation functional . The MP2 values are also taken from Ref. 78. -13.31 -11.76 -11.95 * -12.70 -11.42 -10.77 -9.49 † -9.80 -10.63 -9.74 Indole/benzene -17.26 -6.95 -6.96 * -8.83 -6.97 -9.25 -7.39 † -7.13 -7.74 -4.59 Adenine/thymine stack -20.84 -15.11 -14.71 * -14.28 -14.56 -14.25 -14.53 † -13.24 -14.26 -11. 
Complex DS1DH RSH+MP2 RSDH approx3 RSDH approx4 MP2 Reference (µ,λ) = (0,0.70) (0.50,0) (0.46,0.58) (0.62,0.60) aVDZ aVDZ aVTZ aVDZ aVTZ aVDZ aVTZ aVDZ aVTZ Hydrogen-bonded complexes Ammonia dimer -2.70 -3.13 -3.25 -3.00 -3.18 -2.94 -3.16 -2.68 -2.99 -3.17 Water dimer -4.63 -5.34 -5.45 -5.03 -5.19 -4.93 -5.12 -4.36 -4.69 -5.02 Formic acid dimer -17.28 -21.20 -21.57 -19.31 -20.14 -18.86 -19.80 -15.99 -17.55 -18.80 Formamide dimer -14.63 -17.44 -17.64 -16.30 -16.81 -15.98 -16.60 -13.95 -15.03 -16.12 Uracile dimer C 2h -18.86 -22.62 -22.82 * -20.52 -21.77 -20.53 -21.78 † -18.41 -19.60 -20.69 2-pyridoxine/2-aminopyridine -18.65 -18.86 -18.60 * -17.43 -17.93 -17.04 -17.55 † -15.56 -16.64 -17.00 Adenine/thymine WC -17.52 -18.26 -18.12 * -16.47 -17.28 -16.23 -17.04 † -14.71 -15.80 -16.74 MAE 1.16 1.34 1.42 0.26 0.68 0.23 0.51 1.70 0.75 ME 0.46 -1.33 -1.42 -0.07 -0.68 0.22 -0.50 1.70 0.75 RMSD 1.28 1.56 1.66 0.29 0.81 0.29 0.64 1.88 0.85 MA%E 9.00 8.36 9.03 2.04 4.14 2.35 3.01 12.63 5.62 Complexes with predominant dispersion contribution Methane dimer -0.25 -0.46 -0.48 -0.42 -0.47 -0.42 -0.47 -0.39 -0.46 -0.53 Ethene dimer -0.84 -1.45 -1.55 -1.38 -1.68 -1.33 -1.55 -1.18 -1.46 -1.50 Benzene/methane -0.87 -1.62 -1.71 -1.56 -1.70 -1.56 -1.63 -1.47 -1.71 -1.45 Benzene dimer C 2h -7.21 -4.08 -4.24 * -3.52 -3.78 -4.14 -4.40 † -4.25 -4.70 -2.62 Pyrazine dimer -8.97 -5.97 -6.04 * -6.50 -6.21 -6.02 -5.73 † -6.00 -6.55 -4.20 Uracil dimer C 2 66 MAE 4.54 1.42 1.43 1.67 1.33 1.50 1.19 1.01 1.43 ME -4.16 -1.39 -1.42 -1.61 -1.31 -1.43 -1.11 -0.90 -1.40 RMSD 6.14 1.83 1.80 2.23 1.67 2.10 1.65 1.37 1.85 MA%E 102.09 28.48 29.60 34.02 28.31 34.42 27.45 27.96 33.65 Mixed complexes Ethene/ethyne -1.28 -1.62 -1.68 -1.57 -1.68 -1.43 -1.67 -1.39 -1.58 -1.51 Benzene/water -2.66 -3.49 -3.68 -3.33 -3.55 -3.29 -3.53 -2.98 -3.35 -3.29 Benzene/ammonia -1.70 -2.49 -2.63 -2.39 -2.58 -2.38 -2.59 -2.21 -2.52 -2.32 Benzene/hydrogen cyanide -3.86 -5.31 -5.38 -4.93 -5.26 -4.89 -5.26 -4.37 -4.92 -4.55 Benzene dimer C 2v -4.57 -3.33 -3.49 * -3.26 -3.47 -3.26 -3.47 † -3.09 -3.46 -2.71 Indole/benzene T-shaped -11.71 -6.55 -6.85 * -7.49 -6.50 -7.91 -6.92 † -6.10 -6.71 -5.62 Phenol dimer -8.05 -8.05 -8.09 * -6.89 -7.57 -7.15 -7.83 † -6.79 -7.36 -7.09 MAE 1.58 0.54 0.67 0.45 0.50 0.48 0.60 0.27 0.40 ME -0.97 -0.54 -0.67 -0.40 -0.50 -0.46 -0.60 0.02 -0.40 RMSD 2.31 0.76 0.86 0.75 0.57 0.90 0.71 0.50 0.69 MA%E 38.01 12.34 17.40 10.43 13.78 11.03 15.28 7.55 10.58 Total MAE 2.52 1.11 1.18 0.83 0.86 0.77 0.75 0.99 0.89 Total ME -1.67 -1.09 -1.18 -0.73 -0.85 -0.60 -0.75 0.22 -0.40 Total RMSD 4.03 1.45 1.49 1.42 1.15 1.37 1.13 1.35 1.25 Total MA%E 52.08 16.95 19.06 16.34 16.00 16.77 15.80 16.59 17.36 Acknowledgements We thank Bastien Mussard for help with the MOLPRO software. We also thank Labex MiChem for providing PhD financial support for C. Kalai. Here, we generalize the uniform coordinate scaling relation, known for the KS correlation functional E c [n] [51,[START_REF] Levy | Density Functional Theory[END_REF]90] and for the complement short-range correlation functional Ēsr,µ c [n] [56,91], to the λdependent complement short-range correlation functional Ēsr,µ,λ c [n]. We first define the universal density functional, for arbitrary parameters µ ≥ 0, λ ≥ 0, and ξ ≥ 0, which is a simple generalization of the universal functional F µ,λ [n] in Eq. ( 7) such that The minimizing wave function in Eq. (A.1) will be denoted by Ψ [n] defined by, for N electrons, where γ > 0 is a scaling factor. 
The wave function Ψ µ/γ,λ/γ,ξ/γ γ [n] yields the scaled density n γ (r) = γ 3 n(γr) and minimizes Ψ| T + ξ Ŵ lr,µ ee where the right-hand side is minimal by definition of Ψ µ/γ,λ/γ,ξ/γ [n]. Therefore, we conclude that and Consequently, the corresponding correlation functional, with the KS single-determinant wave function Φ[n] = Ψ µ=0,λ=0,ξ [n], satisfies the same scaling relation Similarly, the associated short-range complement correlation functional, Applying this relation for ξ = 1 gives the scaling relation for Ēsr,µ,λ from which we see that the high-density limit γ → ∞ is related to the Coulomb limit µ → 0 and the low-density limit γ → 0 is related to the short-range limit µ → ∞ of Ēsr,µ,λ c [n]. Note that by applying Eq. (A.9) with λ = 0 and γ = ξ we obtain the short-range complement correlation functional associated with the interaction ξw lr,µ ee in terms of the short-range complement correlation functional associated with the interaction w lr,µ/ξ ee , i.e. Ēsr,µ,0,ξ as already explained in Ref. 91. Also, by applying Eq. (A.9) with ξ = 1 and γ = λ we obtain the short-range complement correlation functional associated with the interaction w lr,µ ee + λw sr,µ ee in terms of the short-range complement correlation functional associated with the interaction (1/λ)w We first give the limit of Ēsr,µ,λ,ξ c [n] as µ → 0. Starting from Eq. (A.8) and noting that E µ=0,λ,ξ where we have used the well-known relation, 51,[START_REF] Levy | Density Functional Theory[END_REF] [a special case of Eq. (A.7)]. In particular, for ξ = 1 we obtain the limit of Ēsr,µ,λ (A.12) We can now derive the high-density limit of Ēsr,µ,λ c [n] using the scaling relation in Eq. (A.10) and the limit µ → 0 in Eq. (A.11) where we have used [n] [START_REF] Görling | [END_REF] assuming a KS system with a non-degenerate ground state. 3. Short-range limit and low-density limit of Ēsr,µ,λ We first derive the leading term of the asymptotic expansion of Ēsr,µ,λ,ξ c [n] as µ → ∞. Taking the derivative with respect to λ of Eq. (A.6), and using the Hellmann-Feynman theorem which states that the derivative of Ψ µ,λ,ξ [n] does not contribute, we obtain: where Using now the asymptotic expansion of the short-range interaction [START_REF] Toulouse | [END_REF], we obtain the leading term of the asymptotic expansion of Ēsr,µ,λ,ξ where [n](r, r) is the correlation part of the on-top pair density associated with the scaled Coulomb interaction ξw ee (r 12 ). For the special case ξ = 1, we obtain the leading term of the asymptotic expansion of Ēsr,µ,λ where n 2,c [n](r, r) is the correlation part of the on-top pair density associated with the Coulomb interaction. We can now derive the low-density limit of Ēsr,µ,λ c [n] using the scaling relation in Eq. (A.10) and the asymptotic expansion as µ → ∞ in Eq. (A.17 where we have used the strong-interaction limit of the on-top pair density, lim γ→0 n 1/γ 2,c [n](r, r) = -n(r) 2 /2 + m(r) 2 /2 = -2n ↑ (r)n ↓ (r) [54] where m(r) is the spin magnetization and n σ (r) are the spin densities (σ =↑, ↓).
Dynamic Load Balancing with Tokens

Céline Comte (email: [email protected])

Efficiently exploiting the resources of data centers is a complex task that requires efficient and reliable load balancing and resource allocation algorithms. The former are in charge of assigning jobs to servers upon their arrival in the system, while the latter are responsible for sharing server resources between their assigned jobs. These algorithms should take account of various constraints, such as data locality, that restrict the feasible job assignments. In this paper, we propose a token-based mechanism that efficiently balances load between servers without requiring any knowledge of job arrival rates and server capacities. Assuming a balanced fair sharing of the server resources, we show that the resulting dynamic load balancing is insensitive to the job size distribution. Its performance is compared to that obtained under the best static load balancing and in an ideal system that would constantly optimize the resource utilization.

Introduction

The success of cloud services encourages operators to scale out their data centers and optimize the resource utilization. The current trend consists in virtualizing applications instead of running them on dedicated physical resources [START_REF] Barroso | The Datacenter As a Computer: An Introduction to the Design of Warehouse-Scale Machines[END_REF]. Each server may then process several applications in parallel and each application may be distributed among several servers. Better understanding the dynamics of such server pools is a prerequisite for developing load balancing and resource allocation policies that fully exploit this new degree of flexibility.

Some recent works have tackled this problem from the point of view of queueing theory [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF]. Their common feature is the adoption of a bipartite graph that translates practical constraints such as data locality into compatibility relations between jobs and servers. These models apply in various systems such as computer clusters, where the shared resource is the CPU [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF], and content delivery networks, where the shared resource is the server upload bandwidth [START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF]. However, these pool models do not consider simultaneously the impact of complex load balancing and resource allocation policies. The model of [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF] lays emphasis on dynamic load balancing, assuming neither server multitasking nor job parallelism. The bipartite graph describes the initial compatibilities of incoming jobs, each of them being eventually assigned to a single server.

Figure 1: A compatibility graph between types, classes and servers. Two consecutive servers can be pooled to process jobs in parallel. Thus there are two classes, one for servers 1 and 2 and another for servers 2 and 3. Type-1 jobs can be assigned to any class, while type-2 jobs can only be assigned to the latter. This restriction may result from data locality constraints, for instance.
On the other hand, [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] focus on the problem of resource allocation, assuming a static load balancing that assigns incoming jobs to classes at random, independently of the system state. The class of a job in the system identifies the set of servers that can be pooled to process it in parallel. The corresponding bipartite graph, connecting classes to servers, restricts the set of feasible resource allocations.

In this paper, we introduce a tripartite graph that explicitly differentiates the compatibilities of an incoming job from its actual assignment by the load balancer. This new model allows us to study the joint effect of load balancing and resource allocation. A toy example is shown in Figure 1. Each incoming job has a type that defines its compatibilities; these may reflect its parallelization degree or locality constraints, for instance. Depending on the system state, the load balancer matches the job with a compatible class that subsequently determines its assigned servers. The upper part of our graph, which puts constraints on load balancing, corresponds to the bipartite graph of [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF]; the lower part, which restricts the resource allocation, corresponds to the bipartite graph of [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF].

We use this new framework to study load balancing and resource allocation policies that are insensitive, in the sense that they make the system performance independent of fine-grained traffic characteristics. This property is highly desirable as it allows service providers to dimension their infrastructure based on average traffic predictions only. It has been extensively studied in the queueing literature [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF][START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF][START_REF] Bonald | Insensitive load balancing[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF]. In particular, insensitive load balancing policies were introduced in [START_REF] Bonald | Insensitive load balancing[END_REF] in a generic queueing model, assuming an arbitrary insensitive allocation of the resources. These load balancing policies were defined as a generalization of the static load balancing described above, where the assignment probabilities of jobs to classes depend on both the job type and the system state, and are chosen to preserve insensitivity. Our main contribution is an algorithm based on tokens that enforces such an insensitive load balancing without performing randomized assignments.
More precisely, this is a deterministic implementation of an insensitive load balancing that adapts dynamically to the system state, under an arbitrary compatibility graph. The principle is as follows. The assignments are regulated through a bucket containing a fixed number of tokens of each class. An incoming job seizes the longest available token among those that identify a compatible class, and is blocked if it does not find any. The rationale behind this algorithm is to use the release order of tokens as information on the relative load of their servers: a token that has been available for a long time without being seized is likely to identify a server set that is less loaded than others. As we will see, our algorithm mirrors the first-come, first-served (FCFS) service discipline proposed in [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] to implement balanced fairness, which was defined in [START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF] as the most efficient insensitive resource allocation.

The closest existing algorithm we know of is assign longest idle server (ALIS), introduced in reference [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF] cited above. That work focuses on server pools without parallel job processing or server multitasking. Hence, ALIS can be seen as a special case of our algorithm where each class identifies a server with a single token. The algorithm we propose is also related to the blocking version of Join-Idle-Queue [START_REF] Lu | Join-Idle-Queue: A novel load balancing algorithm for dynamically scalable web services[END_REF] studied in [START_REF] Van Der Boor | Load balancing in large-scale systems with multiple dispatchers[END_REF]. More precisely, we could easily generalize our algorithm to server pools with several load balancers, each with its own bucket. The corresponding queueing model, still tractable using known results on networks of quasi-reversible queues [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF], extends that of [START_REF] Van Der Boor | Load balancing in large-scale systems with multiple dispatchers[END_REF].

Organization of the paper

Section 2 recalls known facts about resource allocation in server pools. We describe a standard pool model based on a bipartite compatibility graph and explain how to apply balanced fairness in this model. Section 3 contains our main contributions. We describe our pool model based on a tripartite graph and introduce a new token-based insensitive load balancing mechanism. Numerical results are presented in Section 4.

2 Resource allocation

We first recall the model considered in [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] to study the problem of resource allocation in server pools. This model will be extended in Section 3 to integrate dynamic load balancing.

2.1 Model

We consider a pool of S servers. There are N job classes and we let I = {1, . . . , N} denote the set of class indices. For now, each incoming job is assigned to a compatible class at random, independently of the system state.
For each i ∈ I, the resulting arrival process of jobs assigned to class i is assumed to be Poisson with a rate λ_i > 0 that may depend on the job arrival rates, compatibilities and assignment probabilities. The number of jobs of class i in the system is limited by ℓ_i, for each i ∈ I, so that a new job is blocked if its assigned class is already full. Job sizes are independent and exponentially distributed with unit mean. Each job leaves the system immediately after service completion.

The class of a job defines the set of servers that can be pooled to process it. Specifically, for each i ∈ I, a job of class i can be served in parallel by any subset of servers within the non-empty set S_i ⊂ {1, . . . , S}. This defines a bipartite compatibility graph between classes and servers, where there is an edge between a class and a server if the jobs of this class can be processed by this server. Figure 2 shows a toy example.

Figure 2: A compatibility graph between classes and servers. Servers 1 and 3 are dedicated, while server 2 can serve both classes. The server sets associated with classes 1 and 2 are S_1 = {1, 2} and S_2 = {2, 3}, respectively.

When a job is in service on several servers, its service rate is the sum of the rates allocated by each server to this job. For each s = 1, . . . , S, the capacity of server s is denoted by µ_s > 0. We can then define a set function µ on the power set of I as follows: for each A ⊂ I,

µ(A) = ∑_{s ∈ ∪_{i∈A} S_i} µ_s

denotes the aggregate capacity of the servers that can process at least one class in A, i.e., the maximum rate at which jobs of these classes can be served. µ is a submodular, non-decreasing set function [START_REF] Fujishige | Submodular Functions and Optimization[END_REF]. It is said to be normalized because µ(∅) = 0.
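To make the set function µ concrete, the following short Python sketch (ours, not part of the original paper) computes µ(A) from the server capacities and the class-to-server sets of the toy example of Figure 2; the variable names and capacity values are placeholders.

    # Illustrative sketch: aggregate capacity mu(A) for the example of Figure 2.
    mu_server = {1: 1.0, 2: 1.0, 3: 1.0}      # server capacities mu_s (placeholder values)
    S = {1: {1, 2}, 2: {2, 3}}                # server set S_i of each class i

    def mu(A):
        """Aggregate capacity of the servers compatible with at least one class in A."""
        servers = set().union(*(S[i] for i in A)) if A else set()
        return sum(mu_server[s] for s in servers)

    # With unit capacities: mu({1}) = 2.0, mu({2}) = 2.0, mu({1, 2}) = 3.0.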
Specifically, the vector (φ i (x) : i ∈ I) of per-class service rates belongs to the following capacity set in any state x ∈ X : Σ = φ ∈ R N + : i∈A φ i ≤ µ(A), ∀A ⊂ I . As observed in [START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF], the properties satisfied by µ guarantee that Σ is a polymatroid [START_REF] Fujishige | Submodular Functions and Optimization[END_REF]. Balance function It was shown in [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF] that the resource allocation is insensitive if and only if there is a balance function Φ defined on X such that Φ(0) = 1 and φ i (x) = Φ(x -e i ) Φ(x) , ∀x ∈ X , ∀i ∈ I(x), (1) where e i is the N -dimensional vector with 1 in component i and 0 elsewhere and I(x) = {i ∈ I : x i > 0} is the set of active classes in state x. Under this condition, the network of PS queues defined above is a Whittle network [START_REF] Serfozo | Introduction to Stochastic Networks[END_REF]. The insensitive resource allocations that respect the capacity constraints of the system are characterized by a balance function Φ such that, for all x ∈ X \ {0}, Φ(x) ≥ 1 µ(A) i∈A Φ(x -e i ), ∀A ⊂ I(x), A = ∅. Recursively maximizing the overall service rate in the system is then equivalent to minimizing Φ by choosing Φ(x) = max A⊂I(x), A =∅ 1 µ(A) i∈A Φ(x -e i ) , ∀x ∈ X \ {0}. The resource allocation defined by this balance function is called balanced fairness. It was shown in [START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF] that balanced fairness is Pareto-efficient in polymatroid capacity sets, meaning that the total service rate i∈I(x) φ i (x) is always equal to the aggregate capacity µ(I(x)) of the servers that can process at least one active class. By [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF], this is equivalent to Φ(x) = 1 µ(I(x)) i∈I(x) Φ(x -e i ), ∀x ∈ X \ {0}. (2) Stationary distribution The Markov process defined by the system state x is reversible, with stationary distribution π(x) = π(0)Φ(x) i∈I λ i x i , ∀x ∈ X . (3) By insensitivity, the system state has the same stationary distribution if the jobs sizes within each class are only i.i.d., as long as the traffic intensity of class i (defined as the average quantity of work brought by jobs of this class per unit of time) is λ i , for each i ∈ I. A proof of this result is given in [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF] for Cox distributions, which form a dense subset within the set of distributions of nonnegative random variables. Job scheduling We now describe the sequential implementation of balanced fairness that was proposed in [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF]. This will lay the foundations for the results of Section 3. We still assume that a job can be distributed among several servers, but we relax the assumption that servers can process several jobs at the same time. Instead, each server processes its jobs sequentially in FCFS order. When a job arrives, it enters in service on every idle server within its assignment, if any, so that its service rate is the sum of the capacities of these servers. When the service of a job is complete, it leaves the system immediately and its servers are reallocated to the first job they can serve in the queue. 
Note that this sequential implementation also makes sense in a model where jobs are replicated over several servers instead of being processed in parallel. For more details, we refer the reader to [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF] where the model with redundant requests was introduced. Since the arrival order of jobs impacts the rate allocation, we need to detail the system state. We consider the sequence c = (c 1 , . . . , c n ) ∈ I * , where n is the number of jobs in the system and c p is the class of the p-th oldest job, for each p = 1, . . . , n. ∅ denotes the empty state, with n = 0. The vector of numbers of jobs of each class in the system, corresponding to the state introduced in §2.2, is denoted by |c| = (|c| i : i ∈ I) ∈ X . It does not define a Markov process in general. We let I(c) = I(|c|) denote the set of active classes in state c. The state space of this detailed system state is C = {c ∈ I * : |c| ≤ }. Queueing model Each job is in service on all the servers that were assigned this job but not those that arrived earlier. For each p = 1, . . . , n, the service rate of the job in position p is thus given by s∈Sc p \ p-1 q=1 Sc q µ s = µ(I(c 1 , . . . , c p )) -µ(I(c 1 , . . . , c p-1 )), with the convention that (c 1 , . . . , c p-1 ) = ∅ if p = 1. The service rate of a job is independent of the jobs arrived later in the system. Additionally, the total service rate µ(I(c)) is independent of the arrival order of jobs. The corresponding queueing model is an order-independent (OI) queue [START_REF] Berezner | Order independent loss queues[END_REF][START_REF] Krzesinski | Order independent queues[END_REF]. An example is shown in Figure 4 for the configuration of Figure 2. 2 1 2 1 1 c = (1, 1, 2, 1, 2) µ(I(c)) λ 1 λ 2 Stationary distribution The Markov process defined by the system state c is irreducible. The results of [START_REF] Krzesinski | Order independent queues[END_REF] show that this process is quasi-reversible, with stationary distribution π(c) = π(∅)Φ(c) i∈I λ i |c| i , ∀c ∈ C, (4) where Φ is defined recursively on C by Φ(∅) = 1 and Φ(c) = 1 µ(I(c)) Φ(c 1 , . . . , c n-1 ), ∀c ∈ C \ {∅}. (5) We now go back to the aggregate state x giving the number of jobs of each class in the system. With a slight abuse of notation, we let π(x) = c:|c|=x π(c) and Φ(x) = c:|c|=x Φ(c), ∀x ∈ X . As observed in [START_REF] Krzesinski | Order independent queues[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF], if follows from (4) that π(x) = π(∅)   c:|c|=x Φ(c)   i∈I λ i x i = π(0)Φ(x) i∈I λ i x i in any state x. Using (5), we can show that Φ satisfies (2) with the initial condition Φ(0) = Φ(∅) = 1. Hence, the stationary distribution of the aggregate system state x is exactly that obtained in §2.2 under balanced fairness. It was also shown in [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] that the average per-class resource allocation resulting from FCFS service discipline is balanced fairness. In other words, we have φ i (x) = c:|c|=x π(c) π(x) µ i (c), ∀x ∈ X , ∀i ∈ I(x), where φ i (x) is the total service rate allocated to class-i jobs in state x under balanced fairness, given by ( 1), and µ i (c) denotes the service rate received by the first job of class i in state c under FCFS service discipline: µ i (c) = n p=1 cp=i (µ(I(c 1 , . . . , c p )) -µ(I(c 1 , . . . , c p-1 ))). 
Observe that, by ( 3) and ( 4), the rate equality simplifies to φ i (x) = c:|c|=x Φ(c) Φ(x) µ i (c), ∀x ∈ X , ∀i ∈ I(x). (6) We will use this last equality later. As it is, the FCFS service discipline is very sensitive to the job size distribution. [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] mitigates this sensitivity by frequently interrupting jobs and moving them to the end of the queue, in the same way as round-robin scheduling algorithm in the single-server case. In the queueing model, these interruptions and resumptions are represented approximately by random routing, which leaves the stationary distribution unchanged by quasi-reversibility [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF][START_REF] Serfozo | Introduction to Stochastic Networks[END_REF]. If the interruptions are frequent enough, then all jobs of a class tend to receive the same service rate on average, which is that obtained under balanced fairness. In particular, performance becomes approximately insensitive to the job size distribution within each class. Load balancing The previous section has considered the problem of resource sharing. We now focus on dynamic load balancing, using the fact that each job may be a priori compatible with several classes and assigned to one of them upon arrival. We first extend the model of §2.1 to add this new degree of flexibility. Model We again consider a pool of S servers. There are N job classes and we let I = {1, . . . , N } denote the set of class indices. The compatibilities between job classes and servers are described by a bipartite graph, as explained in §2.1. Additionally, we assume that the arrivals are divided into K types, so that the jobs of each type enter the system according to an independent Poisson process. Job sizes are independent and exponentially distributed with unit mean. Each job leaves the system immediately after service completion. The type of a job defines the set of classes it can be assigned to. This assignment is performed instantaneously upon the job arrival, according to some decision rule that will be detailed later. For each i ∈ I, we let K i ⊂ {1, . . . , K} denote the non-empty set of job types that can be assigned to class i. This defines a bipartite compatibility graph between types and classes, where there is an edge between a type and a class if the jobs of this type can be assigned to this class. Overall, the compatibilities are described by a tripartite graph between types, classes, and servers. Figure 1 shows a toy example. For each k = 1, . . . , K, the arrival rate of type-k jobs in the system is denoted by ν k > 0. We can then define a function ν on the power set of I as follows: for each A ⊂ I, ν(A) = k∈ i∈A K i ν k denotes the aggregate arrival rate of the types that can be assigned to at least one class in A. ν satisfies the submodularity, monotonicity and normalization properties satisfied by the function µ of §2.1. Randomized load balancing We now express the insensitive load balancing of [START_REF] Bonald | Insensitive load balancing[END_REF] in our new server pool model. This extends the static load balancing considered earlier. Incoming jobs are assigned to classes at random, and the assignment probabilities depend not only on the job type but also on the system state. As in §2.2, we assume that the capacity of each server can be divided continuously between its jobs. 
The resources are allocated by applying balanced fairness in the capacity set defined by the bipartite compatibility graph between job classes and servers. Open queueing model We first recall the queueing model considered in [START_REF] Bonald | Insensitive load balancing[END_REF] to describe the randomized load balancing. As in §2.2, jobs are gathered by class in PS queues with statedependent service capacities given by (1). Hence, the type of a job is forgotten once it is assigned to a class. Similarly, we record the job arrivals depending on the class they are assigned to, regardless of their type before the assignment. The Poisson arrival assumption ensures that, given the system state, the time before the next arrival at each class is exponentially distributed and independent of the arrivals at other classes. The rates of these arrivals result from the load balancing. We write them as functions of the vector y = -x of numbers of available positions at each class. Specifically, λ i (y) denotes the arrival rate of jobs assigned to class i when there are y j available positions in class j, for each j ∈ I. λ 1 (y) φ 1 (x) x 1 = 3 φ 2 (x) x 2 = 2 λ 1 ( -x) λ 2 ( -x) y 1 = 1 λ 2 (y) y 2 = 2 φ 1 (x) x 1 = 3 φ 2 (x) x 2 = 2 Class-1 tokens Class-2 tokens (b) A closed queueing system consisting of two Whittle networks. The system can thus be modeled by a network of N PS queues with state-dependent arrival rates, as shown in Figure 5a. Closed queueing model We introduce a second queueing model that describes the system dynamics differently. It will later simplify the study of the insensitive load balancing by drawing a parallel with the resource allocation of §2.2. Our alternative model stems from the following observation: since we impose limits on the number of jobs of each class, we can indifferently assume that the arrivals are limited by the intermediary of buckets containing tokens. Specifically, for each i ∈ I, the assignments to class i are controlled through a bucket filled with i tokens. A job that is assigned to class i removes a token from this bucket and holds it until its service is complete. The assignments to a class are suspended when the bucket of this class is empty, and they are resumed when a token of this class is released. Each token is either held by a job in service or waiting to be seized by an incoming job. We consider a closed queueing model that reflects this alternation: a first network of N queues contains tokens held by jobs in service, as before, and a second network of N queues contains available tokens. For each i ∈ I, a token of class i alternates between the queues indexed by i in the two networks. This is illustrated in Figure 5b. The state of the network containing tokens held by jobs in service is x. The queues in this network apply PS service discipline and their service capacities are given by [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF]. The state of the network containing available tokens is y = -x. For each i ∈ I, the service of a token at queue i in this network is triggered by the arrival of a job assigned to class i. The service capacity of this queue is thus equal to λ i (y) in state y. Since all tokens of the same class are exchangeable, we can assume indifferently that we pick one of them at random, so that the service discipline of the queue is PS. Capacity set The compatibilities between job types and classes restrict the set of feasible load balancings. 
Specifically, the vector (λ i (y) : i ∈ I) of per-class arrival rates belongs to the following capacity set in any state y ∈ X : Γ = λ ∈ R N + : i∈A λ i ≤ ν(A), ∀A ⊂ I . The properties satisfied by ν guarantee that Γ is a polymatroid. Balance function Our token-based reformulation allows us to interpret dynamic load balancing as a problem of resource allocation in the network of queues containing available tokens. This will allow us to apply the results of §2.2. It was shown in [START_REF] Bonald | Insensitive load balancing[END_REF] that the load balancing is insensitive if and only if there is a balance function Λ defined on X such that Λ(0) = 1, and λ i (y) = Λ(y -e i ) Λ(y) , ∀y ∈ X , ∀i ∈ I(y). (7) Under this condition, the network of PS queues containing available tokens is a Whittle network. The Pareto-efficiency of balanced fairness in polymatroid capacity sets can be understood as follows in terms of load balancing. We consider the balance function Λ defined recursively on X by Λ(0) = 1 and Λ(y) = 1 ν(I(y)) i∈I(y) Λ(y -e i ), ∀y ∈ X \ {0}. (8) Then Λ defines a load balancing that belongs to the capacity set Γ in each state y. By [START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF], this load balancing satisfies i∈I(y) λ i (y) = ν(I(y)), ∀y ∈ X , meaning that an incoming job is accepted whenever it is compatible with at least one available token. Stationary distribution The Markov process defined by the system state x is reversible, with stationary distribution π(x) = 1 G Φ(x)Λ( -x), ∀x ∈ X , (9) where G is a normalization constant. Note that we could symmetrically give the stationary distribution of the Markov process defined by the vector y = -x of numbers of available tokens. As mentioned earlier, the insensitivity of balanced fairness is preserved by the load balancing. Deterministic token mechanism Our closed queueing model reveals that the randomized load balancing is dual to the balanced fair resource allocation. This allows us to propose a new deterministic load balancing algorithm that mirrors the FCFS service discipline of §2.3. This algorithm can be combined indifferently with balanced fairness or with the sequential FCFS scheduling; in both cases, we show that it implements the load balancing defined by [START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF]. All available tokens are now sorted in order of release in a single bucket. The longest available tokens are in front. An incoming job scans the bucket from beginning to end and seizes the first compatible token; it is blocked if it does not find any. For now, we assume that the server resources are allocated to the accepted jobs by applying the FCFS service discipline of §2.3. When the service of a job is complete, its token is released and added to the end of the bucket. We describe the system state with a couple (c, t) retaining both the arrival order of jobs and the release order of tokens. Specifically, c = (c 1 , . . . , c n ) ∈ C is the sequence of classes of (tokens held by) jobs in service, as before, and t = (t 1 , . . . , t m ) ∈ C is the sequence of classes of available tokens, ordered by release, so that t 1 is the class of the longest available token. Given the total number of tokens of each class in the system, any feasible state satisfies |c| + |t| = . Queueing model Depending on its position in the bucket, each available token is seized by any incoming job whose type is compatible with this token but not with the tokens released earlier. 
For each p = 1, . . . , m, the token in position p is thus seized at rate k∈Kt p \ p-1 q=1 Kt q ν k = ν(I(t 1 , . . . , t p )) -ν(I(t 1 , . . . , t p-1 )). The seizing rate of a token is independent of the tokens released later. Additionally, the total rate at which available tokens are seized is ν(I(y)), independently of their release order. The bucket can thus be modeled by an OI queue, where the service of a token is triggered by the arrival of a job that seizes this token. The evolution of the sequence of tokens held by jobs in service also defines an OI queue, with the same dynamics as in §2.3. Overall, the system can be modeled by a closed tandem network of two OI queues, as shown in Figure 6. 2 1 2 1 1 c = (1, 1, 2, 1, 2) µ(I(c)) 1 2 2 t = (1, 2, 2) ν(I(t)) Figure 6: A closed tandem network of two OI queues associated with the server pool of Figure 1. At most 1 = 2 = 4 jobs can be assigned to each class. The state is (c, t), with c = (1, 1, 2, 1, 2) and t = (1, 2, 2). The corresponding aggregate state is that of the network of Figure 5. An incoming job of type 1 would seize the available token in first position (of class 1), while an incoming job of type 2 would seize the available token in second position (of class 2). Stationary distribution Assuming S i = S j or K i = K j for each pair {i, j} ⊂ I of classes, the Markov process defined by the detailed state (c, t) is irreducible. The proof is provided in the appendix. Known results on networks of quasi-reversible queues [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF] then show that this process is quasi-reversible, with stationary distribution π(c, t) = 1 G Φ(c)Λ(t), ∀c, t ∈ C : |c| + |t| = , where Φ is defined by the recursion (5) and the initial step Φ(∅) = 1, as in §2.3; similarly, Λ is defined recursively on C by Λ(∅) = 1 and Λ(t) = 1 ν(I(t)) Λ(t 1 , . . . , t m-1 ), ∀t ∈ C \ {∅}. We go back to the aggregate state x giving the number of tokens of each class held by jobs in service. With a slight abuse of notation, we define its stationary distribution by π(x) = c:|c|=x t:|t|= -x π(c, t), ∀x ∈ X . (10) As in §2.3, we can show that we have π(x) = 1 G Φ(x)Λ( -x), ∀x ∈ X , where the functions Φ and Λ are defined on X by Φ(x) = c:|c|=x Φ(c) and Λ(y) = t:|t|=y Λ(t), ∀x, y ∈ X , respectively. These functions Φ and Λ satisfy the recursions ( 2) and ( 8), respectively, with the initial conditions Φ(0) = Λ(0) = 1. Hence, the aggregate stationary distribution of the system state x is exactly that obtained in §3.2 by combining the randomized load balancing with balanced fairness. Also, using the definition of Λ, we can rewrite (6) as follows: for each x ∈ X and i ∈ I(x), φ i (x) = c:|c|=x 1 G Φ(c) t:|t|= -x Λ(t) 1 G Φ(x)Λ( -x) µ i (c), = c:|c|=x t:|t|= -x π(c, t) π(x) µ i (c). Hence, the average per-class service rates are still as defined by balanced fairness. By symmetry, it follows that the average per-class arrival rates, ignoring the release order of tokens, are as defined by the randomized load balancing. Specifically, for each y ∈ X and i ∈ I(y), we have λ i (y) = c:|c|= -y t:|t|=y π(c, t) π( -y) ν i (t), where λ i (y) is the arrival rate of jobs assigned to class i in state y under the randomized load balancing, given by ( 7), and ν i (t) denotes the rate at which the first available token of class i is seized under the deterministic load balancing: ν i (t) = m p=1 tp=i (ν(I(t 1 , . . . , t p )) -ν(I(t 1 , . . . , t p-1 ))). 
As in §2.3, the stationary distribution of the system state is unchanged by the addition of random routing, as long as the average traffic intensity of each class remains constant. Hence we can again reach some approximate insensitivity to the job size distribution within each class by enforcing frequent job interruptions and resumptions. Application with balanced fairness As announced earlier, we can also combine our tokenbased load balancing algorithm with balanced fairness. The assignment of jobs to classes is still regulated by a single bucket containing available tokens, sorted in release order, but the resources are now allocated according to balanced fairness. The corresponding queueing model consists of an OI queue and a Whittle network, as represented in Figure 7. The intermediary state (x, t), retaining the release order of available tokens but not the arrival order of jobs, defines a Markov process. Its stationary distribution follows from known results on networks of quasi-reversible queues [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF]: 2 2 1 t = (1, 2, 2) ν(I(t)) φ 1 (x) x 1 = 3 φ 2 (x) x 2 = 2 Class-1 tokens Class-2 tokens π(x, t) = 1 G Φ(x)Λ(t), ∀x ∈ X , ∀t ∈ C : x + |t| = . We can show as before that the average per-class arrival rates, ignoring the release order of tokens, are as defined by the dynamic load balancing of §3.2. The insensitivity of balanced fairness to the job size distribution within each class is again preserved. The proof of [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF] for Cox distributions extends directly. Note that this does no imply that performance is insensitive to the job size distribution within each type. Indeed, if two job types with different size distributions can be assigned to the same class, then the distribution of the job sizes within this class may be correlated to the system state upon their arrival. This point will be assessed by simulation in Section 4. Observe that our token-based mechanism can be applied to balance the load between the queues of an arbitrary Whittle network, as represented in Figure 7, independently of the system considered. Examples or such systems are given in [START_REF] Bonald | Insensitive load balancing[END_REF]. Numerical results We finally consider two examples that give insights on the performance of our token-based algorithm. We especially make a comparison with the static load balancing of Section 2 and assess the insensitivity to the job size distribution within each type. We refer the reader to [START_REF] Jonckheere | Asymptotics of insensitive load balancing and blocking phases[END_REF] for a large-scale analysis in homogeneous pools with a single job type, along with a comparison with other (non-insensitive) standard policies. Performance metrics for Poisson arrival processes and exponentially distributed sizes with unit mean follow from [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF]. By insensitivity, these also give the performance when job sizes within each class are i.i.d., as long as the traffic intensity is unchanged. We resort to simulations to evaluate performance when the job size distribution is type-dependent. Performance is measured by the job blocking probability and the resource occupancy. For each k = 1, . . . , K, we let β k = 1 G x≤ : x i = i , ∀i∈I:k∈K i Φ(x)Λ( -x) denote the probability that a job of type k is blocked upon arrival. 
The equality follows from PASTA property [START_REF] Serfozo | Introduction to Stochastic Networks[END_REF]. Symmetrically, for each s = 1, . . . , S, we let ψ s = 1 G x≤ : x i =0, ∀i∈I:s∈S i Φ(x)Λ( -x) denote the probability that server s is idle. These quantities are related by the conservation equation K k=1 ν k (1 -β k ) = S s=1 µ s (1 -ψ s ). ( 11 ) We define respectively the average blocking probability and the average resource occupancy by β = K k=1 ν k β k K k=1 ν k and η = S s=1 µ s (1 -ψ s ) S s=1 µ s . There is a simple relation between β and η. Indeed, if we let ρ = ( K k=1 ν k )/( S s=1 µ s ) denote the total load in the system, then we can rewrite [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF] as ρ(1 -β) = η. As expected, minimizing the average blocking probability is equivalent to maximizing the average resource occupancy. It is however convenient to look at both metrics in parallel. As we will see, when the system is underloaded, jobs are almost never blocked and it is easier to describe the (almost linear) evolution of the resource occupancy. On the contrary, when the system is overloaded, resources tend to be maximally occupied and it is more interesting to focus on the blocking probability. Observe that any stable server pool satisfies the conservation equation [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF]. In particular, the average blocking probability β in a stable system cannot be less than 1 -1 ρ when ρ > 1. A similar argument applied to each job type imposes that β k ≥ max   0, 1 - 1 ν k s∈ i:k∈K i S i µ s    , (12) for each k = 1, . . . , K. A single job type We first consider a pool of S = 10 servers with a single type of jobs (K = 1), as shown in Figure 8. Each class identifies a unique server and each job can be assigned to any class. Half of the servers have a unit capacity µ and the other half have capacity 4µ. Each server has = 6 tokens and applies PS policy to its jobs. We do not look at the insensitivity to the job size distribution in this case, as there is a single job type. Servers with capacity µ Servers with capacity 4µ ν Figure 8: A server pool with a single job type. Classes are omitted because each of them corresponds to a single server. Comparison We compare the performance of our algorithm with that of the static load balancing of Section 2, where each job is assigned to a server at random, independently of system state, and blocked if its assigned server is already full. We consider two variants, best static and uniform static, where the assignment probabilities are proportional to the server capacities and uniform, respectively. Ideal refers to the lowest average blocking probability that complies with the system stability. According to [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF], it is 0 when ρ ≤ 1 and 1 -1 ρ when ρ > 1. One can think of it as the performance in an ideal server pool where resources would be constantly optimally utilized. The results are shown in Figure 9. The performance gain of our algorithm compared to the static policies is maximal near the critical load ρ = 1, which is also the area where the delta with ideal is maximal. Elsewhere, all load balancing policies have a comparable performance. Our intuition is as follows: when the system is underloaded, servers are often available and the blocking probability is low anyway; when the system is overloaded, resources are congested and the blocking probability is high whichever scheme is utilized. 
Observe that the performance under uniform static deteriorates faster, even when ρ < 1, because the servers with the lowest capacity, concentrating half of the arrivals with only 1/5 of the service capacity, are congested whenever ρ > 2/5. This stresses the need for accurate rate estimations under a static load balancing.

Asymptotics when the number of tokens increases

We now focus on the impact of the number of tokens on the performance of the dynamic load balancing. A direct calculation shows that the average blocking probability decreases with the number of tokens per server, and tends to ideal as ℓ → +∞. Intuitively, having many tokens gives long-run feedback on the server loads without blocking arrivals more than necessary (to preserve stability). The results are shown in Figure 10. We observe that the convergence to the asymptotic ideal is quite fast. The largest gain is obtained with small values of ℓ and the performance is already close to optimal with ℓ = 10 tokens per server. Hence, we can reach a low blocking probability even when the number of tokens is limited, for instance to guarantee a minimum service rate per job or to respect multitasking constraints on the servers.

Several job types

We now consider a pool of S = 6 servers, all with the same unit capacity µ, as shown in Figure 11. As before, there is no parallel processing. Each class identifies a unique server that applies PS policy to its jobs and has ℓ = 6 tokens. There are two job types with different arrival rates and compatibilities. Type-1 jobs have a unit arrival rate ν and can be assigned to any of the first four servers. Type-2 jobs arrive at rate 4ν and can be assigned to any of the last four servers. Thus only two servers can be accessed by both types. Note that heterogeneity now lies in the job arrival rates and not in the server capacities.

Figure 11: A server pool with two job types.

Comparison

We again consider two variants of the static load balancing: best static, in which the assignment probabilities are chosen so as to homogenize the arrival rates at the servers as far as possible, and uniform static, in which the assignment probabilities are uniform. Note that best static assumes that the arrival rates of the job types are known, while uniform static does not. As before, ideal refers to the lowest average blocking probability that complies with the system stability. The results are shown in Figure 12. Regardless of the policy, the slope of the resource occupancy breaks down near the critical load ρ = 5/6. The reason is that the last four servers support at least 4/5 of the arrivals with only 2/3 of the service capacity, so that their effective load is (6/5)ρ. It follows from (12) that the average blocking probability in a stable system cannot be less than (4/5)(1 - (5/6)(1/ρ)) when ρ ≥ 5/6. Under ideal, the slope of the resource occupancy breaks down again at ρ = 5/3. This is the point where the first two servers cannot support the load of type-1 jobs by themselves anymore. Otherwise, most of the observations of §4.1 are still valid. The performance gain of the dynamic load balancing compared to best static is maximal near the first critical load ρ = 5/6. Its delta with ideal is maximal near ρ = 5/6 and ρ = 5/3. Elsewhere, all schemes have a similar performance, except for uniform static, which deteriorates faster. Overall, these numerical results show that our dynamic load balancing algorithm often outperforms best static and is close to ideal. The configurations (not shown here) where it was not the case involved very small pools, with job arrival rates and compatibilities opposite to the server capacities. Our intuition is that our algorithm performs better when the pool size or the number of tokens allows for some diversity in the assignments.
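The critical loads just discussed can be recovered from the per-type bound (12) applied to this pool. The sketch below does this under the reading of Figure 11 given above (each type can reach exactly four of the six unit-capacity servers); the function names are ours, not the paper's.

```python
# Per-type lower bound (12) applied to the pool of Figure 11 (our sketch).
# Type 1 arrives at rate nu and can reach 4 of the 6 unit-capacity servers;
# type 2 arrives at rate 4*nu and can also reach 4 servers.

def lower_bound(nu_k, reachable_capacity):
    # bound (12): a type cannot be served faster than the total capacity
    # of the servers it can reach
    return max(0.0, 1.0 - reachable_capacity / nu_k)

def per_type_bounds(rho, mu=1.0):
    nu = rho * 6 * mu / 5                 # total arrival rate is nu + 4*nu = 5*nu
    return {
        "type 1": lower_bound(nu, 4 * mu),
        "type 2": lower_bound(4 * nu, 4 * mu),
        "average, from (11)": max(0.0, 1.0 - 1.0 / rho),
    }

# per_type_bounds(5/6)["type 2"] is 0: up to rho = 5/6 the last four servers can
# still absorb the type-2 arrivals. Beyond that, beta_2 >= 1 - (5/6)/rho, hence
# an average blocking of at least (4/5) * (1 - (5/6)/rho) even if type-1 jobs
# are never rejected; type-1 blocking is not forced before rho = 10/3.
```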
(In)sensitivity

We finally evaluate the sensitivity of our algorithm to the job size distribution within each type. Figure 13 shows the results. Lines give the performance when job sizes are exponentially distributed with unit mean, as before. Marks, obtained by simulation, give the performance when the job size distribution within each type is hyperexponential: one third of type-1 jobs have an exponentially distributed size with mean 2 and the other two thirds have an exponentially distributed size with mean 1/2; similarly, one sixth of type-2 jobs have an exponentially distributed size with mean 5 and the other five sixths have an exponentially distributed size with mean 1/5. The similarity of the exact and simulation results suggests that insensitivity is preserved even when the job size distribution is type-dependent. Further evaluations, involving other job size distributions, would be necessary to conclude. Also observe that the blocking probability of type-1 jobs increases near the load ρ = 5/3, which is half the upper bound ρ = 10/3 given by (12). This suggests that the dynamic load balancing compensates for the overload of type-2 jobs by rejecting more jobs of type 1.

Conclusion

We have introduced a new server pool model that explicitly distinguishes the compatibilities of a job from its actual assignment by the load balancer. Expressing the results of [START_REF] Bonald | Insensitive load balancing[END_REF] in this new model has allowed us to see the problem of load balancing in a new light. We have derived a deterministic, token-based implementation of a dynamic load balancing that preserves the insensitivity of balanced fairness to the job size distribution within each class. Numerical results have assessed the performance of this algorithm. In future work, we would like to evaluate the performance of our algorithm in broader classes of server pools. We are also interested in proving its insensitivity to the job size distribution within each type.

Figure 14: A technically interesting toy configuration. We have K_2 = K_3 and S_3 ⊂ S_2, so that class-2 tokens can overtake class-3 tokens in the queue of tokens held by jobs in service but not in the queue of available tokens. On the other hand, K_1 ⊂ K_2 and S_1 = S_2, so that class-2 tokens can overtake class-1 tokens in the queue of available tokens but not in the queue of tokens held by jobs in service. In none of the queues can class-2 tokens overtake tokens of classes 1 and 3 at once.

It is tempting to consider more sophisticated transitions, for instance one where a token overtakes several other tokens at once. Unfortunately, our assumptions do not guarantee that such transitions can occur with a nonzero probability. An example is shown in Figure 14. The two operations circular shift and overtaking will prove to be sufficient. We first combine them to show the following intermediary result:
• From any feasible state, we can reach the state where all class-N tokens are gathered at some selected position in one of the two queues while the position of the other tokens is unchanged.
We finally prove the irreducibility result by induction on the number N of classes. As announced, the proof is constructive: it gives a series of transitions leading from any state to any other state. The induction step can be decomposed in two parts:
• By repeatedly moving class-N tokens to a position where they do not prevent other tokens from overtaking each other, we can order the tokens of classes 1 to N - 1 as if class-N tokens were absent. The induction assumption ensures that we can perform this reordering.
• Once the tokens of classes 1 to N - 1 are well ordered, class-N tokens can be positioned among them.
We now detail the steps of the proof one after the other.

Circular shift

Because of the positive service rate assumption, a token at the head of either of the two queues has a nonzero probability of completing service and moving to the end of the other queue. We refer to such a transition as a circular shift. Now let (c, t) ∈ S and (c′, t′) ∈ S, with c = (c_1, . . . , c_n), t = (t_1, . . . , t_m), c′ = (c′_1, . . . , c′_n′) and t′ = (t′_1, . . . , t′_m′). Assume that the sequence (c′_1, . . . , c′_n′, t′_1, . . . , t′_m′) is a circular shift of the sequence (c_1, . . . , c_n, t_1, . . . , t_m). Then we can reach state (c′, t′) from state (c, t) by applying as many circular shifts as necessary. An example is shown in Figure 15 for the configuration of Figure 14. All states that are circular shifts of each other can therefore communicate.

Overtaking

We say that a token in second position of one of the two queues overtakes its predecessor if it completes service first. Such a transition allows us to exchange the positions of these two tokens, thereby escaping circular shifts to access other states. Can such a transition occur with a nonzero probability? It depends on the classes of the tokens in second and first positions, denoted by i and j respectively. The token in second position can overtake its predecessor if it receives a nonzero service rate. In the queue of tokens held by jobs in service, this means that there is at least one server that can process class-i jobs but not class-j jobs, that is S_i ⊄ S_j. In the queue of available tokens, this means that there is at least one job type that can seize class-i tokens but not class-j tokens, that is K_i ⊄ K_j. Since states that are circular shifts of each other can communicate, the queue where the overtaking actually occurs does not matter. The separability assumption ensures that, for each pair of classes, the tokens of at least one of the two classes can overtake the tokens of the other class, in at least one of the two queues. We now show a stronger result: by reindexing classes if necessary, we can work on the assumption that class-i tokens can overtake the tokens of classes 1 to i - 1 in at least one of the two queues (possibly not the same), for each i = 2, . . . , N.

We first use the inclusion relation on the power set of {1, . . . , K} to order the type sets K_i for i ∈ I. Specifically, we consider a topological ordering of these sets induced by their Hasse diagram, so that a given type set is not a subset of any type set with a lower index. An example is shown in Figure 16a. The tokens of a class with a given type set can thus overtake (in the first queue) the tokens of all classes with a lower type set index. Only classes with the same type set are not dissociated. Symmetrically, we use the inclusion relation on the power set of {1, . . . , S} to order the server sets S_i for i ∈ I.
We consider a topological ordering of these sets induced by their Hasse diagram, so that a given server set is not a subset of any server set with a lower index, as illustrated in Figure 16b. The tokens of a class with a given server set can thus overtake (in the second queue) the tokens of all classes with a lower server set index. Thanks to the separability assumption, if two classes are not dissociated by their type sets, then they are dissociated by their server sets. This allows us to define a permutation of the classes as follows: first, we order classes by increasing type set order, and then, we order the classes that have the same type set by increasing server set order. The separability assumption ensures that all classes are eventually sorted. The tokens of a given class can overtake the tokens of all classes with a lower index, either in the queue of available tokens or in the queue of tokens held by jobs in service (or both). Moving class-N tokens Using the two operations circular shift and overtaking, we show that, from any given state, we can reach the state where all class-N tokens are gathered at some selected position in one of the two queues, while the position of the other tokens is unchanged. We proceed by moving class-N tokens one after the other, starting with the token that is closest to the destination (in number of tokens to overtake) and finishing with the one that is furthest. Consider the class-N token that is closest to the destination but not well positioned yet (if any). This token can move to the destination by overtaking its predecessors one after the other. Indeed, the token that precedes our class-N token has a class between 1 and N -1, so that our class-N token can overtake it in (at least) one of the two queues. By applying many circular shifts if necessary, we can reach the state where this overtaking can occur. Once this state is reached, our class-N token can then overtake its predecessor, therefore arriving one step closer to the destination. We reiterate this operation until our class-N token is well positioned. For example, consider the state of Figure 15a and assume that we want to move all tokens of class 2 between the two tokens of classes 1 and 3 that are closest to each other. One of the class-2 tokens is already in the correct position. Let us consider the next class-2 token, initially positioned between tokens of classes 3 and 4. We first apply circular shifts to reach the state depicted in Figure 15b. In this state, there is a nonzero probability that our class-2 token overtakes the class-3 token, which would bring our class-2 token directly in the correct position. Proof by induction We finally prove the stated irreducibility result by induction on the number N of classes. For N = 1, applying circular shifts is enough to show the irreducibility because all tokens are exchangeable. We now give the induction step. Let N > 1. Assume that the Markov process defined by the state of any tandem network with N -1 classes that satisfies the positive service rate and separability assumptions is irreducible. Now consider a tandem network with N classes that also satisfies these assumptions. We have shown that, starting from any feasible state, we can move class-N tokens at a position where they do not prevent other tokens from overtaking each other. In particular, to reach a state from another one, we can first focus on ordering the tokens of classes 1 and N -1, as if class-N tokens were absent. 
This is equivalent to ordering tokens in a tandem network with N - 1 classes that satisfies the positive service rate and separability assumptions. This reordering is feasible by the induction assumption. Once it is performed, we can move class-N tokens into a correct position, by applying the same type of transitions as in the previous paragraph.

Figure 3: An open Whittle network of N = 2 queues associated with the server pool of Figure 2.

Figure 4: An OI queue with N = 2 job classes associated with the server pool of Figure 2. The job of class 1 at the head of the queue is in service on servers 1 and 2. The third job, of class 2, is in service on server 3. Aggregating the state c yields the state x of the Whittle network of Figure 3.

(a) An open Whittle network with state-dependent arrival rates.

Figure 5: Alternative representations of a Whittle network associated with the server pool of Figure 1. At most ℓ_1 = ℓ_2 = 4 jobs can be assigned to each class.

Figure 7: A closed queueing system, consisting of an OI queue and a Whittle network, associated with the server pool of Figure 1. At most ℓ_1 = ℓ_2 = 4 jobs can be assigned to each class.

Figure 9: Performance of the dynamic load balancing in the pool of Figure 8. Average blocking probability (bottom plot) and resource occupancy (top plot).

Figure 10: Impact of the number of tokens on the average blocking probability under the dynamic load balancing in the pool of Figure 8.

Figure 12: Performance of the dynamic load balancing in the pool of Figure 11. Average blocking probability (bottom plot) and resource occupancy (top plot).

Figure 13: Blocking probability under the dynamic load balancing in the server pool of Figure 11, with either exponentially distributed job sizes (line plots) or hyperexponentially distributed sizes (marks). Each simulation point is the average of 100 independent runs, each built up of 10^6 jumps after a warm-up period of 10^6 jumps. The corresponding 95% confidence interval, not shown on the figure, does not exceed ±0.001 around the point.

Figure 15: Circular shift. Sequence of transitions to reach state (b) from state (a): all tokens complete service in the first queue; all tokens before that of class 3 complete service in the second queue; the first two tokens complete service in the first queue.

(a) Hasse diagram of the type sets. (K_1 = {1}, K_4 = {2}, K_2 = K_3 = {1, 2}) is a possible topological ordering.

(b) Hasse diagram of the server sets. (S_3 = {2}, S_4 = {3}, S_1 = S_2 = {1, 2}) is a possible topological ordering.

Figure 16: A possible ordering of the classes of Figure 14 is 1, 4, 3, 2.

Appendix: Proof of the irreducibility

We prove the irreducibility of the Markov process defined by the state (c, t) of a tandem network of two OI queues, as described in §3.3. Throughout the proof, we will simply refer to such a network as a tandem network, implicitly meaning that it is as described in §3.3.

Assumptions

We first recall and name the two main assumptions that we use in the proof.
• Positive service rate. For each i ∈ I, K_i ≠ ∅ and S_i ≠ ∅.
• Separability. For each pair {i, j} ⊂ I, either S_i ≠ S_j or K_i ≠ K_j (or both).

Result statement

The Markov process defined by the state of the tandem network is irreducible on the state space S = {(c, t) ∈ C² : |c| + |t| = ℓ} comprising all states with ℓ_i tokens of class i, for each i ∈ I.
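Before outlining the proof, the two assumptions can be restated programmatically; the sketch below checks them for the toy configuration of Figures 14 and 16. The dictionary encoding of the sets K_i and S_i is ours, read off the captions of Figure 16, and the code is an illustration rather than anything from the paper.

```python
# Programmatic restatement of the two assumptions (our sketch).
# K_sets[i] is the set of job types that can seize class-i tokens,
# S_sets[i] the set of servers that can process class-i jobs.

K_sets = {1: {1}, 2: {1, 2}, 3: {1, 2}, 4: {2}}
S_sets = {1: {1, 2}, 2: {1, 2}, 3: {2}, 4: {3}}

def positive_service_rate(K_sets, S_sets):
    # every class must be reachable by some type and served by some server
    return all(K_sets[i] and S_sets[i] for i in K_sets)

def separability(K_sets, S_sets):
    # every pair of classes differs through its type sets or its server sets
    classes = list(K_sets)
    return all(K_sets[i] != K_sets[j] or S_sets[i] != S_sets[j]
               for a, i in enumerate(classes) for j in classes[a + 1:])

assert positive_service_rate(K_sets, S_sets)
assert separability(K_sets, S_sets)   # classes 2 and 3 differ through their server sets
```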
Outline of the proof We provide a constructive proof that exhibits a series of transitions leading from any feasible state to any feasible state with a nonzero probability. We first describe two types of transitions and specify the states where they can occur with a nonzero probability. • Circular shift: service completion of a token at the head of a queue. This transition is always possible thanks to the positive service rate assumption. Consequently, states that are circular shifts of each other can communicate. We will therefore focus on ordering tokens relative to each other, keeping in mind that we can eventually apply circular shifts to move them in the correct queue. • Overtaking: service completion of a token that is in second position of a queue, before its predecessor completes service. Such a transition has the effect of swapping the order of these two tokens. By reindexing classes if necessary, we can work on the assumption that class-i tokens can overtake the tokens of classes 1 to i -1 in (at least) one of the two queues, for each i = 2, . . . , N . The proof of this statement relies on the separability assumption.
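The reindexing described in the last paragraph can also be sketched in code: classes are sorted so that no type set is a strict subset of a type set placed earlier, and remaining ties are broken with the server sets. The ranking trick below (counting strict supersets) is one possible way to obtain such a topological order; it is our illustration under the set encoding used above, not the authors' construction.

```python
# One way to reindex the classes as in the proof (our illustration).
# Counting strict supersets puts the smaller sets first, so a type set is
# never a strict subset of one placed earlier; equal type sets are then
# ordered through their server sets.

K_sets = {1: {1}, 2: {1, 2}, 3: {1, 2}, 4: {2}}     # as in Figure 16a
S_sets = {1: {1, 2}, 2: {1, 2}, 3: {2}, 4: {3}}     # as in Figure 16b

def reindex(K_sets, S_sets):
    def supersets(sets, i):
        return sum(1 for j in sets if sets[i] < sets[j])   # strict inclusion
    return sorted(K_sets, key=lambda i: (-supersets(K_sets, i),
                                         -supersets(S_sets, i)))

print(reindex(K_sets, S_sets))   # [1, 4, 3, 2], the ordering given in Figure 16
```

With this order, the tokens of a given class can overtake the tokens of every class placed before it in at least one of the two queues, which is exactly the property used in the induction.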
https://hal.science/cel-01761224/file/FITZGERALD.pdf
Paul Carmignani
F. S. Fitzgerald, TENDER IS THE NIGHT

The stay at Princeton reinforced his sense of being "déclassé": "I was one of the poorest boys in a rich boys' school". Life on the campus followed the same pattern as before: little academic work, a lot of extracurricular activities (including petting and necking!) and a steady output of librettos for musicals and various literary material including the 1st draft of This Side of Paradise, his 1st novel. F. suffered at Princeton one of the great blows of his life: his failure to become a member of the most prestigious clubs: "I was always trying to be one of them! [...] I wanted to belong to what every other bastard belonged to: the greatest club in history. And I was barred from that too" ("The Triangle Club"). If F. was foiled in his social aims, he was more successful in his cultural ambitions: he became acquainted with J. Peale Bishop, E. Wilson, "his intellectual conscience", and Father Fay, a man of taste and cultivation, a Catholic with a catholic mind, who exerted a great intellectual influence on F. (possible connection with Diver's priestlike attitude in Tender; I'll revert to that point) and read the great classics. He met Ginevra King, a beautiful and wealthy girl from Chicago; it was a romantic love-affair, but F. was turned down by her parents, who had better hopes for their daughter. Nevertheless, F. was to make her the ideal girl of his generation, to the point of having all her letters bound in a volume. In 1917, he left Princeton: "I retired not on my profits, but on my liabilities and crept home to St Paul to finish a novel". In the words of a critic, K. Eble: "The Princeton failure was one of the number of experiences which helped create the pattern to be found in F's fiction: success coming out of abysmal failure or failure following hard upon success". On leaving Princeton, he enlisted in the Army and was commissioned 2nd Lieutenant but never went overseas and never had any combat experience: 2nd juvenile regret, war heroism (connection with the novel: visit to battlefields under the guidance of Dick, who behaves as if he had been there, pp. 67-69, and the character of "Barban", the thunderbolt of war. See also p. 215: "Tommy B. was a ruler, Tommy was a hero"). The same year, he met Zelda Sayre, a Southern Belle, but F. came up against the same opposition as before; he did not have enough money to win the girl's parents' approval; so he went to New York in search of a job, for Zelda was not overanxious to "marry into a life of poverty and struggle". In 1919, F. worked in an advertising agency; he quit to rewrite the novel and staked all his hopes on its publication: "I have so many things dependent upon its success - including of course a girl - not that I expect it to make me a fortune but it will have a psychological effect on me and all my surroundings". An amusing detail: at the time of publication, telegrams to Zelda consisted largely of reports of book-sales. He married Zelda in 1920. F's early success (This Side of Paradise) gave him the things his romantic self most admired: fame, money and a glamourous girl.

THE GROWTH OF THE LEGEND

With the publication of This Side of Paradise (sold 40,000 copies in less than a year), F.
had become a hero to his generation, "a kind of king of our American youth"; Zelda was seen as "a barbarian princess from the South"); both made a strong impression on people and caused a stir wherever they went. The Fitzgeralds, as a critic put it, "were two kinds of brilliance unable to outdazzle the other; two innocents living by the infinite promise of American advertising, two restless temperaments craving both excitement and repose". (Obvious parallel between this description and the Diver couple). Z. played a prominent part in F's life. She admired her husband, but was also jealous of his literary success: "She almost certainly served to accentuate his tendency to waste himself in fruitless endeavours; she stimulated a certain fatuity of attitude in him and accentuated the split between a taste for popular success and the obligations imposed upon him by his literary talent. […] she speeded up the pace of his existence and the rhythm of his activity" (Perosa). The kind of life they led meant making money hand over fist so F. set to work, all the more earnestly as he was convinced that money in large quantities was the proper reward of virtue. He averaged $ 16,000 to 17,000 a year during his career as a writer but he was in constant debt and had to write himself out of his financial plight, which led to the composition of numberless short pieces of little or no value. Made 1 st trip to Europe in 1920; met J. Joyce in Paris; disappointment. Back to St Paul for the birth of Frances Scott F. in 1921. Next year was marked by a burst of creativity: published The Beautiful and Damned, Tales of the Jazz Age, The Vegetable, a play was a failure. 1923: made the acquaintance of Ring Lardner, the model for Abe North and the symbol for the man of genius dissipating his energy and wasting his talent. 1924: With the illusion of reducing expenses the family went to live in France: trips to St Raphaël, Rome, Capri and Paris. F. finished Gatsby but his serenity was marred by Z's restlessness and by her affair with a French aviator, Edouard Josanne. F. was deeply hurt: "I knew that something had happened that could never be repaired" (Obvious parallel with Tender). 1925: Summer in Paris described as "one of 1,000 parties and no work". Meets E. Hemingway: a model cf. "Beware of Ernest" when F. was influenced by his stylistic devices and also "I talk with the authority of failure, Ernest with the authority of success". F. began to be drunk for periods of a week or ten days: "Parties are a form of suicide. I love them, but the old Catholic in me secretly disapproves." Began Tender... 1926: 3 rd collection of stories All the Sad Young Men, title betrays different attitude to his former subject matter. 1927: Stay in Hollywood; gathered new material in preparation for the chapter of Rosemary Hoyt. Failure: his script was rejected. Left for Delaware. Meanwhile Z. was getting more and more restless and showing signs of hysteria (as a friend of theirs put it: "Z. was a case not a person"). Z. gets dancing obsession and wants to become a ballet dancer. 1928: New trip abroad to Paris for Z. to take dancing lessons 1929: 2 nd long stay; F. continues to drink heavily; creates disturbances and disappoints his friends. Was even arrested once. 1930. Trip to Montreux to have Z. examined: diagnosis of schizophrenia. F. remembered with special horror that "Christmas in Switzerland". Z. suffers from nervous eczema (like the woman painter in the lunatic asylum in Tender, see p. 201-2). While in hospital, Z. 
composed in six weeks Save me the Waltz, a thinly fictionalized account of the Fs' lives; F. revised the manuscript and made important changes when some allusions were too obvious; he changed the original name of the hero (Amory Blaine, cf. This Side...) to David Knight. Hence, a possible connection with the title of the novel he was at the time working on: Tender is the Knight. 1931: Death of F's father (see corresponding episode in Tender: F used in Tender an unpublished paper that he wrote on the death of his father). In September, F. returned permanently to the USA; to Hollywood for Metro-Goldwyn-Mayer. 1933: Tender is completed. Z's 3 rd collapse. Confined to a sanitarium and was to be in and out of similar institutions through the rest of her life. F. tried to commit suicide; his mental state contributed to making Z's recovery more difficult, but as Mizener puts it "at bottom Z's trouble was something that was happening to them both". 1934 April 12t h publication of Tender which had a limited success while F's private life was getting more & more pathetic and lonely: "the dream of the writer was wrecked and so was his dream of eternal love". 1935-37: Period of "Crack Up", increasing alcoholism and physical illnesses. F. having apparently exhausted all literary material turned to introspection and laid his heart bare; defined exactly his own "malaise" and analyzed his own decay. June 1937 signed contract with MGM at $ 1,000 a week. Met Sheilah Graham and this affair had the effect of putting a little order into his life and of enabling him to work on The Last Tycoon. So, "the last three years of his life were marked by a reawakening of his creative forces and by his last desperate struggle with the specter of decadence. He worked with enthusiasm at various film scripts, minimized his social life and succeeded in abstaining from alcohol" (Perosa). November 1940: 1 st heart attack; December 21 st : 2 nd and fatal one. Death. His books were proscribed and as he had not died a good Catholic, the Bishop refused to allow him to be buried in hallowed ground. 1948: March, 10 th , Z. dies in fire at Highland Sanitarium. Difficult to find an appropriate conclusion after going over such a dramatic and tragic life; most striking characteristic however, "a sharply divided life": F. was "a man of self-discipline at war with the man of self-indulgence" (Lehan), a situation made even more complex by the rôle Z. played in his life. She hindered his ambitions because, he said, "she wanted to work too much for her and not enough for my dream". "[...] I had spent most of my ressources, spiritual and material, on her, but I struggled on for 5 years till my health collapsed, and all I cared about was drink and forgetting" (which points to the very theme of "expenditure" explored in Tender). A final ironical touch, however, to this portrait: F's life testifies to the truth of L. Fiedler's opinion that "Among us, nothing succeeds like failure" since F. is now considered as one of the literary greats and sales of his books have averaged 500,000 copies a year since he died in 1941. SOCIAL AND HISTORICAL BACKGROUND F. often assumed the rôle of "a commenter on the American scene". Well, what was the American scene like during the period depicted by Tender? The novel spans the period known as "the Jazz Age" i.e. from the riots on May Day, 1919, to the crash of the stock market in 1929. That period was marked by the impact of W.W.I, a conflict which ushered in a new era in the history of the USA. 
Until the outbreak of the war, the USA was a provincial nation with a naive and somewhat parochial outlook; the conflict changed all that: la guerre apparaît soudain comme le viol d'une conscience collective exceptionnellement paisible, satisfaite et pour tout dire innocente... Pour tous les jeunes Américains, ce fut un appel individuel à l'expérience. Avril 1917 sonne le glas d'une certaine innocence typiquement américaine [...] l'Amérique rentrait dans l'Histoire. (M. Gresset) The typical representative of that post-war period was "the lost generation" (a phrase coined by G. Stein), a war-wounded generation hurt in its nerves and bereft of all illusions concerning society and human nature after witnessing the atrocities committed on European battlefields. It was also suspicious of the spurious values of a society that had turned into "Vanity Fair". The Jazz Age was also a time of great upheaval in social mores: dans la société déboussolée et faussement libérée des années vingt, [la sexualité] commença à devenir une manière de prurit endémique, une obsession nourrie d'impuissance et de stérilité, ce que D.H. Lawrence allait appeler "le sexe dans la tête". [...] Les rôles traditionnels tendent à s'invertir: les hommes se féminisent, les femmes se masculinisent et les jeunes filles sont des garçonnes. On voit poindre là une "crise des différences" déjà manifeste dans la littérature du XIX e siècle, et dont témoigne, notamment dans le roman français (Balzac, Stendhal, Gautier, Flaubert) la multiplication des personnages masculins dévirilisés et des personnages féminins virils, sans parler des cas-limites androgynes et castrats dans le bas-romantisme et la littérature de la Décadence" (A. Bleikasten). As a historian put it: "soft femininity was out and figures were becoming almost boyish" and said another "the quest of slenderness, the vogue of short skirts, the juvenile effect of the long waist. […] all were signs […] that the woman of this decade worshiped not merely youth but unripened youth". Two typical figures of the period were the "flapper" (la garçonne) and the "sheik" (the young lady-killer, Rudolf Valentino style). All these elements went into the composition of Tender which, as we shall see, was aptly defined as "a portrait of society's disintegration" in the troubled post-war years. LITERARY BACKGROUND A short examination of F's literary consciousness and evolution as a writer. A just evaluation, both aesthetic and critical of the F. case is all the more difficult as he never attempted any comprehensive definition of his art and technique; apart from a few observations in his private correspondence, we have precious little to go on. His beginnings as a writer were marked by the dominant influence of such realistic or even naturalistic authors as H. G. Wells and Th. Dreiser or Frank Norris who insisted on a life-like and documentary representation of reality. But pretty soon, F. shifted his allegiance and acknowledged a debt to H. James, and J Conrad. When James's The Art of the Novel was published in 1934 F. immediately read the book; in this collection of prefaces, James put forward his arguments in favour of the "selective" type of novel as opposed to the "discursive" type advocated by Wells. 
This controversy dominated the literary stage in the early XX th century; Wells wanted to "have all life within the scope of the novel" and maintained that nothing was "irrelevant" in a novel i.e the novel was to be well-documented; he insisted that characterization rather than action should be the center of the novel, and claimed the author had a right to be "intrusive" since the novel was a vehicle for problem discussion. James on the contrary proposed "selection" as the preferred alternative to "saturation", the extract from life as a substitute for the slice of life. For him the true test of the artist was his tact of omission; moreover the novelist must have a "centre of interest", a "controlling idea" or a "pointed intention". This is of course a gross oversimplification of what was at stake in this controversy, but it enables us to place F. within the wider context of a particular literary tradition. Thus F. took sides with the advocates of "the novel démeublé " (as Willa Cather put it) i.e. presenting a scene by suggestion rather than by enumeration: "Whatever is felt upon the page without being specifically named there-that, one might say, is created" (W. Cather). Another preface played a decisive rôle in F.'s evolution as a writer: Conrad's preface to The Nigger of the Narcissus: "I keep hinking of Conrad's Nigger of the Narcissus' preface and I believe that the important thing about a work of fiction is that the essential reaction shall be profound and enduring". (F. in a letter). From Conrad F. borrowed several principles and devices: first of all he subscribed to Conrad's definition of the function of the artist which is "by the power of the written word to make you hear, to make you feel [...] to make you see". F. also learnt from Conrad that "the purpose of a work of fiction is to appeal to the lingering after-effects in the reader's mind". He also adopted the Conradian motif of "the dying fall" i.e. "la fin en lent decrescendo", which, in contrast to the dramatic ending, is a gradual letting down or tapering off, and the device known as "chronological muddlement" i.e. arranging narratives not as a chronological sequence of events, but as a series of gradual discoveries made by the narrator. Instead of going straight forward, from beginning to end, in order to gradually disclose the true nature of a particular character, a novel should first get the character in "with a strong impression, and then work backwards and forwards over his past" (F. Madox Ford). This has farreaching implications: when a novel has a straight unbroken narrative order, it usually means that the author and his readers share a certain confidence about the nature of moral and material reality. Their narrative world is orderly: chaos is elsewhere and unthreatening. But when we get books in which the narrative order has broken up, melted and regrouped into scattered fragments, when we find gaps and leaps in the time sequence, then we have moved into the modern age, when the author and his public are doubtful about the nature of the moral and material worlds. Conrad's dislocated narrative method "working backwards and forwards" reflects a conviction that the world is more like a "damaged kaleidoscope" than an "orderly panorama". Since we are dealing with technique, let me mention another device, which F. borrowed from James this time and made good use of in Tender: "the hour-glass situation" i.e. 
a form of reversed symmetry: A turns into B, while B turns into A or a strong character gradually deteriorates while a weak character becomes stronger etc., a situation perfectly illustrated in Tender Is The Night, a novel of deterioration. F. combined all these borrowings into a unique technique which enabled him, in his own words, to aim at "truth, or rather the equivalent of the truth, the attempt at honesty of imagination". To attain this goal he developed "a hard, colorful prose style" not unlike Hemingway's. In an independent way, F. recreated certain stylistic features typical of Hemingway: the hardness and precision of diction, the taste for the essential and the concrete, the predominance of the dialogue, the directness of statement, a refinement of language disguised as simplicity. However, F. himself was aware of the strong influence H's style exerted over him: "remember to avoid Hemingway" or "Beware of Hemingway" are warnings one can read in his manuscripts. He was also threatened by a certain facility: "I honestly believed that with no effort on my part I was a sort of magician with words..." Hence, no doubt, the tremendous amount of second rate work he turned out to pay off his debts. To achieve his hard colorful prose style, F. used verbs whenever possible ("all fine prose is based on the verbs carrying the sentences. They make sentences move"); he strove for naturalness ("People don't begin all sentences with and, but, for and if, do they? They simply break a thought in midparagraph...") and often resorted to the "dramatic method" i.e. what the characters do tells us what they are; and what they are shows us what they can do (cf. James: "What is character but the determination of incident?" "What is incident but the illustration of character ?"): The dramatic method is the method of direct presentation, and aims to give the readers the sense of being present, here and now in the scene of action...Description is dipensed with by the physical stage setting. Exposition and characterization are both conveyed through the dialogue and action of the characters (J. W. Beach). F. laid great stress upon the writer's need of self-conscious craft: "The necessity of the artist in every generation has been to give his work permanence in every way by a safe shaping and a constant pruning, lest he be confused with the journalistic material that has attracted lesser men". F. was not like Th. Wolfe or W. Faulkner a "putter-inner" (Faulkner said "I am trying to say it all in one sentence, between one Cap and one period") i.e. never tried to pile words upon words in an attempt to say everything, on the contrary, he was a "leaver-outer", he worked on the principle of selection and tried to achieve some sort of "magic suggestiveness" which did not preclude him from proclaiming his faith in the ideal of a hard and robust type of artistic achievement as witness the quotation opening his last novel: "Tout passe. L'art robuste Seul a l'éternité". (Gautier, Émaux et camées). 2) AN INTRODUCTION TO TENDER IS THE NIGHT It took F. nine years, from 1925 to 1934 to compose his most ambitious work: he stated that he wanted to write "something new in form, idea, structure, the model of the age that Joyce and Stein are searching for, that Conrad did not find". As we have seen, it was a difficult time in F.'s life when both Zelda and himself were beginning to crack up: F. was confronted with moral, sentimental and financial difficulties. 
So, it is obvious that the composition of the novel was both a reply to Zelda's Save me the Waltz and a form of therapy, a writing cure. There were 18 stages of composition, 3 different versions and 3 different reading publics since Tender Is The Night was first published as a serial (in Scribner's Magazine from January to April 1934); it came out in book form on April 12th 1934 and there was a revised edition in 1951. Tender Is The Night sold 13,000 copies only in the first year of publication and F's morale dropped lower than ever. Needless to say, I am not going to embark upon an analysis of the different stages of composition and various versions of the novel; it would be a tedious and unrewarding task. Suffice it to say that the first version, entitled The Melarky Case, related the story of Francis Melarky, a technician from Hollywood, who has a love affair on the Riviera and eventually kills his mother in a fit of rage. This version bore several titles: Our Type, The World's Fair, or The Boy who Killed his Mother. F. put it aside and sketched another plan for a new draft, The Drunkard's Holiday or Doctor Diver's Holiday, that was closer to the novel as we know it today since it purported to "Show a man who is a natural idealist, a spoiled priest, giving in for various causes to the ideas of the haute Bourgeoisie, and in his rise to the top of the social world losing his idealism, his talent and turning to drink and dis-sipation...". Somewhere else we read that "The Drunkard's Holiday will be a novel of our time showing the breakup of a fine personality. Unlike The Beautiful and Damned the break-up will be caused not by flabbiness but really tragic forces such as the inner conflicts of the idealist and the compromises forced upon him by circumstances". Such were the immediate predecessors of Tender Is The Night which integrated many aspects and elements from the earlier versions, but the final version is but a distant cousin to the original one. The action depicted in the novel spans 10 years: from 1919, Dick's second stay in Zurich, to 1929 when he leaves for the States. The novel is a "triple-decker": the first book covers the summer of 1925; the first half of the second is a retrospection bringing us back to 1919 then to the years 1919-1925. The 2 nd half (starting from chapter XI) picks up the narrative thread where it had been broken in the first book, i.e. 1925 to describe the lives of the Divers from autumn to Xmas, then skips a year and a half to give a detailed account of a few weeks. The 3rd book just follows from there and takes place between the summer of 1928 and july 1929. Except for two brief passages where Nicole speaks in her own voice, it is a 3 rd -person account by an omniscient narrator relayed by a character-narrator who is used as a reflector; we get 3 different points of view: Rosemary's in Book I, Dick's in Book II, Nicole in Book III. Dick gradually becomes "a diminishing figure" disappearing from the novel as from Nicole's life. This, as you can see, was an application of the Conradian principle of "chronological muddlement" but such an arrangement of the narrative material had a drawback of which F. was painfully conscious: "its great fault is that the true beginning -the young psychiatrist in Switzerland -is tucked away in the middle of the book" (F.). 
Morever, the reader is often under the impression that Rosemary is the centre and focus of the story and it takes him almost half the book to realize the deceptiveness of such a beginning: the real protagonists are Dick and Nicole. To remedy such faults, F. proposed to reorganize the structure of the novel, which he did in 1939 (it was published in 1953). There is no question that the novel in this revised form is a more straightforward story but in the process it loses much of its charm and mystery; Dick's fate becomes too obvious, too predictable whereas the earlier version, in spite of being "a loose and baggy monster", started brilliantly with a strong impression and a sense of expectancy if not mystery that held the reader in suspense. But the debate over the merits or demerits of each version is still raging among critics, so we won't take sides. The subject of Tender Is The Night is a sort of transmuted biography which was always F's subject: it is the story of Dick's (and F.'s) emotional bankruptcy. Now I'd like to deal with four elements which, though external to the narrative proper, are to be reckoned with in any discussion of the novel: the title, the subtitle, the reference to Keats's poem and the dedication. F.'s novel is placed under the patronage of Keats as witness the title and the extract from "Ode to a Nightingale" which F. could never read through without tears in his eyes. It is however useless to seek a point by point parallelism between the structure of Tender Is The Night and that of the poem; the resemblance bears only on the mood and some of the motifs. "The Ode" is a dramatized contrasting of actuality and the world of imagination; it also evinces a desire for reason's utter dissolution, a longing for a state of eternality as opposed to man's painful awareness of his subjection to tem-porality. Thus seen against such background, the title bears a vague hint of dissolution and death, a foreboding of the protagonist's gradual sinking into darkness and oblivion. In the novel, there are also echoes of another Keatsian motif: that of "La Belle Dame Sans Merci". The subtitle, "A Romance", reminds us that American fiction is traditionally categorized into the novel proper and the romance. We owe N. Hawthorne this fundamental distinction; the main difference between those two forms is the way in which they view reality. The novel renders reality closely and in comprehensive detail; it attaches great importance to character, psychology, and strains after verisimilitude. Romance is free from the ordinary novelistic requirements of verisimilitude; it shows a tendency to plunge into the underside of consciousness and often expresses dark and complex truths unavailable to realism. In the Introduction to The House of the Seven Gables (1851), N. Hawthorne defined the field of action of romance as being the borderland of the human mind where the actual and the imaginary intermingle. The distinction is still valid and may account, as some critics have argued, notably R. Chase in The American Novel and its Tradition, for the original and characteristic form of the American novel which Chase calls "romance-novel" to highlight its hybrid nature ("Since the earliest days, the American novel, in its most original and characteristic form, has worked out its destiny and defined itself by incorporating an element of romance"). Of course, this is not the only meaning of the word "romance"; it also refers to "a medieval narrative [...] 
treating of heroic, fantastic, or supernatural events, often in the form of an allegory" (Random Dict.); the novel, as we shall see, can indeed be interpreted in this light (bear in mind the pun on Night and Knight). A third meaning is apposite: romance is the equivalent of "a love affair", with the traditional connotations of idealism and sentimentalism. It is useful to stress other charecteristics of "romanticism", for instance Th. Mann stated that: "Romanticism bears in its heart the germ of morbidity, as the rose bears the worm; its innermost character is seduction, seduction to death". One should also bear in mind the example of Gatsby whose sense of wonder, trust in life's boundless possibility and opportunity, and lastly, sense of yearning are hallmarks of the true romantic. Cf. a critic's opinion: If Romanticism is an artistic perspective which makes men more conscious of the terror and the beauty, the wonder of the possible forms of being [...] and, finally, if Romanticism is the endeavor [...] to achieve [...] the illusioned view of human life which is produced by an imaginative fusion of the familiar and the strange the known and the unknown, the real and the ideal, then F. Scott Fitzgerald is a Romantic". So, be careful not to overlook any of these possible meanings if you are called upon to discuss the nature of Tender Is The Night. Lastly, a word about the identity of the people to whom Tender Is The Night is dedicated. Gerald and Sarah Murphy were a rich American couple F. met in 1924. The Murphys made the Cap d'Antibes the holiday-resort of wealthy Americans; they were famous for their charm, their social skill, and parties. F. drew on G. Murphy to portray Dick Diver. So much for externals; from now on we'll come to grips with the narrative proper. 3) MEN AND WOMEN IN TENDER IS THE NIGHT F.'s novel anatomizes not only the break-up of a fine personality but also of various couples; it is a study of the relationship between men and women at a particular period in American history when both the times and people were "out of joint". The period was unique in that it witnessed a great switch-over in rôles as a consequence of the war and of America's coming of age. I have already alluded to "la crise des différences" in the historical and social background to the novel and by way of illustration I'd like to point out that in F's world, the distinction between sexes is always fluid and shifting; the boy Francis in The World's Fair (an earlier version of Tender) becomes with no difficulty at all the girl Rosemary in Tender and with the exception of Tommy Barban, all male characters in the novel evince obvious feminine traits; their virility is called into question by feminine or homosexual connotations: Dick appears "clad in transparent black lace drawers" (30), which is clearly described as "a pansy's trick" by one of the guests. Nicole bluntly asks him if he is a sissy (cf. p. 136 "Are you a sissy?"). Luis Campion is sometimes hard put to it to "restrain his most blatant effeminacy...and motherliness" (43). Mr. Dumphry is also "an effeminate young man" (16). Women on the contrary display certain male connotations: Nicole is decribed as a "hard woman" (29) with a "harsh voice" (25). Her sister Baby Warren, despite her nickname, is also "hard", with "something wooden and onanistic about her" (168); she is likened to an "Amazon" (195) and said to resemble her "grandfather" (193). Oddly enough, Rosemary herself, is said to be economically at least "a boy, not a girl" (50). 
Thus, there is an obvious reversal of traditional roles or at least attributes, and Tender may be interpreted as a new version or re-enactment of the war between sexes (cf. motif of "La Belle Dame Sans Merci"). Let's review the forces in presence whose battle array can be represented as follows: two trios (two males vying for the same female) with Dick in between. DICK Belongs to the tradition of romantic characters such as Jay Gatsby: "an old romantic like me" (68); entertaining "the illusions of eternal strength and health, and of the essential goodness of people" (132). He's got "charm" and "the power of arousing a fascinated and uncritical love" (36) and like Gatsby makes resolutions: he wants to become "a good psychologist [...] maybe to be the greatest one that ever lived" (147) and also likes "showing off". At the same time, Dick's personality is divided and reveals contradictory facets: there is in him a "layer of hardness [...] of self-control and of self-discipline" (28) which is to be contrasted with his self-indulgence. Dick is the "organizer of private gaiety" (87) yet there is in him a streak of "asceticism" (cf. p. 221) prompting him to take pattern by his father and to cultivate the old virtues: "good instincts, honor, courtesy, and courage" (223). Cf. also p. 149 "he used to think that he wanted to be good, he wanted to be kind, he wanted to be brave and wise, but it was all pretty difficult. He wanted to be loved, too, if he could fit it in". The most prepossessing aspect of Dick's personality is a certain form of generosity; he spares no effort and gives away his spiritual and material riches to make people happy and complete: "They were waiting for him and incomplete with-out him. He was still the incalculable element..." (166) almost like a cipher which has no value in itself but increases that of the figure it is added to. Dick sometimes assumes the rôle and function of a "buttress" (cf. p. 265: "it was as if he was condemned to carry with him the egos of certain people, early met and early loved"), or even of a "Saviour" (p. 325: "he was the last hope of a decaying clan"). His relationship with Nicole is based on the same principle: his main function is to serve as a prop to keep her from falling to pieces ("he had stitched her together", p. 153). Dick's downfall will be brought about by two factors: his confusing the rôle of a psychiatrist with that of a husband and the lure of money ("Throw us together! Sweet propinquity and the Warren money" 173); I'll revert to the question of money, but for the time being, I'd like to stress its evil nature and the double penalty Dick incurs for yielding to his desire for money: castration and corruption (p. 220: "He had lost himself [...] Watching his father's [...] his arsenal to be locked up in the Warren safety-deposit vaults"). But the most fateful consequence is that, Nicole gradually depriving him of his vital energy, Dick loses his creative power, enthusiasm and even his soul: p. 187 "Naturally, Nicole wanting to own him [...] goods and money"; p. 227 "a lesion of enthusiasm" and lastly p. 242 "a distinct lesion of his own vitality". At this stage it is useful to introduce the motif of the "hour glass" situation already mentioned in the "Introduction": as Dick grows weaker and weaker, Nicole gets "stronger every day…[her] illness follows the law of diminishing returns" (p. 288). Dick's process of deterioration parallels the emergence of Nicole's "new self" (254). 
Dick takes to drink, becomes less and less presentable and efficient as a psychiatrist; morever he's no longer able to perform his usual physical stunts (cf. 304-6). There's a complete reversal: "I'm not much like myself any more" (280) and "But you used to want to create thingsnow you seem to want to smash them up" (287). He eventually loses and this is the final blow his moral superiority over his associates ("They now possessed a moral superiority over him for as long as he proved of any use", 256). Dick, being aware of his degradation, tries to bring retribution on himself by accusing himself of raping a young girl (256). His deterioration assumes spiritual dimensions since it imperils not only his physical being but his soul ("I'm trying to save myself" "From my contamination?" 323). Dick proves to be a tragic character cf. reference to Ophelia (325) whose main defect is incompleteness ("the price of his intactness was incomple teness", 131) and like tragic heroes he has his fall. The dispenser of romance and happiness eventually conjures up the image of the "Black Death" ("I don't seem to bring people happiness any more", 239) and of the "deposed ruler" (301) whose kingdom has been laid waste by some great Evil (Tender calls to mind the motifs of The Waste Land and the Fisher King). There are two important stages in Dick's progress: his meeting with Rosemary and his father's death. Meeting Rosemary arouses in dick a characteristically "paternal interest" (38) and "attitude" (75); Dick sees Rosemary as a child (77) and she, in turn, unconsciously considers him as a surrogate father figure. It is interesting to note that with, her youthful qualities, Rosemary fulfills the same function towards Dick as Dick does to Nicole i.e. she is a source of strength, renewal and rejuvenation. Dick is attracted by Rosemary's vitality (47) and he uses her to restore his own diminishing vigour. But, at the same time, his affair with Rosemary is a "time of self-indulgence" (233), a lapse from virtue and morals, a turning-point: "He knew that what he was doing..." (103). Dick is unable to resist temptation the spirit is willing but the flesh is weak, as the saying goes, so this is the 1 st step (or the 2 nd if we take into account his marrying Nicole) to hell whose way, as you all know, is paved with good intentions. However the promises of love are blighted by the revelation of Rosemary's promiscuity cf. the episode on the train and the leitmotif "Do you mind if I pull down the curtain?" (113). Dick eventually realizes that the affair with Rosemary is just a passing fancy: "he was not in love with her nor she with him" ( 236) and ( 240) "Rome was the end of his dream of Rosemary". His father's death deprives Dick of a moral guide, of one the props of his existence ("how will it affect me now that this earliest and strongest of protections is gone?", p. 222 "he referred judgments to what his father would probably have thought and done"). Altough a minor figure at least in terms of space devoted to his delineation, Dick's father plays an important rôle as a representative of an old-world aristocracy with a high sense of honour, a belief in public service and maintenance of domestic decorum. He represents the Southern Cavalier (gentleman, a descendant of the English squire) as opposed to the Yankee, the product of a society absorbed in money-making and pleasure-seeking. 
He is a sort of relic from the past, a survival from a phased-out order of things, hence the slightly anachronistic observation to be found on p. 181: "From his father Dick had learned the somewhat conscious good manners of the young Southerner coming north after the Civil War". Dick, whose name, ironically enough, means "powerful and hard", will prove unable, as heir to a genteel tradition, to live up to the values and standards of the Southern Gentleman. He is hard on the outside and soft inside and reminds one of F's desire "to get hard. I'm sick of the flabby semi-intellectual softness in which I floundered with my generation" (Mizener). Hence also, the note of disappointment struck by F. when he stated that "My generation of radicals and breakersdown never found anything to take the place of the old virtues of work and courage and the old graces of courtesy and politeness" (Lehan) i.e. the very same virtues advocated by Dick's father. THE WARREN SISTERS Both sisters are the obverse and reverse of the same medal. Nicole is the seducer, as witness her portrait on p. 25; the reference to sculpture (Rodin) and architecture reminds one of an American writer's description of the American leisure woman as a "magnificently shining edifice" (P. Rosenfeld, Port of New York, 1961). F. uses the same image since N. is compared to "a beautiful shell" (134) "a fine slim edifice" (312). Nicole is endowed with a complex and deceptive personality combining both ingenuousness and the innate knowledge of the "mechanics of love and cohabitation" (Wild Palms, 41) that W. Faulkner attributes to his female characters. Nicole also assumes almost allegorical, symbolical dimensions in that she "represents the exact furthermost evolution of a class" (30) and stands for the American woman, some sort of archetype, if not for America itself. Her own family history is an epitome of the creation of the New Republic since it combines the most characteristic types evolved by Europe and the New World: "Nicole was [...] the House of Lippe" (63). This emblematic quality is further emphasized by the fact that Nicole is defined as "the product of much ingenuity and toil" (65), so she brings Dick "the essence of a Continent" (152). She is quite in keeping with the new spirit of the times; she partakes both of "the Virgin and of the Dynamo" (H. Adams) and is described as an industrial object cf. 301: "Nicole had been designed for change [...] its original self". Not unlike Dick, Nicole has a double personality, which is to be expected from a schizophrenic; so there is a "cleavage between Nicole sick and Nicole well" (185). She is also incomplete and dependent upon other people to preserve a precarious mental balance: "she sought in them the vitality..." (198). The relation ship between Nicole and Dick is not a give-and-take affair but a one-way process; Nicole literally depletes, preys upon him, saps his strength: "she had thought of him as an inexhaustible energy, incapable of fatigue" (323) and she eventually absorbs him: "somehow Dick and nicole had become one and equal..." (209). Cf. also the fusion of the two names in "Dicole" (116). Thus there is a double transference both psychological and vital (cf. the image of breast feeding p. 300). Actually, Nicole "cherishes her illness as an instrument of power" (259) and uses her money in much the same way: ("owning Dick who did not want to be owned" 198). However, after "playing planet to Dick's sun" (310) for several years, N. 
gradually comes to "feel almost complete" (311) and comes to a realization that "either you think...sterilize you" (311). From then on, begins what might be called N's bid for independence ("cutting the cord", 324); after ruining Dick, she "takes possession of Tommy Barban" (293), unless it is the other way round; however that may be the experience is akin to a rebirth: "You're all new like a baby" (317). The "coy maiden" gives way to the ruthless huntress: "no longer was she the huntress of corralled game" (322). By the way, it is worth noting that "Nicole" etymologically means "Victory" and that "Warren" means "game park". Thus Nicole reverts to type: "And being well perhaps [...] so there we are" (314) and "better a sane crook than a mad puritan" (315). She becomes exactly like her sister Baby Warren, both "formidable and vulnerable" (166), who had anticipated her in that transformation: "Baby suddenly became her grandfather, cool and experimental" (193) i.e. Sid Warren, "the horse-trader" (159). She is described as "a tall, restless virgin" (167), with "something wooden and onanistic about her" (168). She is the high priestess of Money, the Bitch-Goddess, and as such she symbolizes sterility. So much for the portrait of those two society women. ROSEMARY HOIT Cf. the connotations of the name: Hoit→"Hoity-toity": "riotous", "frolicsome". A hoyden (a boisterous girl)? One preliminary observation: Rosemary plays an important rôle as a catalytic agent and stands between the two groups of people i.e. the hard, practical people and the dissipated, run-down romantics. She is a complex character in that she combines several contradictory facets the child woman (ash blonde, childlike etc p. 12) "embodying all the immaturity of the race" (80) and its wildest dreams of success, everlasting youth and charm (she is surrounded by a halo of glamour and the magic of the pictures; note also that she deals with reality as a controlled set and life as a production) and the woman of the world. In spite of her idealism, grafted onto "an Irish, romantic and illogical" nature (181), she is said to be "hard" (21) and to have been "brought up on the idea of work" (49), she embodies certain "virtues" traditionally attached to a Puritan ethos. Her infatuation with Dick is just a case of puppy love, a stepping stone to numerous amorous adventures. In spite of the rôle she plays in the opening chapter of the novel, she turns out to be no more than a "catalytic agent" (63) in Dick and Nicole's evolution but she also fulfills an important symbolical function. This is precisely what I propose to do now, i.e. take a fresh look, from a symbolical standpoint, at the question of men and women in order to bring to light less obvious yet fundamental aspects. THE FATHER DAUGHTER RELATIONSHIP & INCEST The father-daughter relationship is of paramount importance in Tender and most encounters between men and women tend to function on that pattern so much so that I am tempted to subscribe to Callahan's opinion that Tender describes a new version of the American Eden where, the male being ousted, the female is left in blessed communion or tête à tête with the Father i.e. God. A case in point being of course Nicole & Devereux Warren, who was in his words "father, mother both to her" (141) and candidly confesses that "They were just like lovers-and then all at once we were lovers..." (144). 
Nicole will, to a certain extent, continue the same type of relationship with Dick, thus fulfilling the female fantasy of having both father and husband in the same person; Nicole is a kind of orphan adopted by Dick. Rosemary seeks the very situation responsible for Nicole's schizophrenia: uniting protector and violator in the same man. Brady for instance turns her down by "refusing the fatherly office" (41) that Dick eventually assumes but there is such a difference in their ages as to render their embrace a kind of reenactment of the incestuous affair between Nicole and her father. It is also worth noting that Rosemary owes her celebrity to a film "Daddy's Girl" which is obviously a euphemistic fantasy of the Warren incest, a fact which does escape Dick's attention: "Rosemary and her parent...the vicious sentimentality" (80). Such interpretation is borne out by the reference to the "Arbuckle case" (124) ('Fatty', grown-up fat boy of American silent cinema whose career was ruined after his involvement in a 1921 scandal in which a girl died. Though he never again appeared before a camera, he directed a few films under the name 'Will B. Good'). To this must be added Dick's own involvement in a similar scandal since he is accused of seducing one of his patients' daughter (205). THE WAR BETWEEN THE SEXES The relationships between men and women are also described in terms of war between the sexes, cf. what one of the patients in the asylum says: "I'm sharing the fate of the women of my time who challenged men to battle" (203). Thus, just as war is seen through the language of love, so love is told through the metaphor of war. In the description of the battlefield (67-70), we find echoes of D. H. Lawrence's interpretation of WWI as the fulfilment in history of the death urge of men whose marriages were sexually desolate. The diversion of erotic energy to war is further illustrated by Mrs Speers (a very apt name: "Spear" refers to a weapon for thrusting) who often applies military metaphors to sex: "Wound yourself or him.." (50). Notice the recurrence of the word "spear" to describe Dick's emasculation and also the use of the word "arsenal": "the spear had been blunted" ( 220) "Yet he had been swallowed up like a gigolo, and somehow permitted his arsenal to be locked up in the Warren safety-deposit vault". Last but not least, there is a kind of cannibalism or vampirism going on in the novel. Women, metaphorically feed or batten on their mates and the novel teems with images turning women into predatory females, cf. "dissection" (180), "suckling" (300) or even "spooks" (300) i.e. terms having to do with the transference of vital energy from one person to another. On a symbolical and unconscious level, woman and America are seen as vampires; L. Fiedler, who maintains that many female characters in American fiction can be divided into two categories "the fair maiden" (the pure, gentle virgin) and "the dark lady" (the dangerous seducer, the embodiment of the sexuality denied the snow maiden), also stated that very often, in the same fiction, "the hero finds in his bed not the white bride but the dark destroyer" (313): characteristically enough, Nicole's hair, once fair, darkens (34). In the American version of Eden, Eve is always vying with Lilith (Adam's wife before Eve was created, symbolizing the seducer, "l'instigatrice des amours illégitimes, la perturbatrice du lit conjugal"). 
Moreover, it is also interesting to note that if (the) man can create (the) woman, she seems to absorb his strength and very being; there is in Tender an echo of Schopenhauer's philosophy (cf. Faulkner's Wild Palms) which claimed that "the masculine will to creativeness is absorbed in the feminine will to reproduction"; cf. the description on p. 253: "The American Woman, aroused, stood over him; the clean...that had made a nursery out of a continent". However, in Tender it is absorbed in woman's will to assert herself. Consequently, in F.'s fiction, women are seen as embodiments either of innocence (woman and the Continent before discovery, cf. Gatsby) or of corruption (woman and the Continent spoiled by male exploitation). As in Faulkner's fiction, it is actually the male protagonist and not woman who symbolizes innocence and the American dream, although the hero is too ineffectual to carry out its promise. 4) MOTIFS, IMAGES, AND SYMBOLS We are far from having exhausted the symbolical connotations attaching to the various characters appearing in Tender; a work of art aims at plurality of meanings and several levels of significance are at play in any novel. To give you an example of such complexity, of how things work, I'd like to call your attention to the fact that women are consistently described in terms of flowers; there runs throughout the narrative a symbolic vein pertaining to flora. As a starting-point, I'll refer to an observation F. recorded in The Lost Decade: "girls had become gossamer again, perambulatory flora..." (91), an image perfectly illustrated by Tender. FLORA & FAUNA Rosemary Hoit is a case in point; her first name means "romarin" and derives from the Latin "ros marinus", i.e. "sea-dew". Now, "dew", as you all know, or at least are about to know, symbolizes "spiritual refreshment; benediction; blessing. Sweet dew is peace and prosperity. Dew can also represent change, illusion and evanescence. It is also related to the moon, nightfall and sleep". Consequently, the various references to "dew" that may have escaped the notice of the unwary reader assume, with the benefit of hindsight, deeper significance than one at first realizes: "she was almost eighteen, nearly complete, but the dew was still on her" (12); "Rosemary, [...] dewy with belief" (43) etc. All these images stress the youth and alleged innocence of Rosemary. Moreover, her growth to adulthood is seen as the flowering of talent: "blossomed out at 16" (49); "looks like something blooming" (30); "bright bouquet" (89); "her body calculated to a millimeter to suggest a bud yet guarantee a flower" (117). However, Rosemary, "the white carnation" (75), evolves into different types of flower or plant such as the "blinding belladonna [...] the mandragora that imposes harmony" (181). The connotations are totally different and point to a lapse from innocence or virtue on the part of Rosemary: mandrake, "the plant of enchantment", is the emblem of Circe (a goddess and sorceress who changed Odysseus's men into pigs). Incidentally, Rosemary is also compared to animals; she is seen as "a young mustang" (181), a "young horse" (226) and Nicole in her turn is likened to "a colt" (157); Dick with his crop and jockey cap rides them both before being thrown (cf. also the reference to "other women with flower-like mouths grooved for bits", p. 166). Nicole too is associated with "gardens" (cf. description on p. 34 and on p. 
172 "waiting for you in the garden-holding all myself in my arms like a basket of flowers") and such flowers as "camellia" (34) and "lilac"; note also that she is in charge of the decoration of two wards called "the Eglantine" and "the Beeches" (201). Nicole is also said to "be blooming away" (48) and the more selfreliant and self-confident she grows the more numerous are the references to flora: "Nicole flowering" (220); "She reasoned as gaily as a flower" (297); "Her ego began blooming like a great rich rose" (310). After her preparations to greet her lover, Tommy Barban, she is "the trimmest of gardens" (312). As far as animal imagery is concerned, N.'s tragedy turns her into "the young bird with wings crushed" (143); while she is under the sway of Dick's personality, she's like "an obedient retriever" (35). All these references to flora (cf. Violet McKisko) and fauna with their attractive and repulsive connotations culminate, as regards female characters, in the description of the trio of women on p. 84: "They were all tall and slender...cobra's hoods"; cf. a few lines further down the phrase "cobra-women" or the reference to "Amazons" (195). Thus, women in the the world depicted by Tender are, to use Keats's words, seen as "poison flowers", evil flowers, one has to guard against. Far from being gratuitous, such a bias can be accounted for by the rôle F. ascribed to women in his own life and in American society at large (cf. previous lecture). Two further observations as far as Nicole is concerned: they have to do with two classical refernces to Diana (name of her villa p. 38) and to Pallas Athene (177). They merely highlight her twofold nature since Diana/Artemis is a goddess associated with wooded places, women and childbirth and a virgin huntress, associated with uncultivated places and wild animals. As for Pallas Athene, she was the goddess of war, the patron of the arts and crafts. Lastly, to restore the balance in this presentation, it is necessary to point out that animal similes are also applied to men (McKisko is a rabbit, 59; Dick a cat, 136;, Tommy a watch-dog, 53 etc.,) even if they don't fall into such a coherent pattern as in the case of women. RELIGIOUS DIMENSIONS F. called Tender "my testament of faith" and the novel abounds in images having a clearly religious flavour quite in keeping with F.'s avowal that "I guess I am too much a moralist at heart and real-ly want to preach at people in some acceptable form rather than to entertain them". As critic A. Mizener stated: all his best work is a product of the tension between these two sides of his nature, of his ability to hold in balance "the impulses to achieve and to enjoy, to be prodigal and open-hearted, and yet ambitious and wise, to be strong and self-controlled, yet to miss nothing--to do and to symbolize". Not until 1936 did he lose faith in his ability to realize in his personal life what he called "the old dream of being an entire man in the Goethe-Byron-Shaw tradition, with an opulent American touch, a sort of combination of J.P. Morgan, Topham Beauclerk and St. Francis of Assisi" (64). Tender is to a certain extent an allegory of sin and penitence, fall and retribution with Everyman Diver journeying through a multitude of temptations and yielding to all of them: money, liquor, anarchy, self-betrayal and sex. In the General Plan, F. 
calls Dick "a spoiled priest" and the hero of Tender, "the son of a clergyman now retired" (175), can indeed be seen as the high priest of a group of devotees of a new religion worshipping leisure, entertainment and money, the Bitch-Goddess, eventually deposed by his followers. This religious motif is introduced in the opening pages with various images which, on second reading, endow the setting or the characters with new significance: see, for instance "the hotel and its bright tan prayer rug of a beach" (11); the woman with "a tiara" ( 14) and the general atmosphere of a community upon which Rosemary dare not intrude. She looks like a novice eager to be admitted into the Sacred College. Dick holds the group together; in a true spirit of ecumenicism and "as a final apostolic gesture" (36) he invites Mrs Abrams to one of his ritualistic parties. His relationship with Rosemary conjures up the notions of "adoration" and "conversion", see for instance the description on p. 48 "She was stricken [...] chasuble [...] fall to her knees". Nicole too is said "to bring everything to his feet, gifts of sacrificial ambrosia, of worshipping myrtle". Note that Nicole herself who has "the face of a saint" and looks like "a Viking madonna" (43) is also idolized by Tommy Barban whose vindication of the Divers' honour prompts Mrs McKisko to ask: "Are they so sacred?" (53). Dick's process of deterioration is also punctuated with religious references (cf. p. 104 "in sackcloth and ashes"; "let them pray for him", 281) down to the two final scenes when Dick is confronted, "like a priest in a confessional" (326), with the spectacle of depravity, lust and corruption (Mary North and Lady Caroline as embodiments of the Scarlet Woman) and when he makes a sign of (pronounces the) benediction before turning his back on his former associates:(337) "He raised his right hand and with a papal cross he blessed the beach from the high terrace". With this highly symbolical gesture, Dick assumes the rôle of a scapegoat, taking upon himself the sins of the community before being sent out into the wilderness. Now is the time to call attention to Dick's double personality as "spoiled priest" and "man about town"; Dick's yearning for essential virtues is at odds with the kind of flamboyant life he's living, hence the numerous references to "asceticism" ("the old asceticism triumphed", p. 221; "living rather ascetically", p. 187; "the boundaries of asceticism", p. 148) and a "hermit's life", p. 234. There is an obvious kinship between H. Wilbourne, the hero of Faulkner's Wild Palms and Dick Diver; both are would-be ascetics, whose naiveté or innocence are shattered by experience i.e. the encounter with woman and society at large. Dick, after assuming the rôle of a missionary or an apostle and "wast[ing] 8 years teaching the rich the ABC's of human decency" (325), is sent into exile, like "a deposed ruler" (301), "his beach perverted now to the tastes of the tasteless" (301). All the values he stood for are crushed by contact with reality; the forces both external and internal against which the hero conducted a struggle eventually destroy him. Thus, the romantic idealist who wanted to be brave and kind and loved, and thought he "was the last hope of a decaying clan" (325), is utterly defeated and even if his fate illustrates the "futility of effort and the necessity of struggle" one may doubt it is a "splendid failure". 
PLACES The characters of Tender, apparently bereft of any spirit of place, are constantly on the move, and the reader is vicariously taken to numerous foreign countries: France, Italy, Switzerland, the French Riviera and the USA. Some of them even seem to overlap as their distinctive features are not always clearly defined and the characters tend to move in similar circles. Places in Tender, besides providing a setting for the events depicted in the novel, also convey symbolic oppositions. Europe is a sort of vantage point from which America is, to use the biblical phrase, "weighed in the scales and found wanting". Broadly speaking, the novel is the locale of a confrontation between East and West and in Dick Diver, F. questioned the adequacy of postwar America to sustain its heroic past; as I have already pointed out the virtues of the Old South as embodied by Dick's father were discarded in favour of the wealth and power associated with the North. Hence, F.'s vision of Dick as "a sort of superhuman, an approximation of the hero seen in over-civilized terms". Dick is too refined for the world he lives in and he is torn between civilization and decadence; he is heir to a vanishing world whereas Nicole is a harbinger/herald of a brave new world; hence the characteristic pattern of decay and growth on which the novel is based. The protagonist, being confronted with the tragedy of a nation grown decadent without achieving maturity (as a psychiatrist he has to look after culture's casualties cf. what R. Ellison said of the USA: "perhaps to be sane in such society is the best proof of insanity") will move to milder climes i.e. the Riviera. This reverse migration from the New to the Old World represents a profound displacement of the American dream; Dick, at least in the first half of the novel, yields to the lure of the East and responds to "the pastoral quality of the summer Riviera" (197) (note by the way that American literature is particularly hospitable to pastoralism i.e. the theme of withdrawal from society into an idealized landscape, an oasis of peace and harmony in the bosom of Nature). Thus the Riviera, Mediterranean France, are seen as "a psychological Eden" in which F. and his heroes take refuge. The Riviera is a middle ground between East and West, an alternative milieu (it is a feature of "romances" to be located at a distance of time and place from their writer's world) where Dick tries to make a world fit for quality people to live in and above all for Nicole (see, for instance the way people are transfigured during one of Dick's parties; p. 42 "and now they were only their best selves and the Divers' guests"). So, L. Fiedler is perfectly right to define Tender as "an Eastern, a drama in which back-trailers reverse their westward drive to seek in the world which their ancestors abandoned the dream of riches and glory that has somehow evaded them". Yet Europe itself is not immune from decay; the Italian landscape exudes "a sweat of exhausted cultures taint[ing] the morning air" (244); the English "bearing aloft the pennon of decadence , last ensign of the fading empire" (291) "are doing a dance of death (292); and Switzerland is seen as the dumping-ground of the invalids and perverts of all nations: cf. description of the suite on p. 268). Actually, it is the whole of Western civilization that bears the stigmas of a gradual process of degeneracy; F. 
was deeply influenced by the historical theories of Oswald Spengler, a German philosopher, who in The Decline of the West predicted the triumph of money over aristocracy, the emergence of new Caesars (totalitarian states) and finally the rise of the "colored" races (by which Spengler meant not only the Negroes and Chinese but also the Russians!) that would use the technology of the West to destroy its inventors. F. incorporated some of Spengler's theories into Tender, all the more easily as they tallied with his racial prejudices (in a 1921 letter to E. Wilson, F. wrote such things as "the negroid streak creeps northward to defile the Nordic race", and "already the Italians have the souls of blackamoors", etc...). Tender prophesies not only the decline of the West but also the triumph of a barbaric race of a darker complexion. This enables one to see in a totally different light the affair between Tommy Barban and Nicole Warren. Tommy Barban, whose last name is akin to "barb(ari)an", is said to be "less civilized" (28), "the end-product of an archaic world" (45); he is "a soldier" (45), "a ruler, a hero" (215), and the epithet "dark" always crops up in his physical portrait: "Tommy [...] dark, scarred and handsome [...] an earnest Satan" (316); "his figure was darker and stronger than Dick's, with high lights along the rope-twist of muscle" (Ibid.); "his handsome face was so dark as to have lost the pleasantness of deep tan, without attaining the blue beauty of negroes--it was just worn leather" (289). Tommy, "the emerging fascist", as a critic put it, literally preys upon Nicole, the representative of the waning aristocracy; he usurps the place of Dick, the dreamer, the idealist, the "over-civilized" anti-hero, and F. suggests that Tommy's triumph is one of East over West when he tells us that Nicole "symbolically [...] lay across his saddle-bow as surely as if he had wolfed her away from Damascus and they had come out upon the Mongolian plain [...] she welcomed the anarchy of her lover" (320). The echo of hoof-beats conjures up the image of barbaric hordes sweeping across Western countries: "the industrial Warrens in coalition with the militarist Barban form a cartel eager for spoils, with all the world their prey" (Callahan). What are the grounds for such an indictment of Western civilization in general and of American culture and society in particular? MONEY America seems to have rejected certain fundamental values, such as "pioneer simplicity" (279) and "the simpler virtues", to adopt an increasingly materialistic outlook. After the "first burst of luxury manufacturing" (27) that followed WWI, society became a "big bazaar" (30), a "Vanity Fair"; this adherence to a materialistic credo is illustrated by the wealth of objects and commodities of every description that such characters as Nicole or Baby Warren are surrounded with; see, for instance, the description of Nicole's trip list on p. 278. This brings us to the rôle of money, whose importance will be underlined by two quotations from F., who admitted: "I have never been able to forgive the rich for being rich and it has colored my entire life and works", and also declared that they roused in him "not the conviction of a revolutionist but the smouldering hatred of a peasant". Money is indeed a key metaphor in F.'s fiction where we find the same denunciation of "the cheap money with which the world was now glutted and cluttered" (The Wild Palms, p. 210). 
Note also that both novels resort to the same monetary image to refer to human feelings; Faulkner speaks of "emotional currency" and Fitzgerald of "emotional bankruptcy". People have become devotees of a new cult whose divinity is Mammon (a personification of wealth). Modern society dehumanizes man by forcing him to cultivate false values and by encouraging atrophy of essential human virtues; money has taken their place and, as F. once said, "American men are incomplete without money" ("The Swimmers"). Thus love and vitality are expressed in terms of money; just as the rich throw money down the drain so Dick Diver wastes his talents and feelings on unworthy people. The motif of financial waste culminating in the 1929 crisis runs parallel to the motif of intellectual and emotional waste i.e. Dick's emotional bankruptcy ("la banqueroute du coeur"). At this stage it is necessary to stress the obvious similarity between the author and his character; F. also wanted to keep his emotional capital intact and was beset by fears of bad investment. Mizener points out that F. regarded "vitality as if it were a fixed sum, like money in the bank. Against this account you drew until, piece by piece, the sum was spent and you found yourself emotionally bankrupt". In The Crack-Up, F. says that he "had been mortgaging himself physically and spiritually up to the hilt". So Dick uses up the emotional energy which was the source of his personal discipline and of his power to feed other people. "I thought of him," said F., "as an 'homme épuisé', not only an 'homme manqué' ". This "lesion of vitality" turns Dick into a "hollow man" and here again F. transposed his personal experience "the question became one of finding why and where I had changed, where was the leak through which, unknown to myself, my enthusiasm and my vitality had been steadily and prematurely trickling away" ("Pasting It Together"). Cf. also letter to his daughter: "I don't know of anyone who has used up so much personal experience as I have at 27". So both Dick and Fitzgerald were victims of an extravagant expenditure of vitality, talents and emotions; the former wasted it upon "dull" people (336) and the latter upon secondrate literary productions. F. once declared "I have been a mediocre caretaker of most things left in my hands, even of my talent", so were both Dick Diver and America. PERVERSION AND VIOLENCE Although F.'s work is "innocent of mere sex", sexual perversions bulk large in Tender and they are just further proof of the fact that both society and the times are out of joint. There are so many references to incest, rape, homosexuality and lesbianism that one feels as if love could only find expression in pathological cases. But sexual perversion, whatever form it may assume in Tender, is but one variation of the other perversions that corrupted a society with an "empty harlot's mind" (80): the perversion of money, the perversion of ideas and talents. This emphasis also indicates that sex or lust has ousted love; Eros has taken the place of Agape (spiritual or brotherly love); relations between men and women are modeled on war or hunting cf. Nicole's definition of herself as "no longer a huntress of corralled game". There runs throughout the novel an undercurrent of violence, "the honorable and traditional resource of this land" (245); it assumes various forms: war, duelling, rape, murder (a Negro is killed in Rosemary's bedroom), to say nothing of bickerings, beatings, and shootings. 
As a critic noticed, moments of emotional pitch are often interrupted by a loud report; when Dick falls in love with Nicole, when Abe North leaves on the train from Paris, and when Tommy becomes Nicole's lover, each time a shot is heard that breaks the illusion; it is a recall to reality, a descent from bliss. After the assault at the station, it is said that "the shots had entered into their lives: echoes of violence followed them out onto the pavement..." (p. 97). So, sex, money, violence contribute to the disruption of the fabric of human relationships; Tender is a sort of prose version of "The Waste Land" and the depiction of the disintegration of both the protagonist and society is all the more poignant as it is to be contrasted with the potentialities, the dreams and the promises they offered. Whereas The Great Gatsby was a novel about what could never be, Tender Is The Night is a novel about what could have been. Dick Diver had the talent to succeed, and in his youth he also had the necessary sense of commitment, but he is betrayed by the very rich and by his own weaknesses. Hence the circularity of the plot and of the protagonist's journey that begins with the quest for success and ends with the reality of failure. Thus Dick returns to the States to become a wandering exile in his own country: "to be in hell is to drift; to be in heaven is to steer" (G. B. Shaw). DREAMS, ROMANCE & MAGIC As a starting-point for this section, I'd like to remind you of the words of L. Trilling: "Ours is the only nation that prides itself upon a dream and gives its name to one, 'the American dream'". F. is a perfect illustration of the fascination of that dream; in a letter to his 17-year-old daughter, Scottie, he wrote: "When I was your age, I lived with a great dream. The dream grew and I learned how to speak of it and make people listen. Then the dream divided one day when I decided to marry your mother after all, even though I knew she was spoiled and meant no good to me..." Dreams and illusions are thus the hallmark of the romantic character who, like Dick, entertains "illusions of eternal strength and health and of the essential goodness of people" (132). A profession of faith that is to be contrasted with the final description of Dick on p. 334: "he was not young any more with a lot of nice thoughts and dreams to have about himself". Dick's function is to dispense romantic wonder to others and the same function is fulfilled by the cinema with its glamour and magic suggestiveness. Rosemary is also another version of the romantic (the romantic at a discount): her emotions are fed on "things she had read, seen, dreamed" (75); she often has the "false-and-exalted feeling of being on a set" (p. 83) but unlike Dick, she does not know yet "that splendor is something in the heart" (74). Her dark magic is nonetheless dangerous, for it encourages a certain form of escapism and infantilism. So there's in the novel a constant interplay between actuality and illusion; certain scenes seem to take place on the borderline between the two: the characters and the objects surrounding them seem to be suspended in the air; see, for instance, the description of the party on p. 44: "There were fireflies...and became part of them". The motif of "suspension" emphasizes the contrast between actuality and the world of the imagination. 
NIGHT AND DAY The opposition of night and day has always held symbolic meaning and has been used by writers for centuries to suggest evil, confusion, the dark side of human nature as opposed to light, honesty, reason, wisdom. It is a fundamental opposition and motif in Fitzgerald's fiction (see for instance The Great Gatsby and the title of the novel he started on after Tender: The Count of Darkness) and in his life; in The Crack-Up he wrote that he was "hating the night when I couldn't sleep and hating the day because it went towards night" (p. 43); however, he welcomes "the blessed hour of nightmare" (Ibid.). Thus the symbolic structure of Tender reveals a contrast between the night and the day, darkness and light. The title indicates that night is tender, which seems to imply that the day is harsh and cruel. Actually, the opposition is not so clear-cut (the narrative mentions "the unstable balance between night and day", p. 247) and night and day carry ambivalent connotations. The opening scene is dominated by the glare of sunlight; it troubles the Divers and their friends, who seek shelter from it under their umbrellas; Rosemary retreats from the "hot light" (12) and the Mediterranean "yields up its pigments...to the brutal sunshine" (12). Noon dominates sea and sky (19) so that it seems "there was no life anywhere in all this expanse of coast except under the filtered sunlight of those umbrellas" (19). Dick also protects Rosemary from the hot sun on p. 20 and 26. It is also interesting to observe that when Nicole has a fit of temper or madness, "a high sun is beating fiercely on the children's hats" (206). Thus the sun is seen as something harsh, painful and even maddening. Conversely, darkness and night are at first referred to in positive terms: cf. "the lovely night", the "soft warm darkness" (296), the "soft rolling night" and also on p. 294 "she felt the beauty of the night"; Rosemary is at one point "cloaked by the erotic darkness" (49). The opposition between night and day is pointed up in F.'s description of Amiens; cf. p. 69: "In the daytime one is deflated by such towns [...] the satisfactory inexpensiveness of nowhere". Thus night is the time of enchantment, obliterating the ugliness of reality that the day mercilessly exposes; night is the time of illusion and merriment; cf. p. 91: "All of them began to laugh...hot morning". However the symbolism of the night is not merely opposite in meaning to that of the day, for night itself is ambivalent; it signifies, according to The Dictionary of Symbols, "chaos, death, madness and disintegration, reversal to the foetal state of the world"; such sinister connotations are apparent in the reference to "mental darkness" (236), to "the darkness ahead" (263) to mean death, and in Nicole's statement that after the birth of her second child "everything went dark again" (177). Night is threatening and deceptive; it is the refuge of those who are unable to cope with practical daylight reality, which is totally uncongenial to the romantic.
75,393
[ "17905" ]
[ "178707", "420086" ]
01761227
en
[ "math" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01761227/file/greenart3.pdf
Alexey D Agaltsov email: [email protected] Thorsten Hohage email: [email protected] AND Roman G Novikov email: [email protected] THE IMAGINARY PART OF THE SCATTERING GREEN FUNCTION: MONOCHROMATIC RELATIONS TO THE REAL PART AND UNIQUENESS RESULTS FOR INVERSE PROBLEMS Keywords: inverse scattering problems, uniqueness for inverse problems, passive imaging, correlation data, imaginary part of Green's function AMS subject classifications. 35R30, 35J08, 35Q86, 78A46 Monochromatic identities for the Green function and uniqueness results for passive imaging Alexey Agaltsov, Thorsten Hohage, Roman 1. Introduction. In classical inverse scattering problems one considers a known deterministic source or incident wave and aims to reconstruct a scatterer (e.g. the inhomogeniety of a medium) from measurements of scattered waves. In the case of point sources this amounts to measuring the Green function of the underlying wave equation on some observation manifold. From the extensive literature on such problems we only mention uniqueness results in [START_REF] Novikov | A multidimensional inverse spectral problem for the equation -∆ψ + (v(x) -Eu(x))ψ = 0, Funktsional[END_REF][START_REF] Nachman | Reconstructions from boundary measurements[END_REF][START_REF] Bukhgeim | Recovering a potential from Cauchy data in the two-dimensional case[END_REF][START_REF] Santos Ferreira | Determining a magnetic Schrödinger operator from partial Cauchy data[END_REF][START_REF] Agaltsov | Uniqueness and non-uniqueness in acoustic tomography of moving fluid[END_REF], stability results in [START_REF] Stefanov | Stability of the inverse problem in potential scattering at fixed energy[END_REF][START_REF] Hähner | New stability estimates for the inverse acoustic inhomogeneous medium problem and applications[END_REF][START_REF] Isaev | New global stability estimates for monochromatic inverse acoustic scattering[END_REF], and the books [26,[START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF] concerning many further aspects. Recently there has been a growing interest in inverse wave propagation problem with random sources. This includes passive imaging in seismology ( [START_REF] Snieder | A comparison of strategies for seismic interferometry[END_REF]), ocean acoustics ( [START_REF] Burov | The use of low-frequency noise in passive tomography of the ocean[END_REF]), ultrasonics ( [START_REF] Weaver | Ultrasonics without a source: Thermal fluctuation correlations at mhz frequencies[END_REF]), and local helioseismology ( [START_REF] Gizon | Computational helioseismology in the frequency domain: acoustic waves in axisymmetric solar models with flows[END_REF]). It is known that in such situations cross correlations of randomly excited waves contain a lot of information about the medium. In particular, it has been demonstrated both theoretically and numerically that under certain assumptions such cross correlations are proportional to the imaginary part of the Green function in the frequency domain. This leads to inverse problems where some coefficient(s) in a wave equation have to be recovered given only the imaginary part of Green's function. The purpose of this paper is to prove some first uniqueness results for such inverse problems. For results on related problems in the time domain see, e.g., [START_REF] Garnier | Passive sensor imaging using cross correlations of noisy signals in a scattering medium[END_REF] and references therein. 
Recall that for a random solution u(x, t) of a wave equation modeled as a stationary random process, the empirical cross correlation function over an interval [0, T ] with time lag τ is defined by C T (x 1 , x 2 , τ ) := 1 T T 0 u(x 1 , t)u(x 2 , t + τ ) dt, τ ∈ R, In numerous papers it has been demonstrated that under certain conditions the time derivative of the cross correlation function is proportional to the symmetrized outgoing Green function ∂ ∂τ E [C T (x 1 , x 2 , τ )] ∼ -[G(x 1 , x 2 , τ ) -G(x 1 , x 2 , -τ )], τ ∈ R. Taking a Fourier transform of the last equation with respect to τ one arrives at the relation E C T (x 1 , x 2 , k) ∼ 1 ki G + (x 1 , x 2 , k) -G + (x 1 , x 2 , k) = 2 k ℑG + (x 1 , x 2 , k), k ∈ R. Generally speaking, these relations have been shown to hold true in situations where the energy is equipartitioned, e.g. in an open domain the recorded signals are a superposition of plane waves in all directions with uncorellated and identically distributed amplitudes or in a bounded domain that amplitudes of normal modes are uncorrelated and identically distributed, see [START_REF] Garnier | Passive sensor imaging using cross correlations of noisy signals in a scattering medium[END_REF][START_REF] Roux | Ambient noise cross correlation in free space: Theoretical approach[END_REF][START_REF] Gizon | Computational helioseismology in the frequency domain: acoustic waves in axisymmetric solar models with flows[END_REF][START_REF] Snieder | Extracting the Green's function of attenuating heterogeneous acoustic media from uncorrelated waves[END_REF][START_REF] Snieder | Extracting earth's elastic wave response from noise measurements[END_REF]. This condition is fulfilled if the sources are uncorrelated and appropriately distributed over the domain or if there is enough scattering. This paper has mainly been motivated by two inverse problems in local helioseismology and in ocean tomography. In both cases we consider the problem of recovering the density ρ and the compressibility κ (or equivalently the sound velocity c = 1/ √ ρκ) in the acoustic wave equation ∇ • 1 ρ(x) ∇p + ω 2 κ(x)p = f, x ∈ R d , d ≥ 2, (1.1) with random sources f . We assume that correlation data proportional to the imaginary part of Green's function for this differential equation are available on the boundary of a bounded domain Ω ⊂ R d for two different values of the frequency ω > 0 and that ρ and κ are constant outside of Ω. As a main result we will show that ρ and κ are uniquely determined by such data in some open neighborhood of any reference model (ρ 0 , κ 0 ). Let us first discuss the case of helioseismology in some more detail: Data on the line of sight velocity of the solar surface have been collected at high resolution for several decades by satellite based Doppler shift measurements (see [START_REF] Gizon | Local helioseismology: Three-dimensional imaging of the solar interior[END_REF]). Based on these data, correlations of acoustic waves excited by turbulent convection can be computed, which are proportional to the imaginary part of Green's functions under assumptions mentioned above. These data are used to reconstruct quantities in the interior of the Sun such as sound velocity, density, or flows (see e.g. [START_REF] Hanasoge | Seismic sounding of convection in the sun[END_REF]). The aim of this paper is to contribute to the theoretical foundations of such reconstruction method by showing local uniqueness in the simplified model above. 
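To indicate where the factor involving the imaginary part of the Green function comes from, the following is a minimal frequency-domain sketch of the standard argument, added here for orientation and not taken from the paper. It assumes that the sources are spatially uncorrelated with common strength σ² and are supported on a large sphere ∂B_R enclosing the region of interest; σ, B_R and the surface delta δ_{∂B_R} are notation introduced only for this illustration:
\[
\hat u(x,k) = \int_{\partial B_R} G^+(x,y,k)\,\hat f(y,k)\,ds(y),
\qquad
\mathbb{E}\big[\hat f(y,k)\,\overline{\hat f(y',k)}\big] = \sigma^2\,\delta_{\partial B_R}(y-y'),
\]
so that
\[
\mathbb{E}\big[\hat C_T(x_1,x_2,k)\big]
= \mathbb{E}\big[\hat u(x_1,k)\,\overline{\hat u(x_2,k)}\big]
= \sigma^2 \int_{\partial B_R} G^+(x_1,y,k)\,\overline{G^+(x_2,y,k)}\,ds(y)
\approx \frac{\sigma^2}{k}\,\Im G^+(x_1,x_2,k),
\]
where the last step combines Green's second identity (the Helmholtz-Kirchhoff identity) with the radiation condition \(\partial G^+/\partial|y| \approx ik\,G^+\) on \(\partial B_R\) for large R. This heuristic reproduces, up to a constant factor, the proportionality to (1/k) times the imaginary part of the Green function stated above; rigorous statements under various equipartition assumptions are given in the references just cited.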
In the case of ocean tomography we consider measurements of correlations of ambient noise by hydrophones placed at the boundary of a circular area of interest. If the ocean is modeled as a layered separable waveguide, modes do not interact, and each horizontal mode satisfies the two-dimensional wave equation of the form (1.1) (see [START_REF] Burov | The possibility of reconstructing the seasonal variability of the ocean using acoustic tomography methods[END_REF][START_REF] Burov | The use of low-frequency noise in passive tomography of the ocean[END_REF]). The problem above can be reduced to the following simpler problem of independent interest: Determine the real-valued potential v in the Schrödinger equation
\[
(1.2)\qquad -\Delta\psi + v(x)\psi = k^2\psi, \qquad x \in \mathbb{R}^d,\ d \geq 2,\ k > 0,
\]
given the imaginary part of the outgoing Green function G + v (x, y, k) for one k > 0 and all x, y on the boundary of a domain containing the support of v. This problem is a natural fixed energy version of the multi-dimensional inverse spectral problem formulated originally by M.G. Krein, I.M. Gelfand and B.M. Levitan at a conference on differential equations in Moscow in 1952 (see [START_REF] Berezanskii | On the uniqueness theorem in the inverse problem of spectral analysis for the Schrödinger equation[END_REF]). In this connection recall that the Schrödinger operator admits the following spectral decomposition in L 2 (R d ):
\[
(1.3)\qquad -\Delta + v(x) = \int_0^{\infty} \lambda^2\, dE_\lambda + \sum_{j=1}^{N} E_j\, e_j \otimes e_j,
\qquad dE_\lambda = \frac{2}{\pi}\, \Im R^+_v(\lambda)\,\lambda\, d\lambda,
\]
where dE λ is the positive part of the spectral measure for -∆ + v(x), E j are nonpositive eigenvalues of -∆ + v(x) corresponding to normalized eigenfunctions e j , known as bound states, and R + v (λ) = (-∆ + v -λ 2 -i0) -1 is the limiting absorption resolvent for -∆ + v(x), whose Schwartz kernel is given by G + v (x, y, λ), see, e.g., [START_REF] Hörmander | The Analysis of Linear Partial Differential Operators II[END_REF]Lem.14.6.1]. The plan of the rest of this paper is as follows: In the following section we present our main results, in particular algebraic relations between ℑG + v and ℜG + v on ∂Ω at fixed k (Theorem 2.4 and Theorem 2.5) and local uniqueness results given only the imaginary part of Green's function (Theorem 2.7 and Theorem 2.11). The remainder of the paper is devoted to the proof of these results (see Figure 1). After discussing the mapping properties of some boundary integral operators in section 3 we give the rather elementary proof of the relations between ℑG + v and ℜG + v at fixed k in section 4. By these relations, ℜG + v is uniquely determined by ℑG + v up to the signs of a finite number of certain eigenvalues. To fix these signs, we will have to derive an inertia theorem in section 5, before we can finally show in section 6 that ℜG + v is locally uniquely determined by ℑG + v and appeal to known uniqueness results for the full Green function to complete the proof of our uniqueness theorems. Finally, in section 7 we discuss the assumptions of our uniqueness theorems before the paper ends with some conclusions.
[Fig. 1. Schema of demonstration of Theorems 2.7 and 2.11.]
2. Main results. 
For the Schrödinger equation (1.2) we will assume that v ∈ L ∞ (Ω, R), v = 0 on R d \ Ω, (2.1) Ω is an open bounded domain in R d with ∂Ω ∈ C 2,1 , (2.2) where by definition a ∂Ω ∈ C 2,1 means that ∂Ω is locally a graph of a C 2 function with Lipschitz continuous second derivatives, see [22, p.90] for more details. Moreover, we suppose that k 2 is not a Dirichlet eigenvalue of -∆ + v(x) in Ω. (2.3) For equation (1.2) at fixed k > 0 we consider the outgoing Green function G + v = G + v (x, y, k) , which is for any y ∈ R d the solution to the following problem: (-∆ + v -k 2 )G + v (•, y, k) = δ y , (2.4a) ∂ ∂|x| -ik G + v (x, y, k) = o |x| 1-d 2 , |x| → +∞. (2.4b) Recall that G v (x, y, k) = G v (y, x, k) by the reciprocity principle. In the present work we consider, in particular, the following problem: For the acoustic equation (1.1) we impose the assumptions that ρ ∈ W 2,∞ (R d , R), ρ(x) > 0, x ∈ Ω, ρ(x) = ρ c > 0, x ∈ Ω, (2.5a) κ ∈ L ∞ (Ω, R), κ(x) = κ c > 0, x ∈ Ω (2.5b) for some constants ρ c and κ c . For equation (1.1) we consider the radiating Green function P = P ρ,κ (x, y, ω), which is the solution of the following problem: (2.6) ∇ • 1 ρ ∇P (•, y, ω) + ω 2 κP (•, y, ω) = -δ y , ω > 0, ∂ ∂|x| -iω √ ρ c κ c P (x, y, ω) = o |x| 1-d 2 , |x| → +∞. In the present work we consider the following problem for equation (1.1): Problem 2.3. Determine the coefficients ρ, κ in the acoustic equation (1.1) from ℑP ρ,κ (x, y, ω) given at all x, y ∈ ∂Ω, and for a finite number of ω. Notation. If X and Y are Banach spaces, we will denote the space of bounded linear operators from X to Y by L(X, Y ) and write L(X) := L(X, X). Moreover, we will denote the subspace of compact operators in L(X, Y ) by K(X, Y ), and the subset of operators with a bounded inverse by GL(X, Y ). Besides, we denote by • ∞ the norm in L ∞ (Ω), and by •, • , • 2 the scalar product and the norm in L 2 (∂Ω). Furthermore, we use the standard notation H s (∂Ω) for L 2 -based Sobolev spaces of index s on ∂Ω (under the regularity assumption (2.2) we need |s| ≤ 3). In addition, the adjoint of an operator A is denoted by A * . 2.1. Relations between ℜG and ℑG. For fixed k > 0 let us introduce the integral operator G v (k) ∈ L(L 2 (∂Ω)) by (2.7) (G v (k)ϕ)(x) := ∂Ω G + v (x, y, k)ϕ(y) ds(y), x ∈ ∂Ω where ds(y) is the hypersurface measure on ∂Ω. For the basic properties of G v (k) see, e.g., [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Chapter 7]. Note that for the case v = 0 the Green function G + 0 is the outgoing fundamental solution to the Helmholtz equation, and G 0 is the corresponding single layer potential operator. Recall that σ Ω (-∆) := k > 0 : k 2 is a Dirichlet eigenvalue of -∆ in Ω is a discrete subset of (0, +∞) without accumulation points. Theorem 2.4. Suppose that Ω, k, and v satisfy the conditions (2.1), (2.2), (2.3). Then: 1. The mapping (2.8) (0, +∞) \ σ Ω (-∆) → L(H 1 (∂Ω), L 2 (∂Ω)), λ → Q(λ) := ℑG -1 0 (λ) has a unique continuous extension to (0, +∞). In following we will often write Q instead of Q(k). 2. G v (k) ∈ L(L 2 (∂Ω), H 1 (∂Ω) ) and the operators (2.9) A := ℜG v (k), B := ℑG v (k) satisfy the following relations: AQA + BQB = -B (2.10a) AQB -BQA = 0. (2.10b) Theorem 2.4 is proved in subsection 4.1. We would like to emphasize that relations (2.10a), (2.10b) are valid in any dimension d ≥ 1. 
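The following observation is added here for orientation and is not a statement from the paper: since the kernel of G + v (x, y, k) is symmetric in x and y by reciprocity, the operators A and B of (2.9) have real, symmetric kernels and are therefore self-adjoint on L 2 (∂Ω), so that the L 2 -adjoint of G v (k) is A - iB. A one-line computation then packages (2.10a) and (2.10b) into a single identity:
\[
G_v(k)\,Q(k)\,G_v(k)^{*}
= (A+iB)\,Q\,(A-iB)
= \big(AQA + BQB\big) + i\,\big(BQA - AQB\big)
= -B
= -\,\Im G_v(k).
\]
Conversely, since A, B and Q all have real-valued kernels, taking real and imaginary parts of this identity recovers (2.10a) and (2.10b). In this compact form the relations read as a fixed-frequency, operator-valued analogue of an optical-theorem-type constraint, expressing the imaginary part of the boundary operator through a quadratic expression in the full operator.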
For the next theorem, recall that the exterior boundary value problem ∆u + k 2 u = 0 in R d \ Ω, (2.11a) u = u 0 on ∂Ω, (2.11b) ∂u ∂|x| -iku = o |x| (1-d)/2 as |x| → ∞ (2.11c) has a unique solution for all u 0 ∈ C(∂Ω), which has the asymptotic behavior u(x) = e ik|x| |x| (d-1)/2 u ∞ x |x| 1 + O 1 |x| , |x| → ∞. Here u ∞ ∈ L 2 (S d-1 ) is called the farfield pattern of u. Theorem 2.5. Suppose that Ω, k, and v satisfy the conditions (2.1), (2.2), and (2.3). Then: I. The operator C(∂Ω) → L 2 (S d-1 ), u 0 → √ ku ∞ mapping Dirichlet boundary values u 0 to the scaled farfield pattern u ∞ of the solution to (2.11) has a continuous extension to an operator T (k) ∈ L L 2 (∂Ω), L 2 (S d-1 )), and T (k) is compact, injective, and has dense range. Moreover, Q(k) defined in Theorem 2.4 has a continuous extension to L(L 2 (∂Ω)) satisfying Q(k) = -T * (k)T (k). (2.12) II. The operators A, B ∈ L L 2 (S d-1 ) defined by (2.13) A := ℜ G v (k), B := ℑ G v (k), G v (k) := T (k)G v (k)T * (k) are compact and symmetric and satisfy the relations A 2 = B -B 2 , (2.14a) A B = B A. (2.14b) III. The operators A, B are simultaneously diagonalisable in L 2 (S d-1 ). Moreover, if G v (k)f = (λ A + iλ B )f for some f = 0 and λ A , λ B ∈ R, then (2.15) λ 2 A = λ B -λ 2 B . Theorem 2.5 is proved in subsection 4.2. We could replace T (k) by any operator satisfying (2.12) in most of this paper, e.g. -Q(k). However, G * v (k) has a physical interpretation given in Lemma 7.1, and this will be used to verify condition (2.16) below. In analogy to the relations (2.10a) and (2.10b), the relations (2.14a) and (2.14b) are also valid in any dimension d ≥ 1. Remark 2.6. The algebraic relations between ℜG + v and ℑG + v given in Theorem 2.4 and Theorem 2.5 involve only one frequency in contrast to well-known Kramers-Kronig relations which under certain conditions are as follows: ℜG + v (x, y, k) = 1 π p.v. +∞ -∞ ℑG + v (x, y, k ′ ) k ′ -k dk ′ , ℑG + v (x, y, k) = - 1 π p.v. +∞ -∞ ℜG + v (x, y, k ′ ) k ′ -k dk ′ , where x = y, k ∈ R for d = 3 or k ∈ R\{0} for d = 2, and G + v (x, y, -k) := G + v (x, y, k), G + 0 (x, y, -k) := G + 0 (x, y, k), k > 0. In this simplest form the Kramers-Kronig relations are valid, for example, for the Schrödinger equation (1.2) under conditions (2.1), (2.2), d = 2, 3, if the discrete spectrum of -∆ + v in L 2 (R d ) is empty and 0 is not a resonance (that is, a pole of the meromorphic continuation of the resolvent k → (-∆ + v -k 2 ) -1 ). Identifiability of v from ℑG v . We suppose that (2.16) if λ 1 , λ 2 are eigenvalues of G v (k) with ℑλ 1 = ℑλ 2 , then ℜλ 1 = ℜλ 2 , where G v (k) = A + i B is the operator defined in Theorem 2.5. Under this assumption any eigenbasis of B in L 2 (S d-1 ) is also an eigenbasis for A in L 2 (S d-1 ) in view of Theorem 2.5 (III). Theorem 2.7. Let Ω satisfy (2.2), d ≥ 2, v 0 satisfy (2.1) and let k > 0 be such that ℜG v0 (k) is injective in H -1 2 (∂Ω) and (2.16) holds true with v = v 0 . Then there exists δ = δ(Ω, k, v 0 ) > 0 such that for any v 1 , v 2 satisfying (2.1) and v 1 -v 0 ∞ ≤ δ, v 2 -v 0 ∞ ≤ δ, the equality ℑG + v1 (x, y, k) = ℑG + v2 (x, y, k) for all x, y ∈ ∂Ω implies that v 1 = v 2 . Theorem 2.7 is proved in subsection 6.1. In section 7 we present results indicating that the assumptions of this theorem are "generically" satisfied. We also mention the following simpler uniqueness result for ℜG + v based on analytic continuation if ℑG + v is given not only for one frequency, but for an interval of frequencies. This uniqueness result is even global. 
However, analytic continuation is notoriously unstable, and computing ℑG + v on an interval of frequencies from time dependent data would require an infinite time window. Therefore, it is preferable to work with a discrete set of frequencies. [START_REF] Berezanskii | On the uniqueness theorem in the inverse problem of spectral analysis for the Schrödinger equation[END_REF][START_REF] Novikov | A multidimensional inverse spectral problem for the equation -∆ψ + (v(x) -Eu(x))ψ = 0, Funktsional[END_REF]. Proposition 2.8. Let Ω satisfy (2.2), d ∈ {2, 3}, and v 1 , v 2 satisfy (2. 1). Suppose that the discrete spectrum of the operators -∆ + v j in L 2 (R d ) is empty and 0 is not a resonance (that is, a pole of the meromorphic continuation of the resolvent R + vj (k) = (-∆ + v j -k 2 -i0) -1 ), j = 1, 2. Besides, let x, y ∈ R d , x = y, be fixed. Then if ℑG + v1 (x, y, k) = ℑG + v2 (x, y, k) for all k ∈ (k 0 -ε, k 0 + ε) for some fixed k 0 > 0, ε > 0, then G + v1 (x, y, k) = G + v2 (x, y, k) for all k > 0. In addition, if ℑG + v1 (x, y, k) = ℑG + v2 (x, y, k) for all x, y ∈ ∂Ω, k ∈ (k 0 -ε, k 0 + ε), then v 1 = v 2 . Proof. Under the assumptions of Proposition 2.8 the functions G + vj (x, y, k) at fixed x = y admit analytic continuation to a neighborhood of each k ∈ R (k = 0 for d = 2) in C. It follows that ℑG + vj (x, y, k) are real-analytic functions of k ∈ R (k = 0 for d = 2). Moreover, ℑG + vj (x, y, -k) = -ℑG + vj (x, y, k) for all k > 0. Hence, the equality ℑG + v1 (x, y, k) = ℑG + v2 (x, y, k) for k ∈ (k 0 -ε, k 0 + ε) implies the same equality for all k ∈ R (k = 0 for d = 2). Taking into account Kramers-Kronig relations recalled in Remark 2.6, we obtain, in particular, that G + v1 (x, y, k) = G + v2 (x, y, k), k > 0. Moreover, the equality G + v1 (x, y, k) = G + v2 (x, y, k), x, y ∈ ∂Ω, k > 0, implies v 1 = v 2 see, e.g., 2.3. Identifiability of ρ and κ from ℑP ρ,κ . Let P ρ,κ (x, y, ω) be the function of (2.6) and define P ρ,κ,ω , P ρ,κ,ω as P ρ,κ,ω u (x) := ∂Ω P ρ,κ (x, y, ω)u(y) ds(y), x ∈ ∂Ω, u ∈ H -1 2 (∂Ω), P ρ,κ,ω := T (k)P ρ,κ,ω T * (k), k := ω √ ρ c κ c , where T (k) is the same as in Theorem 2.5. We suppose that (2.17) if λ 1 , λ 2 are eigenvalues of P ρ,κ,ω with ℑλ 1 = ℑλ 2 , then ℜλ 1 = ℜλ 2 . Let W 2,∞ (Ω) denote the L ∞ -based Sobolev space of index 2. The following theorems are local uniqueness results for the acoustic equation (1.1). Theorem 2.9. Let Ω satisfy (2.2), d ≥ 2, and suppose that ρ 0 , κ 0 satisfy (2.5a), (2.5b) for some known ρ c , κ c . Let ω 1 , ω 2 be such ℜP ρ0,κ0,ωj is injective in H -1 2 (∂Ω) and (2.17) holds true with ρ = ρ 0 , κ = κ 0 , ω = ω j , j = 1, 2. Besides, let ρ 1 , κ 1 and ρ 2 , κ 2 be two pairs of functions satisfying (2.5a), (2.5b). Then there exist constants δ 1,2 = δ 1,2 (Ω, ω 1 , ω 2 , κ 0 , ρ 0 ) such that if ρ 1 -ρ 0 W 2,∞ ≤ δ 1 , ρ 2 -ρ 0 W 2,∞ ≤ δ 1 , κ 1 -κ 0 ∞ ≤ δ 2 , κ 2 -κ 0 ∞ ≤ δ 2 , then the equality ℑP ρ1,κ1 (x, y, ω j ) = ℑP ρ2,κ2 (x, y, ω j ) for all x, y ∈ ∂Ω and j ∈ {1, 2} implies that ρ 1 = ρ 2 and κ 1 = κ 2 . Proof of Theorem 2.9. Put v j (x, ω) = ρ 1 2 j (x)∆ρ -1 2 j (x) + ω 2 (ρ c κ c -κ j (x)ρ j (x)), k 2 = ω 2 ρ c κ c . Then P ρj ,κj (x, y, ω) = ρ c G + vj (x, y, k), where G + vj denotes the Green function for equation (1.2) defined according to (2.4). By assumptions we obtain that ℑG + v1 (x, y, k) = ℑG + v2 (x, y, k), x, y ∈ ∂Ω, k = k 1 , k 2 , k j = ω j √ ρ c κ c . Using Theorem 2.7, we obtain that v 1 (x, ω j ) = v 2 (x, ω j ), x ∈ Ω, j = 1, 2. 
Together with the definition of v j it follows that ρ 1 2 1 ∆ρ -1 2 1 = ρ 1 2 2 ∆ρ -1 2 2 and κ 1 = κ 2 . In turn, the equality ρ 1 2 1 ∆ρ -1 2 1 = ρ 1 2 2 ∆ρ -1 2 2 together with the boundary conditions ρ 1 | ∂Ω = ρ 2 | ∂Ω = ρ c imply that ρ 1 = ρ 2 , see, e.g., [1]. Theorem 2.10. Let Ω satisfy (2.2), d ≥ 2, and suppose that ρ 0 , κ 0 satisfy (2.5a), (2.5b) for some known ρ c , κ c . Let ω be such that ℜP ρ0,κ0,ω is injective in H -1 2 (∂Ω) and (2.17) holds true with ρ = ρ 0 , κ = κ 0 . Besides, let κ 1 , κ 2 satisfy (2.5b). Then there exists δ = δ(Ω, ω, κ 0 , ρ 0 ) such that the bounds κ 1 -κ 0 ∞ < δ, κ 2 -κ 0 ∞ < δ, and the equality ℑP ρ0,κ1 (x, y, ω) = ℑP ρ0,κ2 (x, y, ω) for all x, y ∈ ∂Ω imply that κ 1 = κ 2 . Proof of Theorem 2.10. In analogy to the proof of Theorem 2.9, put v j (x, ω) = ρ 1 2 0 (x)∆ρ -1 2 0 (x) + ω 2 (ρ c κ c -κ j (x)ρ 0 (x)). Then P ρ0,κj (x, y, ω) = ρ c G + vj (x, y, k) , where G + vj denotes the Green function for equation (1.2) defined according to (2.4). By assumptions we obtain that ℑG + v1 (x, y, k) = ℑG + v2 (x, y, k), x, y ∈ ∂Ω. Using Theorem 2.7 we obtain that v 1 (x, ω) = v 2 (x, ω), x ∈ Ω. Now it follows from the definition of v j that κ 1 = κ 2 . The following uniqueness theorem for the coefficient κ only does not require smallness of this coefficient, but only smallness of the frequency ω. Note that it is not an immediate corollary to Theorem 2.7 since the constant δ in Theorem 2.7 depends on k. Theorem 2.11. Let Ω satisfy (2.2), d ≥ 2, and assume that ρ ≡ 1 and κ c = 1 so that (1.1) reduces to the Helmholtz equation ∆p + ω 2 κ(x)p = f. Moreover, suppose that κ 1 and κ 2 are two functions satisfying (2.5b) and κ 1 ∞ ≤ M, κ 2 ∞ ≤ M for some M > 0. Then for there exists ω 0 = ω 0 (Ω, M ) > 0 such that if ℑP 1,κ1 (x, y, ω) = ℑP 1,κ2 (x, y, ω) for all x, y ∈ ∂Ω, for some fixed ω ∈ (0, ω 0 ], then κ 1 = κ 2 . Theorem 2.11 is proved in section 6. Mapping properties of some boundary integral operators. In what follows we use the following notation: R + v (k)f (x) = Ω G + v (x, y, k)f (y) dy, x ∈ Ω, k > 0. Remark 3.1. The operator R + v (k) is the restriction from R d to Ω of the outgoing (limiting absorption) resolvent k → (-∆ + v -k 2 -i0) -1 . It is known that if v satisfies (2.1) and k 2 is not an embedded eigenvalue of -∆ + v(x) in L 2 (R d ), then R + v (k) ∈ L L 2 (Ω), H 2 (Ω) , G + 0 (x, y, k) = i 4 k 2π|x-y| ν H (1) ν (k|x -y|) with ν := d 2 -1. In addition, we denote the single layer potential operator for the Laplace equation by (3.2) (Ef )(x) := ∂Ω E(x -y)f (y) ds(y), x ∈ ∂Ω with E(x -y) := -1 2π ln |x -y|, d = 2, 1 d(d-2)ω d |x -y| 2-d , d ≥ 3, where ω d is the volume of the unit d-ball and E is the fundamental solution for the Laplace equation in R d . Note that -∆ x E(x -y) = δ y (x). Lemma 3.2. Let v, v 0 ∈ L ∞ (Ω, R) and let k > 0 be fixed. There exist c 1 = c 1 (Ω, k, v 0 ), δ 1 = δ 1 (Ω, k, v 0 ) such that if v -v 0 ∞ ≤ δ 1 , then R + v (k)f H 2 (Ω) ≤ c 1 (Ω, k, v 0 ) f L 2 (Ω) , for any f ∈ L 2 (Ω). In addition, for any M > 0 there exist constants c ′ 1 = c ′ 1 (Ω, M ) and k 1 = k 1 (Ω, M ) such that if v ∞ ≤ M , then R + k 2 v (k)f H 2 (Ω) ≤ c ′ 1 (Ω, M ) f L 2 (Ω) , d ≥ 3, | ln k| c ′ 1 (Ω, M ) f L 2 (Ω) , d = 2 for all f ∈ L 2 (Ω) and all k ∈ (0, k 1 ). Proof. We begin by proving the first statement of the lemma. The operators R + v (k) and R + v0 (k) are related by a resolvent identity in L 2 (Ω): (3.3) R + v (k) = R + v0 (k) Id + (v -v 0 )R + v0 (k) -1 , see, e.g., [19, p.248] for a proof. 
The resolvent identity is valid, in particular, if v -v 0 ∞ < R + v0 (k) -1 , where the norm is taken in L L 2 (Ω), H 2 (Ω) . It follows from (3.3) that R + v (k) ≤ R + v0 (k) 1 -v -v 0 ∞ R + v0 (k) , where the norms are taken in L L 2 (Ω), H 2 (Ω) . Taking δ 1 < R + v0 (k) -1 , we get the first statement of the lemma. To prove the second statement of the lemma, we begin with the case of v = 0. The Schwartz kernel of R + 0 (k) is given by the outgoing Green function for the Helmholtz equation defined in formula (3.1). In particular, ℑG + 0 (x -y, k) satisfies (3.4) ℑG + 0 (x, y, k) = 1 4 (2π) -ν k ν |x -y| -ν J ν (k|x -y|) = k 2ν 2 2ν+2 π ν ν! 1 + O(z 2 ) with the Bessel function J ν = ℜH (1) ν of order ν, where z = k|x -y| and O is an entire function with O(0) = 0. In addition, (3.5) ℜG + 0 (x -y, k) =          E(x -y) -1 2π ln k 2 + γ (1 + O 2 (z 2 )) + O 2 (z 2 ), d = 2, E(x -y) 1 + O 3 (z 2 ) , d ≥ 3 odd E(x -y) 1 + O d (z 2 ) -k d-2 2 2ν+3 π ν+1 ν! ln(z/2)(1 + O d (z 2 )), d ≥ R + k 2 v (k) ≤ R + 0 (k) 1 -k 2 M R + 0 (k) , if k 2 M R + 0 (k) < 1 , where the norms are taken in L L 2 (Ω), H 2 (Ω) . This inequality together with the second statement of the lemma for v = 0 imply the second statement of the lemma for general v. Lemma 3.3. Let v 0 , v 1 , v 2 ∈ L ∞ (Ω, R). Then for any k > 0 (3.6) G v1 (k) -G v2 (k) ∈ L H -3 2 (∂Ω), H 3 2 (∂Ω) . In addition, there exist c 2 (Ω, k, v 0 ), δ 2 (Ω, k, v 0 ) such that if v 1 -v 0 ∞ ≤ δ 2 , v 2 - v 0 ∞ ≤ δ 2 , then (3.7) G v1 (k) -G v2 (k) ≤ c 2 (Ω, k, v 0 ) v 1 -v 2 ∞ , where the norm is taken in L H -3 2 (∂Ω), H 3 2 (∂Ω) . Furthermore, for any M > 0 there exist constants c ′ 2 = c ′ 2 (Ω, M ) and k 2 = k 2 (Ω, M ) such that if v 1 ∞ ≤ M , v 2 ∞ ≤ M , then (3.8) G k 2 v1 (k) -G k 2 v2 (k) ≤ c ′ 2 (Ω, M )k 2 v 1 -v 2 ∞ , for d ≥ 3, c ′ 2 (Ω, M )k 2 | ln k| 2 v 1 -v 2 ∞ , for d = 2 holds true for all k ∈ (0, k 2 ), where the norms are taken in L H -3 2 (∂Ω), H Proof. Note that (3.9) G vj (k) = γR + vj (k)γ * , j = 1, 2, where γ ∈ L H s (Ω), H s-1 2 (∂Ω) and γ * ∈ L H -s+ 1 2 (∂Ω), H -s (Ω) for s ∈ ( 1 2 , 2] are the trace map and its dual (see, e.g., [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.3.37]). Here H s (Ω) denotes the space of distributions u on Ω, which are the restriction of some U ∈ H s (R d ) to Ω, i.e. u = U | Ω , whereas H s (Ω) denotes the closure of the space of distributions on Ω in H s (R d ) (see [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]). Recall that for a Lipschitz domain we have H s (Ω) * = H -s (Ω) for all s ∈ R ( [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm. 3.30]). The operators R + v1 (k) and R + v2 (k) are subject to the resolvent identity (3.10) R + v2 (k) -R + v1 (k) = R + v1 (k) v 1 -v 2 R + v2 (k), see, e.g., [19, p.248] for the proof. Together with (3.10) we obtain that (3.11) G v2 (k) -G v1 (k) = γR + v1 (k)(v 1 -v 2 )R + v2 (k)γ * . It follows from Remark 3.1 and from a duality argument that R + v (k) is bounded in L L 2 (Ω), H 2 (Ω) and in L H -2 (Ω), L 2 (Ω) . Taking into account that all maps in the sequence H -3 2 (∂Ω) γ * -→ H -2 (Ω) R + v 2 (k) -→ L 2 (Ω) v1-v2 -→ L 2 (Ω) R + v 1 (k) -→ H 2 (Ω) γ -→ H 3 2 (∂Ω) are continuous, we get (3.6). 
It follows from (3.11) that there exists c ′′ 2 = c ′′ 2 (Ω) such that G v1 (k) -G v2 (k) ≤ c ′′ 2 (Ω) R + v1 (k) R + v2 (k) v 1 -v 2 ∞ , where the norm on the left is taken in L H -3 2 (∂Ω), H 3 2 (∂Ω) , and the norms on the right are taken in L L 2 (Ω), H 2 (Ω) . Using this estimate and Lemma 3.2 we obtain the second assertion of the present lemma. Using the estimate for a pair of potentials (k 2 v 1 , k 2 v 2 ) instead of (v 1 , v 2 ) and using Lemma 3.2 we obtain the third assertion of the present lemma. Lemma 3.4. Suppose that (2.2) holds true and v satisfies (2.1). Then G v (k)and ℜG v (k) are Fredholm operators of index zero in spaces L(H s-1 2 (∂Ω), H s+ 1 2 (∂Ω)), s ∈ [-1, 1], real analytic in k ∈ (0, +∞). If, in addition, v ∈ C ∞ (R d , R), then G v (k) is boundedly invertible in these spaces if and only if (2.3) holds. Proof. It is known that G 0 (k) is Fredholm of index zero in the aforementioned spaces, see [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.7.17]. Besides, it follows from Lemma 3.3 that G v (k) -G 0 (k) is compact in each of the aforementioned spaces, so that G v (k) is Fredholm of index zero, since the index is invariant with respect to compact perturbations. Moreover, it follows from (3.4) that ℑG 0 (k) has a smooth kernel, which implies that ℜG v (k) is also Fredholm of index zero. It follows from [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.7.17] and [24, Thm.1.6] that for any v (k) has analytic continuation to a neighborhood of each k > 0 in C and the this is also true for G v (k) in view of formula (3.9). Hence, ℜG v (k) is real analytic for k > 0 and the same is true for ℜG 0 (k). -1 ≤ s ≤ 1 the operator G v (k) is invertible in L H s- Let us introduce the operator (3.12) W (k) := ℜG 0 (k) -E, d ≥ 3, ℜG 0 (k) -E + 1 2π ln k 2 + γ 1, • 1, d = 2 , where γ is the Euler-Mascheroni constant, and 1, • denotes the scalar product with 1 in L 2 (∂Ω). Lemma 3.5. There exist c 3 = c 3 (Ω), k 3 = k 3 (Ω) such that W (k) belongs to K H -1 2 (∂Ω), H 1 2 (∂Ω) and W (k) ≤      k 2 c 3 (Ω), d = 3 or d ≥ 5, k| ln k|c 3 (Ω), d = 4, k 2 | ln k|c 3 (Ω), d = 2 for all k ∈ (0, k 3 ), where the norm is taken in L H -1 2 (∂Ω), H 1 2 (∂Ω) . Proof. It follows from (3.5) that W (k) ∈ L L 2 (∂Ω), H 2 (∂Ω) . By duality and approximation we get W (k) ∈ K H -1 2 (∂Ω), H 1 v = 0 that if λ > 0 is such that λ 2 is not a Dirichlet eigenvalue of -∆ in Ω, then Q(λ) ∈ L(H 1 (∂Ω), L 2 (∂Ω)) is well defined as stated in (2.8). It also follows from Lemma 3.4 that G v (k) ∈ L(L 2 (∂Ω), H 1 (∂Ω)) if (2.3) holds. To prove the remaining assertions of Theorem 2.4 we suppose first that in addition to the initial assumptions of Theorem 2.4 the condition (2.3) holds true for v = 0. Let us define the Dirichlet-to-Neumann map Φ v ∈ L H 1/2 (∂Ω), H -1/2 (∂Ω) by Φ v f := ∂ψ ∂ν where ψ is the solution to -∆ψ + vψ = k 2 ψ in Ω, ψ = f on ∂Ω, and ν is the outward normal vector on ∂Ω. Moreover, let Φ 0 denote the corresponding operator for v = 0. It can be shown (see e.g., [START_REF] Nachman | Reconstructions from boundary measurements[END_REF]Thm.1.6]) that under the assumptions of Theorem 2.4 together with (2.3) for v = 0 these operators are related to G and G 0 as follows: (4.1) G -1 v -G -1 0 = Φ v -Φ 0 . For an operator T between complex function spaces let T f := T f denote the operator with complex conjugate Schwarz kernel, and note that T -1 = T -1 if T is invertible. 
Since v is assumed to be real-valued, it follows that Φ v = Φ v . Therefore, taking the complex conjugate in (4.1), we obtain (G v ) -1 -(G 0 ) -1 = Φ v -Φ 0 . Combining the last two equations yields (4.2) (G -1 v ) -(G v ) -1 = (G 0 ) -1 -(G 0 ) -1 . Together with the definitions (2.9) of A, B, and Q, we obtain the relation (A + iB)iQ(A -iB) = -iB, which can be rewritten as the two relations (2.10). Thus, Theorem 2.4 is proved under the additional assumption that (2.3) is satisfied for v = 0. Moreover, it follows from formula (4.2) that the mapping (2.8) extends to all k > 0, i.e. the assumption that k 2 is not a Dirichlet eigenvalue of -∆ in Ω can be dropped. More precisely, for any k > 0 one can always find v satisfying (2.1), (2.3) such that the expression on the hand side left of formula (4.2) is well-defined and can be used to define Q(k). The existence of such v follows from monotonicity and upper semicontinuity of Dirichlet eigenvalues. This completes the proof of Theorem 2.4. 4.2. Proof of Theorem 2.5. Part I. Let k > 0 be fixed. The fact that T (k) extends continuously to L 2 (∂Ω) and is injective there is shown in [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm.3.28]. More precisely, the injectivity of T (k) in L(L 2 (∂Ω), L 2 (S d-1 )) is proved in this book only in dimension d = 3, but the proof works in any dimension d ≥ 2. In addition, T (k) ∈ L L 2 (∂Ω), L 2 (S d-1 ) is compact as an operator with continuous integral kernel (see [8, (3. 58)]). It follows from the considerations in the last paragraph of the proof of Theorem 2.4 that for any k > 0 there exists v ∈ C ∞ (R d , R) with supp v ⊂ Ω such that k 2 is not a Dirichlet eigenvalue of -∆ + v(x) in Ω and such that Q(k) = ℑG -1 v (k). Recall the formula (4.3) ℑG + v (x, y, k) = c 1 (d, k) S d-1 ψ + v (x, kω)ψ + v (y, kω) ds(ω), c 1 (d, k) := 1 8π k 2π d-2 where ψ + v (x, kω) is the total field corresponding to the incident plane wave e ikωx (i.e. ψ + v (•, kω) solves (1.2) and ψ + v (•, kω) -e ikω• satisfies the Sommerfeld radiation condition (2.11c)), see, e.g., [23, (2.26)]. It follows that the operator ℑG v (k) admits the factorization ℑG v (k) = c 1 (d, k)H v (k)H * v (k), where the operator H v (k) ∈ L(L 2 (S d-1 ), L 2 (∂Ω) ) is defined as follows: H v (k)g (x) := S d-1 ψ + v (x, kω)g(ω) ds(ω) Recall that H v (k) with v = 0 is the Herglotz operator, see, e.g., [8, (5.62)]. Lemma 4.1. Under the assumption (2.3) H * v (k) extends to a compact, injective operator with dense range in L H -1 (∂Ω), L 2 (S d-1 ) . If (Rh)(ω) := h(-ω), the following formula holds in H -1 (∂Ω): (4.4) RH * v (k) = 1 √ k c 2 (d, k) T (k)G v (k) with c 2 (d, k) := 1 4π exp -iπ d-3 4 k 2π d-3 2 Proof of Lemma 4.1. We start from the following formula, which is sometimes referred to as mixed reciprocity relation (see [11, (4.15)] or [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm. 3.16]): G + v (x, y, k) = c 2 (d, k) e ik|x| |x| (d-1)/2 ψ + v y, -k x |x| + O 1 |x| (d+1)/2 , |x| → +∞ This implies (G v (k)h)(x) = c 2 (d, k) e i|k||x| |x| (d-1)/2 (RH * v (k)h)(x) + O 1 |x| (d+1)/2 , |x| → +∞ for h ∈ L 2 (∂Ω) , where (G v (k)ϕ)(x) is defined in the same way as G v ϕ(x) in formula (2.7) but with x ∈ R d \ Ω, and from this we obtain (4.4) in L 2 (∂Ω). Recall also that G v (k) ∈ GL(H -1 (∂Ω), L 2 (∂Ω)) if (2. 3) holds (see Lemma 3.4). 
This together with injectivity of T (k) in L(L 2 (∂Ω), L 2 (S d-1 )) and formula (4.4) imply that H * v (k) extends by continuity to a compact injective operator with dense range in L H -1 (∂Ω), L 2 (S d-1 ) where it satisfies (4.4). Using (4.3), (4.4), and the identities R * R = I = RR * and R = R, eq. (2.12) can be shown as follows: -Q(k) = -1 2i G -1 v -G -1 v = G -1 v ℑG v G -1 v = c 1 (d, k)(H * v G -1 v ) * H * v G -1 v = c1(d,k) k|c2(d,k)| 2 T * (k)RR * T (k) = T * (k)T (k). Part II. The operators A, B ∈ L L 2 (∂Ω) are compact in view of Lemma 3.4 and Part I of Theorem 2.5. The relations (2.14a) and (2.14b) are direct consequences of (2.10a), (2.10b) and of definition (2.13). Part III. The operators A, B ∈ L L 2 (∂Ω) are real, compact symmetric and commute by (2.14b). It is well known (see e.g. [32, Prop. 8.1.5] that under these conditions A and B must have a common eigenbasis in L 2 (∂Ω). Moreover, if follows from (2.14a) that if f ∈ L 2 (∂Ω) is a common eigenfunction of A and B, then the corresponding eigenvalues λ A and λ B of A and B, respectively, satisfy the equation (2.15). Theorem 2.5 is proved. Stability of indices of inertia. Let S be a compact topological manifold (in what follows it will be S d-1 or ∂Ω). Let A ∈ L L 2 (S) be a real symmetric operator and suppose that (5.1) L 2 R (S) admits an orthonormal basis of eigenvectors of A. We denote this basis by {ϕ n : n ≥ 1}, i.e. Aϕ n = λ n ϕ n . Property (5.1) is obviously satisfied if A is also compact. Let us define the projections onto the sum of the non negative and negative eigenspaces by (5.2) P A + x := n : λn≥0 x, ϕ n ϕ n P A -x := n : λn<0 x, ϕ n ϕ n In addition, let L A -, L A + denote the corresponding eigenspaces: (5.3) L A -= ran P A -, L A + = ran P A + . Then it follows from Ax = ∞ n=1 λ n x, ϕ n ϕ n that Ax, x = | Ax + , x + | -| Ax -, x -| with x ± := P A ± x. The numbers rk P A + and rk P A -in N 0 ∪ {∞} (where N 0 denotes the set of nonnegative integers) are called positive and negative index of inertia of A, and the triple rk P A + , dim ker A, rk P A -is called inertia of A. A generalization of the Sylvester inertia law to Hilbert spaces states that for a self-adjoint operator A ∈ L(X) on a separable Hilbert space X and an operator Λ ∈ GL(X), the inertias of A and Λ * AΛ coincide (see [START_REF] Cain | Inertia theory[END_REF]Thm.6.1,p.234]). We are only interested in the negative index of inertia, but we also have to consider operators Λ which are not necessarily surjective, but only have dense range. Lemma 5.1. Let S 1 , S 2 be two compact topological manifolds, (A, A, Λ) be a triple of operators such that A ∈ L(L 2 (S 1 )) and A ∈ L(L 2 (S 2 )) are real, symmetric, Λ ∈ L(L 2 (S 1 ), L 2 (S 2 )), A = ΛAΛ * and A, A satisfy (5.1). Then rk P A -≤ rk P A -. Moreover, if rk P A -< ∞ and Λ is injective, then rk P A -= rk P A -. Proof. For each x ∈ L A -\ {0} we have 0 > Ax, x = AΛ * x, Λ * x = AP A + Λ * x, P A + Λ * x -AP A -Λ * x, P A -Λ * x ≥ -AP A -Λ * x, P A -Λ * x , which shows that P A -Λ * x = 0. Hence, the linear mapping L A -→ L A -, x → P A -Λ * x, is injective. This shows that rk P A -≤ rk P A -. Now suppose that d = rk P A -< ∞ and that Λ is injective. Note that the injectivity of Λ implies that Λ * has dense range (see, e.g., [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm. 4.6]). Let y 1 , . . . , y d be an orthonormal basis of L A -with Ay j = λ j y j . Let l min = min{|λ 1 |, . . . , |λ d |} and let x 1 , . . . 
, x d be such that y j -Λ * x j 2 < ε, 5ε √ d A < l min , j = 1, . . . , d. Let α 1 , . . . , α d ∈ R be such that j |α j | 2 = 1 and put x = j α j x j , y = j α j y j . Note that y -Λ * x 2 ≤ ε j |α j | ≤ ε √ d, Ay, y ≥ Ax, x -5ε √ d A > Ax, x -l min . Then we have -l min ≥ Ay, y > Ax, x -l min = AP A + x, P A + x -AP A -x, P A -x -l min ≥ -AP A -x, P A -x -l min , j = 1, . . . , d. Hence, P A -x = 0. Thus, the linear mapping L A -→ L A -, defined on the basis by y j → P A -x j , j = 1, . . . , d, is injective and rk P A -≤ rk P A -. The assumption (5.1) in Lemma 5.1 can be dropped, but then the operators P A ± , P A ± must be defined using the general spectral theorem for self-adjoint operators. The following two lemmas address the stability of the negative index of inertia under perturbations. We first look at perturbations of finite rank. Lemma 5.2. Let S be a compact topological manifold, let A 1 , A 2 be compact selfadjoint operators in L 2 (S) and set n j := rk P Aj -for j = 1, 2. If n 1 < ∞ and rk(A 1 - A 2 ) < ∞, then n 2 ≤ n 1 + rk(A 1 -A 2 ). Proof. Let λ Aj 1 ≤ λ Aj 2 ≤ • • • < 0 denote the negative eigenvalues of A j in L 2 (S) , sorted in ascending order with multiplicities. By the min-max principle we have that max S k-1 min x∈S ⊥ k-1 , x =1 A j x, x = λ Aj k , 1 ≤ k ≤ n j , sup S k-1 min x∈S ⊥ k-1 , x =1 A j x, x = 0, k > n j where the maximum is taken over all (k -1)-dimensional subspaces S k-1 of L 2 (S) and S ⊥ denotes the orthogonal complement of S k-1 in L 2 (S). Let K = A 1 -A 2 , r = rk K and note that A 1 x = A 2 x for any x ∈ ker K = (ran K) ⊥ . Also note that (S k-1 ⊕ ker K) ⊥ ⊂ S ⊥ k-1 ∩ ran K . For k = n 1 + 1 we obtain 0 = sup S k-1 min x∈S ⊥ k-1 , x =1 A 1 x, x ≤ sup S k-1 min A 2 x, x : x ∈ (S k-1 ⊕ ker K ⊥ , x = 1 ≤ sup S k-1+r min x∈S k-1+r , x =1 A 2 x, x = λ A2 k+r if n 2 ≥ k + r, 0 else. Taking into account that λ A2 k+r < 0, we obtain that only the second case is possible, and this implies n 2 ≤ k -1 + r. In the next lemma we look at "small" perturbations. The analysis is complicated by the fact that we have to deal with operators with eigenvalues tending to 0. Moreover, we not only have to show stability of rk P A -but also of L A -. Lemma 5.3. Let S 1 be a C 1 compact manifold and S 2 a topological compact manifold. Let (A, A 0 , Λ) be a triple of operators such that A, A 0 ∈ L L 2 (S 1 ) are real, symmetric, satisfying (5.1); Λ ∈ L L 2 (S 1 ), L 2 (S 2 ) is injective and A 0 ∈ GL H -1 2 (S 1 ), H 1 2 (S 1 ) ∩ GL L 2 (S 1 ), H 1 (S 1 ) , rk P A0 -< ∞, A -A 0 ∈ K H -1 2 (S 1 ), H 1 2 (S 1 ) Put A = ΛAΛ * , A 0 = ΛA 0 Λ * . The following statements hold true: 1. rk P A -< ∞. 2. For any σ > 0 there exists δ = δ(A 0 , Λ, σ) such that if A -A 0 < δ in L H -1 2 (S 1 ), H 1 2 (S 1 ) , then (a) rk P A -= rk P A0 -, (b) A is injective in H -1 2 (S 1 ), (c) if Af = λf for some f ∈ L 2 (S 2 ) with f 2 = 1, then λ < 0 if and only if d(f, L A0 -) < 1 2 , (d) all negative eigenvalues of A in L 2 (S d-1 ) belong to the σ-neighborhood of negative eigenvalues of A 0 . Proof. First part. We have that A = |A 0 | 1 2 Id + R + |A 0 | -1 2 (A -A 0 )|A 0 | -1 2 |A 0 | 1 2 , with R finite rank compact operator in L 2 (S 1 ). More precisely, starting from the orthonormal eigendecomposition of A 0 in L 2 (S 1 ), A 0 f = ∞ n=1 λ n f, ϕ n ϕ n , we define |A 0 | α for α ∈ R and R as follows: |A 0 | α f = ∞ n=1 |λ n | α f, ϕ n ϕ n , Rf = -2 n:λn<0 f, ϕ n ϕ n . 
By our assumptions and the polar decomposition, |A 0 | -1 is a symmetric operator on L 2 (S 1 ) with domain H 1 (S 1 ), and |A 0 | -1 ∈ L(H 1 (S 1 ), L 2 (S 1 )). Consequently, by complex interpolation we get |A 0 | -1 2 ∈ L H 1 2 (S 1 ), L 2 (S 1 ) . In a similar way, we obtain |A 0 | -1 2 ∈ L L 2 (S 1 ), H -1 2 (S 1 ) , |A 0 | 1 2 ∈ L L 2 (S 1 ), H 1 2 (S 1 ) . Thus, the operator |A 0 | -1 2 (A-A 0 )|A 0 | -1 2 is compact in L 2 (S 1 ). Hence, its eigenvalues converge to zero. Let us introduce the operators D, D 0 and ∆D by D := D 0 + ∆D, D 0 := Id + R, ∆D := |A 0 | -1 2 (A -A 0 )|A 0 | -1 2 , Then the eigenvalues of D converge to 1, and only finite number of eigenvalues of D in L 2 (S 1 ) can be negative. Applying Lemma 5.1 to the triple (D, A, |A 0 | 1 2 ), we get the first statement of the present lemma. Second part. At first, we show that there exists δ ′ such that if A -A 0 < δ ′ , then rk P A -≤ rk P A0 -. Here the norm is taken in L H -1 2 (S 1 ), H 1 2 (S 1 ) . Note that the spectrum of D 0 in L 2 (S 1 ) consists at most of the two points -1 and 1. Thus, the spectrum σ D of D satisfies σ D ⊆ [-1 -∆D , -1 + ∆D ] ∪ [1 -∆D , 1 + ∆D ], where ∆D is the norm of ∆D in L L 2 (S 1 ) . It follows that if x ∈ L D -, x 2 = 1, then Dx, x ≤ -1 + ∆D . On the other hand, Dx, x ≥ D 0 x, x -∆D ≥ -D 0 x -, x --∆D for x -= P D0 -x. It follows from the last two inequalities that D 0 x -, x -≥ 1 -2 ∆D . Thus, if ∆D < 1 2 , the mapping L D -→ L D0 -, x → P D0 -x is injective, so rk P D -≤ rk P D0 -. Using Lemma 5.1 to the triple (D, A, |A 0 | 1 2 ) and taking into account that rk P D0 -= rk P A0 -we also get that rk P A -≤ rk P A0 -. Moreover, there exists δ ′ = δ ′ (A 0 , Λ) such that if A -A 0 < δ ′ in the norm of L H -1 2 (S 1 ), H 1 2 ( S 1 ) , then ∆D < 1 2 and consequently, rk P A -≤ rk P A0 -. In addition, taking into account that |A 0 | -1 2 ∈ GL L 2 (S 1 ), H -1 2 (S 1 ) , we obtain that A is injective in H -1 2 (S 1 ) if A -A 0 < δ ′ . Applying Lemma 5.1 to the triple (A, A, Λ) and using the assumption that Λ is injective, we obtain that rk P A -≤ rk P A0 -if A -A 0 < δ ′ in L H -1 2 (S 1 ), H 1 2 (S 1 ) . Now let Σ be the union of circles of radius σ > 0 in C centered at negative eigenvalues of A 0 in L 2 (S 2 ). It follows from [START_REF] Kato | Perturbation Theory for Linear Operators[END_REF]Thm.3.16 p.212] that there exists δ ′′ = δ ′′ (A 0 , Λ, σ), δ ′′ < δ ′ , such that if A -A 0 < δ ′′ , then Σ also encloses rk P A0 - negative eigenvalues of A. Taking into account that rk P A -≤ rk P A0 -if A -A 0 < δ ′′ , we get that rk P A -= rk P A0 -. In addition, it follows from [21, Thm.3.16 p.212] that there exists δ ′′′ = δ ′′′ (A 0 , Λ, σ), δ ′′′ < δ ′′ , such that if A -A 0 < δ ′′′ , then P A --P A0 - < 1 2 . The second statement now follows from the following standard fact: Lemma 5.4. The following inequalities are valid: d(f, L A0 -) ≤ P A --P A0 - for all f ∈ L A -with f 2 = 1 and d(f, L A0 -) ≥ 1 -P A --P A0 - for all f ∈ L A + with f 2 = 1. Lemma 5.3 is proved. 6. Derivation of the uniqueness results. The proof of the uniqueness theorems will be based on the following two propositions: Proposition 6.1. For all κ ∈ L ∞ (Ω, R) and all ω > 0 the operator ℜG -ω 2 κ (ω) (resp. ℜ G -ω 2 κ (ω)) can have at most a finite number of negative eigenvalues in L 2 (∂Ω) (resp. L 2 (S d-1 )), multiplicities taken into account. 
In addition, for all M > 0 there exists a constant ω 0 = ω 0 (Ω, M ) such that for all κ satisfying κ ∞ ≤ M the condition (2.3) with v = -ω 2 κ is satisfied and the operator ℜG -ω 2 κ (ω) (resp. ℜ G -ω 2 κ (ω)) is positive definite on L 2 (∂Ω) (resp. L 2 (S d-1 )) if ω ∈ (0, ω 0 ]. Proof. Step 1. We are going to prove that ℜG -ω 2 κ (ω) can have only finite number of negative eigenvalues in L 2 (∂Ω) and, in addition, there exists ω ′ 0 = ω ′ 0 (Ω, M ) such that if ω ∈ (0, ω ′ 0 ], then ℜG -ω 2 κ (ω) is positive definite in L 2 (∂Ω) . Let E be defined according to (3.2). The operator E is positive definite in L 2 (∂D) for d ≥ 3, see, e.g., [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Cor.8.13]. For the case d = 2, the operator E r = E + 1 2π 1, • ln r is positive definite in L 2 ( E -ℜG -ω 2 κ (ω) ∈ K H -1 2 (∂Ω), H 1 2 (∂Ω) with E -ℜG -ω 2 κ (ω) ≤ ω 2 c ′ 2 (Ω, M ) κ ∞ + ω| ln ω|c ′ 3 (Ω) for all ω ∈ (0, min{k 2 (Ω, M ), k 3 (Ω)}), with the norm in L H -1 2 (∂Ω), H 1 2 (∂Ω) . Applying Lemma 5.3 to the triple ℜG -ω 2 κ (ω), E, Id , we find that ℜG -ω 2 κ (ω) can have at most finite number of negative eigenvalues in L 2 (∂Ω) and that there exists ω ′ 0 = ω ′ 0 (Ω, M ) such that ℜG -ω 2 κ (ω) is positive definite in L 2 (∂Ω) if ω ∈ (0, ω ′ 0 ]. d = 2. Let r > Cap ∂Ω . We have that E r ∈ GL H -1 2 (∂Ω), H 1 2 (∂Ω) ∩ GL L 2 (∂Ω), H 1 (∂Ω) . This follows from [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.7.18 & Thm.8.16]. Using Lemma 3.3 and Lemma 3.5 we also have that E r -ℜG -ω 2 κ (ω) ∈ K H -1 2 (∂Ω), H 1 2 (∂Ω) . Note that (6.1) ℜG -ω 2 κ (ω) -E r = ℜG -ω 2 κ (ω) -ℜG 0 (ω) + W (ω) -1 2π ln ωr 2 + γ 1, • 1, where W (ω), γ are defined according to (3.12). Fix r > Cap ∂Ω . Using Lemma 3.3, Lemma 3.5 and formula (6.1) we obtain that R(ω) -E r ≤ ω 2 | ln ω| 2 c ′ 2 (Ω, M ) κ ∞ + ω 2 | ln ω|c ′ 3 (Ω), with R(ω) := ℜG -ω 2 κ (ω) + 1 2π ln ωr 2 + γ 1, • 1 for all ω ∈ (0, min{k 2 (Ω, M ), k 3 (Ω)}) with the norm in L H -1 2 (∂Ω), H 1 2 (∂Ω) . Applying Lemma 5.3 to the triple R(ω), E r , Id , we find that R(ω) can have only finite number of negative eigenvalues in L 2 (∂Ω) and, in addition, there exists ω ′ 0 = ω ′ 0 (Ω, M, r) such that if ω ∈ (0, ω ′ 0 ], then R(ω) is positive definite in L 2 (∂Ω). Applying Lemma 5.2 to the pair of operators R(ω), ℜG -ω 2 κ (ω) we obtain that ℜG -ω 2 κ (ω) can have only finite number of negative eigenvalues in L 2 (∂Ω), since it is true for R(ω) and 1 2π ln ωr 2 + γ 1, • 1 is a rank one operator. Assuming, without loss of generality, that ω ′ 0 < 2 r e -γ , one can also see that the operator ℜG -ω 2 κ (ω) is positive definite for ω ∈ (0, ω ′ 0 ], as long as R(ω) is positive definite and -1 2π ln ωr 2 + γ 1, • 1 is non-negative definite. Step 2. Applying Lemma 5.1 to the triple (ℜG -ω 2 κ (ω), ℜ G -ω 2 κ (ω), T (ω)), we find that the operator ℜ G -ω 2 κ (ω) can have only finite number of negative eigenvalues in L 2 (S d-1 ) and, in addition, there exists ω 0 = ω 0 (Ω, M ), ω 0 < ω ′ 0 , such that ℜ G -ω 2 κ (ω) is positive definite in L 2 (S d-1 ) if ω ∈ (0, ω 0 ]. Proposition 6.2. Let v, v 0 ∈ L ∞ (Ω, R). Suppose that k > 0 is such that ℜG v0 (k) is injective in H -1 2 (∂Ω) and (2.16) holds true for v 0 . Moreover, let L v0 -denote the linear space spanned by the eigenfunctions of ℜ G v0 (k) corresponding to negative eigenvalues, and let f ∈ L 2 (S d-1 ) with f 2 = 1, be such that ℜ G v k)f = λf . 
Then for any σ > 0 there exists δ = δ(Ω, k, v 0 , σ) such that if v -v 0 ∞ ≤ δ, then 1. ℜG v (k) is injective in H -1 2 (∂Ω), 2. (2.16) holds true for v, 3. λ < 0 if and only if d(f, L v0 -) < 1 2 , 4. all negative eigenvalues of ℜ G v (k) in L 2 (S d-1 ) belong to the σ-neighborhood of negative eigenvalues of ℜ G v0 (k). Proof. Put A := ℜG v (k), A 0 := ℜG v0 (k), A := ℜ G v (k), A 0 := ℜ G v0 (k). It follows from Proposition 6.1 that rk P A0 -< ∞. Using Lemma 3.4 and the injectivity of A 0 in H -1/2 (∂Ω), we obtain that A 0 ∈ GL H -1/2 (∂Ω), H 1/2 (∂Ω) ∩ GL L 2 (∂Ω), H 1 (∂Ω) . It also follows from Lemma 3.3 that A -A 0 ∈ K H -1/2 (∂Ω), H 1/2 (∂Ω) . Applying Lemma 5.3 to the triple A, A 0 , T , we find that there exists δ ′ = δ ′ (Ω, k, v 0 ) such that if A-A 0 ≤ δ ′ in L H -1/2 (∂Ω), H 1/2 (∂Ω) , then rk P A -= rk P A0 -and A is injective in H -1/2 (∂Ω). Moreover, if Af = λf with f 2 = 1, then λ < 0 if and only if d(f, L v0 -) < 1 2 . Also note that, in view of Lemma 3.3, there exists δ = δ(Ω, k, v 0 ) such that if v -v 0 ∞ ≤ δ, then A -A 0 ≤ δ ′ . It remains to show that if v -v 0 ∞ ≤ δ for δ small enough and (2.16) holds true for v 0 , then it also holds true for v 0 . But this property follows from the upper semicontinuity of a finite number of eigenvalues of G v (k) with respect to perturbations (see [START_REF] Kato | Perturbation Theory for Linear Operators[END_REF]Thm.3.16 p.212]), from Lemma 3.3 and from the fact that G v (k) has at most a finite number of eigenvalues with negative real part (see Proposition 6.1 with -ω 2 κ = v, ω = k). Proposition 6.2 is proved. 6.1. Proof of Theorem 2.7. Let k > 0 and v 0 be the same as in the formulation of Theorem 2.7. It follows from Proposition 6.1 with v 0 = -ω 2 κ that the operator ℜ G v0 (k) can have only finite number of negative eigenvalues in L 2 (∂Ω), multiplicities taken into account. Let δ = δ(Ω, k, v 0 ) be choosen as in Proposition 6.2. Suppose that v 1 , v 2 are two functions satisfying the conditions of Theorem 2.7 and put A j := ℜ G vj (k), B j := ℑ G vj (k), j = 1, 2. By the assumptions of the present theorem, B 1 = B 2 . Together with Theorem 2.5 and formula (2.16) it follows that the operators A 1 and A 2 have a common basis of eigenfunctions in L 2 (S d-1 ) and that if A 1 f = λ 1 f , A 2 f = λ 2 f , for some f ∈ L 2 (S d-1 ), f 2 = 1, then (6.2) |λ 1 | = |λ 2 |. More precisely, any eigenbasis of B 1 is a common eigenbasis for A 1 and A 2 . It follows from Proposition 6.2 that λ 1 < 0 if and only if d(f, L v0 -) < 1 2 , and the same condition holds true for λ 2 . Hence, λ 1 < 0 if and only if λ 2 < 0. Thus, we have (6.3) A 1 = A 2 . Since by Theorem 2.5 (I) the operator T is injective with dense range the same is true for T * by [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm. 4.6]. Injectivity of T and (6.3) imply A 1 T * = A 2 T * . This equality, density of the range of T * and continuity of A 1 , A 2 now imply that [START_REF] Novikov | A multidimensional inverse spectral problem for the equation -∆ψ + (v(x) -Eu(x))ψ = 0, Funktsional[END_REF][START_REF] Berezanskii | On the uniqueness theorem in the inverse problem of spectral analysis for the Schrödinger equation[END_REF]. In turn, property (2.3) for v = v j follows from injectivity of ℜG vj (k) in H -1/2 (∂Ω) (see Proposition 6.2). This completes the proof of Theorem 2.7. A 1 = A 2 and hence G v1 (k) = G v2 (k). 
Now we can use that fact that G v1 (k) = G v2 (k) implies v 1 = v 2 if (2.3) holds true for v = v 1 and v = v 2 , see 6.2. Proof of Theorem 2.11. Let ω 0 = ω 0 (Ω, M ) be as in Proposition 6.1 and let ω ∈ (0, ω 0 ] be fixed. Put A j := ℜ G -ω 2 κj (ω), B j := ℑ G -ω 2 κj (ω), j = 1, 2. It follows from Proposition 6.1 that all the eigenvalues of G -ω 2 κ (ω) have positive real parts so that condition (2.16) is valid. As in the proof of Theorem 2.7 one can show that A 1 , A 2 have a common basis of eigenfunctions in L 2 (S d-1 ) (any eigenbasis of B 1 is a common eigenbasis for A 1 and A 2 ) and the relation (6.2) holds. In view of Proposition 6.1 we also have that λ 1 > 0, λ 2 > 0 such that λ 1 = λ 2 . Thus, (6.3) holds true. Starting from equality (6.3) and reasoning as in the end of the proof of Theorem 2.7, we obtain that κ 1 = κ 2 , completing the proof of Theorem 2.11. 7. Discussion of the assumptions of Theorem 2.7. The aim of this section is to present results indicating that the assumptions of Theorem 2.7 are always satisfied except for a discrete set of exceptional parameters. As a first step we characterize the adjoint operator G * v (k) as a farfield operator for the scattering of distorted plane waves at Ω with Dirichlet boundary conditions. Note that, in particular, - (d, k) G * v (k)g = u ∞ for any g ∈ L 2 (S d-1 ) where u ∞ ∈ L 2 (S d-1 ) is the farfield pattern of the solution u to the exterior boundary value problem (2.11) with boundary values u 0 (x) = S d-1 ψ + v (x, -kω)g(ω) ds(ω), x ∈ ∂Ω. Proof. It follows from the definition of operators H v , T , R in subsection 4.2 that u 0 = H v Rg and u ∞ = k -1/2 T H v Rg. Using eq. (4.4) in Lemma 4.1 we also find that √ k c 2 (d, k)H v R = G * v T * . Hence, k c 2 (d, k)u ∞ = T G * v T * g = G * v g. Lemma 7.2. Let Ω satisfy (2. 2) and suppose that Ω is stricty starlike in the sense that xν x > 0 for all x ∈ ∂Ω, where ν x is the unit exterior normal to ∂Ω at x. Let v ∈ L ∞ (Ω, R) and let k > 0 be such that k 2 is not a Dirichlet eigenvalue of -∆ in Ω. Then there exist M = M (k, Ω) > 0, ε = ε(k, Ω) > 0, such that if v ∞ ≤ M , then G v (ξ) satisfies (2.16) for all but a finite number of ξ ∈ [k, k + ε). Proof. Part I. We first consider the case v = 0. It follows from Lemma 7.1 together with the equality ψ + 0 (x, kω) = e ikωx that the operator G * 0 (k) is the farfield operator for the classical obstacle scattering problem with obstacle Ω. Moreover, S(k) := Id -2i G * 0 (k), is the scattering matrix in the sense of [START_REF] Helton | The first variation of the scattering matrix[END_REF]. It follows from [18, (2.1) and the remark after (1.9)] that all the eigenvalues λ = 1 of S(k) move in the counter-clockwise direction on the circle |z| = 1 in C continuously and with strictly positive velocities as k grows. More precisely, if λ(k) = e iβ(k) , λ(k) = 1 is an eigenvalue of S(k) corresponding to the normalized eigenfunction g(•, k), then ∂β ∂k (k) = 1 4π k 2π d-2 ∂Ω ∂f ∂ν x (x, k) 2 xν x ds(x), f (x, k) = S d-1 g(θ, k) e -ikθx -u(x, θ) ds(θ), x ∈ R d \ Ω, where u(x, θ) is the solution of problem (2.11) with u 0 (x) = e -ikθx (note that [START_REF] Helton | The first variation of the scattering matrix[END_REF] uses a different sign convention in the radiation condition (2.11c) resulting in a different sign of ∂β/∂k). It follows from this formula that ∂β ∂k (k) > 0: 1. the term xη x is positive by assumption, 2. ∂f ∂νx cannot vanish on ∂Ω identically. 
Otherwise, f vanishes on the boundary together with its normal derivative, and Huygens' principle (see [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm.3.14]) implies that the scattered field S d-1 g(θ, k)u(x, θ) ds(θ) vanishes identically, so that f is equal to f (x, k) = S d-1 g(θ, k)e -ikθx ds(θ). One can see from this formula that f extends uniquely to an entire solution of -∆f = k 2 f . Moreover, f is a Dirichlet eigenfunction for Ω and it implies that f is identically zero, because k 2 is not a Dirichlet eigenvalue of -∆ in Ω by assumption. Now it follows [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm.3.19] that the Herglotz kernel g(•, k) of f vanishes, but it contradicts the fact that g(•, k) is a normalized eigenfunction of S(k). It follows that all the non-zero eigenvalues of G * 0 (k) move continuously in the clockwise direction on the circle |z + i/2| = 1/2 in C with non-zero velocities as k grows. Moreover, since G * 0 (k) is compact in L 2 (S d-1 ) (see Theorem 2.5), it follows that z = 0 is the only accumulation point for eigenvalues of G * 0 (k). This together with Proposition 6.1 for κ = 0, ω = k, implies that there exist δ(k, Ω) > 0, ε(k, Ω) > 0 such that all the eigenvalues λ of G * 0 (ξ) with ℜλ < 0 belong to the half plane ℑz < -δ for ξ ∈ [k, k + ε). This proves Lemma 7.2 with v = 0 if we take into account that the eigenvalues of G * 0 (k) and G 0 (k) are related by complex conjugation. Part II. Let k be such that (2.16) holds true for v = 0 and choose δ(k, Ω), ε(k, Ω) as in the first part of the proof. Now let v ∈ L ∞ (Ω, R). It follows from Proposition 6.2 that for any σ > 0 there exists M = M (ξ, σ) such that if v ∞ ≤ M , then G * v (ξ) has a finite number of eigenvalues λ with ℜλ < 0, multiplicities taken into account, and these eigenvalues belong to the σ-neighborhood of the eigenvalues of G * 0 (ξ) if ξ ∈ [k, k + ε). In addition, M (ξ, σ) can be choosen depending continuously on ξ. Hence, (2.16) holds true for v if it holds true for v = 0 and σ is sufficiently small. This together with part I finishes the proof of Lemma 7.2 for a general v if we take into account that the eigenvalues of G * v (k) and G v (k) are related by complex conjugation. Remark 7.3. It follows from analytic Fredholm theory (see, e.g., [START_REF] Gokhberg | An operator generalization of the logarithmic residue theorem and the theorem of Rouché[END_REF]Cor. 3.3]) and Lemma 3.4 below that the condition that ℜG v0 (k) be injective in H -1/2 (∂Ω) is "generically" satisfied. More precisely, it is either satisfied for all but a discrete set of k > 0 without accumulation points or it is violated for all k > 0. Applying analytic Fredholm theory again to z → ℜG z 2 v0 (zk) and taking into account Proposition 6.1, we see that the latter case may at most occur for a discrete set of z > 0 without accumulation points. Conclusions. In this paper we have presented, in particular, first local uniqueness results for inverse coefficient problems in wave equations with data given the imaginary part of Green's function on the boundary of a domain at a fixed frequency. In the case of local helioseismology it implies that small deviations of density and sound speed from the solar reference model are uniquely determined by correlation data of the solar surface within the given model. 
The algebraic relations between the real and the imaginary part of Green's function established in this paper can probably be extended to other wave equations. An important limitation of the proposed technique, however, is that it is not applicable in the presence of absorption. To increase the relevance of uniqueness results as established in this paper to helioseismology and other applications, many of the improvements achieved for standard uniqueness results would be desirable: This includes stability results or even variational source conditions to account for errors in the model and the data, the use of many and higher wave numbers to increase stability, and results for data given only on part of the surface. 4 even where z = k|x -y| and O d and O d are entire functions with O d (0) = 0 = O d (0), and γ is the Euler-Mascheroni constant, see, e.g., [22, p.279]. These formulas imply the second statement of the present lemma for v = 0.Using the resolvent identity (3.3) we obtain 3 2 ( 2 ∂Ω) . 2 ( 2 ∂Ω) . The estimates also follow from (3.5), taking into account that k ln k = o(1) as k ց 0. 4. Derivation of the relations between ℜG v and ℑG v . 4.1. Proof of Theorem 2.4. It follows from Lemma 3.4 for Remark 7 . 4 . 74 In the particular case of v 0 = 0, Ω = {x ∈ R d : |x| ≤ R}, d = 2, 3, the injectivity of ℜG v0 (k) in H -1/2 (∂Ω) is equivalent to the following finite number of inequalities:(7.1) j l (kR) = 0 and y l (kR) = 0 for 0 ≤ l < kR -π 2 , d = 3, J l (kR) = 0 and Y l (kR) = 0 for 0 ≤ l < kR -π-1 2 , d = 2where j l , y l are the spherical Bessel functions and J l , Y l are the Bessel functions of integer order l. The reason is that the eigenvalues of ℜG v0 (k) are explicitly computable in this case, see, e.g.,[9, p.104 & p.144]. Problem 2.1. Determine the coefficient v in the Schrödinger equation (1.2) from ℑG + v (x, y, k) given at all x, y ∈ ∂Ω, at fixed k. As discussed in the introduction, mathematical approaches to Problem 2.1 are not yet well developed in the literature in contrast with the case of the following inverse problem from G + v (and not only from ℑG + v ): Problem 2.2. Determine the coefficient v in the Schrödinger equation (1.2) from G + v (x, y, k) given at all x, y ∈ ∂Ω, at fixed k. see, e.g.,[START_REF] Agmon | Spectral properties of Schrödinger operators and scattering theory[END_REF] Thm.4.2]. In turn, it is known that for the operator -∆ + v(x) with v satisfying (2.1) there are no embedded eigenvalues, see[START_REF] Hörmander | The Analysis of Linear Partial Differential Operators II[END_REF] Thm.14.5.5 & 14.7.2].Recall that the free radiating Green's function is given in terms of the Hankel functions H (1) ν of the first kind of order ν by (3.1) Now, since v satisfies (2.1), operator -∆ + v(x) has no embedded point spectrum in L 2 (R d ) according to[START_REF] Agmon | Spectral properties of Schrödinger operators and scattering theory[END_REF] Thm.4.2] and[START_REF] Hörmander | The Analysis of Linear Partial Differential Operators II[END_REF] Thm.14.5.5 & 14.7.2]. It follows that R + 1 2 (∂Ω), H s+ 1 2 (∂Ω) if and only if k 2 is not a Dirichlet eigenvalue of -∆ + v(x). ∂Ω) if and only if r > Cap ∂Ω , where Cap ∂Ω denotes the capacity of ∂Ω, see, e.g.,[START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF] Thm.8.16]. We consider the cases d ≥ 3 and d = 2 separately. This follows from[START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF] Thm.7.17 & Cor.8.13]. 
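Remark 7.4 reduces the injectivity assumption on ℜG_0(k) for the ball (d = 3) or disk (d = 2) of radius R to the finitely many non-vanishing conditions (7.1) on Bessel functions. The following small check (Python with SciPy; an illustration of the remark, not code from the paper, and the example wave numbers are ours) tests whether a given pair (k, R) satisfies (7.1):

import numpy as np
from scipy.special import spherical_jn, spherical_yn, jv, yv

def condition_71_holds(k, R, d, tol=1e-9):
    """Check the finitely many inequalities of Remark 7.4 / formula (7.1).

    d = 3: j_l(kR) != 0 and y_l(kR) != 0 for 0 <= l < kR - pi/2,
    d = 2: J_l(kR) != 0 and Y_l(kR) != 0 for 0 <= l < kR - (pi - 1)/2.
    Values of modulus below `tol` are treated as numerical zeros.
    """
    z = k * R
    if d == 3:
        orders = [l for l in range(int(np.ceil(z)) + 1) if l < z - np.pi / 2.0]
        vals = [(spherical_jn(l, z), spherical_yn(l, z)) for l in orders]
    elif d == 2:
        orders = [l for l in range(int(np.ceil(z)) + 1) if l < z - (np.pi - 1.0) / 2.0]
        vals = [(jv(l, z), yv(l, z)) for l in orders]
    else:
        raise ValueError("Remark 7.4 covers d = 2, 3 only")
    return all(abs(a) > tol and abs(b) > tol for a, b in vals)

# Scan a few wave numbers on the unit ball / unit disk (R = 1).
# 4.4934... is close to the first positive zero of the spherical Bessel function j_1,
# so condition (7.1) is expected to fail there for d = 3.
for k in (1.0, 2.5, np.pi, 4.49340946):
    print(f"k = {k:.5f}:  d=3 -> {condition_71_holds(k, 1.0, 3)},  d=2 -> {condition_71_holds(k, 1.0, 2)}")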
Using Lemma 3.3 and Lemma 3.5 we also get that d ≥ 3. We have that E ∈ GL H -1 2 (∂Ω), H 1 2 (∂Ω) ∩ GL L 2 (∂Ω), H 1 (∂Ω) . Lemma 7.1. Let v satisfy (2.1) and consider ψ + v and c 2 as defined as in subsection 4.2. Then we have kc 2 1 kc2(d,k) a standard farfield operator for Dirichlet scattering at Ω (see e.g. [8, §3.3]). 0 (k) is G *
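Finally, the explicit free-space kernel (3.1) and the small-k expansions (3.4)-(3.5) recalled in Section 3 lend themselves to a quick numerical sanity check. The sketch below (Python with NumPy/SciPy; purely illustrative and not part of the argument) evaluates (3.1) for d = 2, 3, confirms that for d = 3 it reduces to the classical outgoing kernel exp(ikr)/(4πr), and shows that ℑG_0^+ stays bounded as k → 0 while ℜG_0^+ approaches the Laplace kernel E, the behaviour used in Lemmas 3.2 and 3.5:

import numpy as np
from scipy.special import hankel1

def green0(r, k, d):
    """Free radiating Green's function G_0^+ of the Helmholtz equation at r = |x - y| > 0,
    following formula (3.1) with nu = d/2 - 1."""
    nu = d / 2.0 - 1.0
    return 0.25j * (k / (2.0 * np.pi * r)) ** nu * hankel1(nu, k * r)

r = np.linspace(0.1, 2.0, 50)
k = 3.7

# d = 3: formula (3.1) must coincide with exp(ikr) / (4 pi r).
g3 = green0(r, k, 3)
closed_form = np.exp(1j * k * r) / (4.0 * np.pi * r)
assert np.allclose(g3, closed_form), "mismatch with the classical 3-D kernel"

# Small-k behaviour at fixed r = 0.5: by (3.4) with nu = 1/2, Im G_0^+ ~ k/(4 pi),
# while Re G_0^+ tends to the Laplace kernel E(r) = 1/(4 pi r) by (3.5).
for kk in (1e-1, 1e-2, 1e-3):
    g = green0(0.5, kk, 3)
    print(f"k={kk:7.0e}  Im G0={g.imag: .3e}  k/(4pi)={kk/(4*np.pi): .3e}  "
          f"Re G0={g.real: .3e}  E(0.5)={1/(4*np.pi*0.5): .3e}")

# d = 2: the kernel is (i/4) H_0^(1)(kr); its imaginary part is (1/4) J_0(kr), bounded by 1/4.
g2 = green0(r, k, 2)
print("max |Im G0| in 2-D:", np.abs(g2.imag).max())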
62,913
[ "868128" ]
[ "481087", "155225", "481087", "89626" ]
01710144
en
[ "info" ]
2024/03/05 22:32:13
2018
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01710144v2/file/role-location-social-strength-friendship-prediction-lbsn-ipm-jorgeValverde.pdf
Jorge C Valverde-Rebaza Mathieu Roche Pascal Poncelet Alneu De Andrade Lopes The role of location and social strength for friendship prediction in location-based social networks Keywords: Location-based social networks, Link prediction, Friendship recommendation, Human mobility, User behavior Recent advances in data mining and machine learning techniques are focused on exploiting location data. There, combined with the increased availability of location-acquisition technology, has encouraged social networking services to offer to their users different ways to share their location information. These social networks, called location-based social networks (LBSNs), have attracted millions of users and the attention of the research community. One fundamental task in the LBSN context is the friendship prediction due to its role in different applications such as recommendation systems. In the literature exists a variety of friendship prediction methods for LBSNs, but most of them give more importance to the location information of users and disregard the strength of relationships existing between these users. The contributions of this article are threefold, we: 1) carried out a comprehensive survey of methods for friendship prediction in LBSNs and proposed a taxonomy to organize the existing methods; 2) put forward a proposal of five new methods addressing gaps identified in our survey while striving to find a balance between optimizing computational resources and improving the predictive power; and 3) used a comprehensive evaluation to quantify the prediction abilities of ten current methods and our five proposals and selected the top-5 friendship prediction methods for LBSNs. We thus present a general panorama of friendship prediction task in the LBSN domain with balanced depth so as to facilitate research and real-world application design regarding this important issue. Introduction In the real world, many social, biological, and information systems can be naturally described as complex networks in which nodes denote entities (individuals or organizations) and links represent different interactions between these entities. A social network is a complex network in which nodes represent people or other entities in a social context, whilst links represent any type of relationship among them, like friendship, kinship, collaboration or others [START_REF] Barabási | Network Science[END_REF]. With the growing use of Internet and mobile devices, different web platforms such as Facebook, Twitter and Foursquare implement social network environments aimed at providing different services to facilitate the connection between individuals with similar interests and behaviors. These platforms, also called as online social networks (OSNs), have become part of the daily life of millions of people around the world who constantly maintain and create new social relationships [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Yu | Friend recommendation with content spread enhancement in social networks[END_REF]. 
OSNs providing location-based services for users to check-in in a physical place are called location-based social networks (LBSNs) [START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF][START_REF] Zhu | Understanding the adoption of location-based recommendation agents among active users of social networking sites[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Ozdikis | Evidential estimation of event locations in microblogs using the dempstershafer theory[END_REF]. One fundamental problem in social network analysis is link prediction, which aims to estimate the likelihood of the existence of a future or missing link between two disconnected nodes based on the observed network information [START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Martínez | A survey of link prediction in complex networks[END_REF][START_REF] Wu | A balanced modularity maximization link prediction model in social networks[END_REF]. In the case of LBSNs, the link prediction problem should be dealt with by considering the different kinds of links [START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Li | Mining user similarity based on location history[END_REF]. Therefore, it is called friendship prediction when the objective is to predict social links, i.e. links connecting users, and location prediction when the focus is to predict userlocation links, i.e. links connecting users with places [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Wang | A recommender system research based on location-based social networks[END_REF][START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF]. Since location information is a natural source in LBSNs, several techniques have been proposed to deal with the location prediction problem [START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Zheng | Computing with Spatial Trajectories[END_REF]. However, to the best of our knowledge no studies have analyzed the performance of friendship prediction methods in the LBSN domain. In this paper, we review existing friendship prediction methods in the LBSN domain. Moreover, we organize the reviewed methods according to the different information sources used to make their predictions. We also analyze the different gaps between these methods and then propose five new friendship prediction methods which more efficiently explore the combination of the different identified information sources. Finally, we perform extensive experiments on well-known LBSNs and analyze the performance of all the friendship prediction methods studied not only in terms of prediction accuracy, but also regarding the quality of the correctly predicted links. Our experimental results highlight the most suitable friendship prediction methods to be used when real-world factors are considered. The remainder of this paper is organized as follows. Section 2 briefly describes the formal definition of an LBSN. Section 3 formally explains the link prediction problem and how it is dealt with in the LBSN domain. 
This section also presents a survey of different friendship prediction methods from the literature. Section 4 presents our proposals with a detailed explanation on how they exploit different information sources to improve the friendship prediction accuracy. Section 5 shows experimental results obtained by comparing the efficiency of existing friendship prediction methods against our proposals. Finally, Section 6 closes with a summary of our main contributions and final remarks. Location-Based Social Networks A location-based social network (LBSN), also referred to as geographic social network or geo-social network, is formally defined as a specific type of social networking platform in which geographical services complement traditional social networks. This additional information enables new social dynamics, including those derived from visits of users to the same or similar locations, in addition to knowledge of common interests, activities and behaviors inferred from the set of places visited by a person and the location-tagged data generated during these visits [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Allamanis | Evolution of a location-based online social network: Analysis and models[END_REF][START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Narayanan | A study and analysis of recommendation systems for location-based social network (LBSN) with big data[END_REF]. Formally, we represent an LBSN as an undirected network G(V, E, L, Φ), where V is the set of users, E is the set of edges representing social links among users, L is the set of different places visited by all users, and Φ is the set of check-ins representing connections between users and places. This representation reflects the presence of two types of nodes: users and locations, and two kinds of links: user-user (social links) and user-location (check-ins), which is an indicator of the heterogeneity of LBSNs [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Zhang | Transferring heterogeneous links across location-based social networks[END_REF] Multiple links and self-connections are not allowed in the set E of social links. On the other hand, only self-connections are not allowed in the set Φ of check-ins. Since a user can visit the same place more than once, the presence of multiple links connecting users and places is possible if a temporal factor is considered. Therefore, a check-in is defined as a tuple θ = (x, t, ), where x ∈ V , t is the check-in time, and ∈ L. Clearly, θ ∈ Φ and |Φ| defines the total number of check-ins made by all users. Link Prediction In this section, we formally describe the link prediction problem and how this mining task is addressed in the LBSN domain. Moreover, we also review a selected number of friendship prediction methods for LBSNs. 
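Before formalizing the prediction task, note that the model G(V, E, L, Φ) and the check-in tuples θ = (x, t, ℓ) just described translate directly into a few lightweight containers. The following sketch (Python; class and field names are ours, chosen only for illustration, and the toy data is made up) builds the per-user and per-place check-in indexes that the methods reviewed below rely on:

from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class CheckIn:
    user: str      # x in V
    time: float    # t, e.g. a Unix timestamp
    place: str     # l in L

class LBSN:
    """Minimal container for G(V, E, L, Phi) as defined in the text."""
    def __init__(self, friendships, checkins):
        self.E = {frozenset(p) for p in friendships}   # undirected social links, no self-loops assumed
        self.Phi = list(checkins)                      # all check-ins
        self.V = {c.user for c in checkins} | {u for p in self.E for u in p}
        self.L = {c.place for c in checkins}
        self.phi_user = defaultdict(list)              # Phi(x): check-ins of user x
        self.phi_place = defaultdict(list)             # Phi(l): check-ins at place l
        for c in checkins:
            self.phi_user[c.user].append(c)
            self.phi_place[c.place].append(c)

    def places_of(self, x):
        """Phi_L(x): distinct places visited by user x."""
        return {c.place for c in self.phi_user[x]}

# Toy example with three users and three places.
checkins = [CheckIn("ana", 10.0, "cafe"), CheckIn("bob", 12.0, "cafe"),
            CheckIn("ana", 50.0, "park"), CheckIn("eve", 51.0, "park"),
            CheckIn("bob", 90.0, "gym")]
g = LBSN(friendships=[("ana", "bob")], checkins=checkins)
print(sorted(g.places_of("ana")), len(g.Phi), sorted(g.L))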
Problem Description Link prediction is a fundamental problem in complex network analysis [START_REF] Barabási | Network Science[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF], and hence in social network analysis [START_REF] Wu | A balanced modularity maximization link prediction model in social networks[END_REF][START_REF] Liu | Network growth and link prediction through an empirical lens[END_REF][START_REF] Valverde-Rebaza | Exploiting behaviors of communities of Twitter users for link prediction[END_REF][START_REF] Shahmohammadi | Presenting new collaborative link prediction methods for activity recommendation in Facebook[END_REF]. Formally, the link prediction problem aims at predicting the existence of a future or missing link among all possible pairs of nodes that have not established any connection in the current network structure [START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF]. Consider as a potential link any pair of disconnected users x, y ∈ V such that (x, y) ∉ E. U denotes the universal set containing all potential links between pairs of nodes in V, i.e. |U| = |V| × (|V| - 1)/2 for an undirected network. Also consider a missing link to be any potential link in the set of nonexistent links U - E. The fundamental link prediction task here is thus to detect the missing links in the set of nonexistent links, while scoring each link in this set. Thus, a predicted link is any potential link that has received a score above zero as determined by a link prediction method. The higher the score, the more likely the link is to exist [START_REF] Barabási | Network Science[END_REF][START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF][START_REF] Martínez | A survey of link prediction in complex networks[END_REF]. From the set of all predicted links, L_p, obtained by use of a link prediction method, we take the set of true positives (TP) to be all correctly predicted links, and the set of false positives (FP) to be the wrongly predicted links. Thus, L_p = TP ∪ FP. Moreover, the set of false negatives (FN) is formed by all truly new links that were not predicted. Therefore, evaluation measures such as the imbalance ratio (IR) can be used, as well as the harmonic mean of precision and recall, the F-measure, defined as F1 = 2 × (P × R)/(P + R) [START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF][START_REF] Pham | Ebm: An entropy-based model to infer social strength from spatiotemporal data[END_REF]. However, most research on link prediction considers that these evaluation measures do not give a clear judgment of the quality of the predictions. For instance, a correctly predicted link might not be counted as a true positive if the link prediction method assigns it a low score. To avoid this issue, two standard evaluation measures are used, AUC and precision@L [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF]. The area under the receiver operating characteristic curve (AUC) is defined as AUC = (n1 + 0.5 × n2)/n, where, from a total of n independent comparisons between pairs of positively and negatively predicted links, n1 is the number of times the positively predicted links were given higher scores than the negatively predicted links, whilst n2 is the number of times they were given equal scores. If the scores are generated from an independent and identical distribution, the AUC should be about 0.5; thus, the extent to which AUC exceeds 0.5 indicates how much better the link prediction method performs than pure chance. On the other hand, precision@L is computed as precision@L = L_r/L, where L_r is the number of correctly predicted links among the L top-ranked predicted links.
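These evaluation measures are straightforward to compute from a list of scored candidate links. A minimal sketch (Python; links are represented as unordered user pairs, the AUC is estimated by the random comparisons described above, and all function names are ours):

import random

def f_measure(tp, fp, fn):
    """F1 = 2*P*R/(P+R) with P = TP/(TP+FP) and R = TP/(TP+FN)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def auc(scores, positives, negatives, n=10_000, seed=0):
    """AUC = (n1 + 0.5*n2)/n over n random comparisons of a truly new link
    against a link that never appears, as described in the text."""
    rng = random.Random(seed)
    n1 = n2 = 0
    for _ in range(n):
        sp = scores.get(rng.choice(positives), 0.0)
        sn = scores.get(rng.choice(negatives), 0.0)
        if sp > sn:
            n1 += 1
        elif sp == sn:
            n2 += 1
    return (n1 + 0.5 * n2) / n

def precision_at_L(scores, positives, L):
    """precision@L = L_r / L over the L top-ranked predicted links."""
    top = sorted(scores, key=scores.get, reverse=True)[:L]
    return sum(1 for link in top if link in positives) / L

# Tiny usage example with made-up scores.
scores = {frozenset(p): s for p, s in [(("a", "b"), 3.0), (("a", "c"), 1.0), (("b", "d"), 0.5)]}
pos = [frozenset(("a", "b")), frozenset(("b", "d"))]   # links that did appear later
neg = [frozenset(("a", "c")), frozenset(("c", "d"))]   # links that never appeared
print(auc(scores, pos, neg), precision_at_L(scores, set(pos), 2), f_measure(tp=2, fp=1, fn=0))

In practice the set of nonexistent links U - E is far larger than E, which is why AUC is estimated from sampled comparisons rather than by enumerating all pairs.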
Friendship Prediction in LBSNs LBSNs provide services to their users to enable them to take better advantage of different resources within a specific geographical area, so the quality of such services can substantially benefit from improvements in link prediction [START_REF] Zhu | Understanding the adoption of location-based recommendation agents among active users of social networking sites[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Therefore, considering the natural heterogeneity of LBSNs, the link prediction problem for this type of network must consider its two kinds of links [START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Zheng | Computing with Spatial Trajectories[END_REF], i.e. friendship prediction involves predicting user-user links [START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF][START_REF] Xu-Rui | An algorithm for friendship prediction on location-based social networks[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF] whilst location prediction focuses on predicting user-location links [START_REF] Wang | A recommender system research based on location-based social networks[END_REF][START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF][START_REF] Mcgee | Location prediction in social media based on tie strength[END_REF]. Friendship prediction is a traditional link prediction application, providing users with potential friends based on their relationship patterns and the social structure of the network [START_REF] Yu | Friend recommendation with content spread enhancement in social networks[END_REF]. Friendship prediction has been widely explored in LBSNs since it is possible to use traditional link prediction methods, such as common neighbors, Adamic-Adar, Jaccard, resource allocation and preferential attachment, which are commonly applied and have been extensively studied in traditional social networks [START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Martínez | A survey of link prediction in complex networks[END_REF].
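For reference, these traditional topological scores depend only on the friendship graph. A compact sketch (Python; `adj` maps each user to the set of current friends, and the toy graph is illustrative):

import math

def common_neighbors(adj, x, y):
    return len(adj[x] & adj[y])

def adamic_adar(adj, x, y):
    # skip degree-1 common neighbours to avoid log(1) = 0 in the denominator
    return sum(1.0 / math.log(len(adj[z])) for z in adj[x] & adj[y] if len(adj[z]) > 1)

def jaccard(adj, x, y):
    union = adj[x] | adj[y]
    return len(adj[x] & adj[y]) / len(union) if union else 0.0

def resource_allocation(adj, x, y):
    return sum(1.0 / len(adj[z]) for z in adj[x] & adj[y] if adj[z])

def preferential_attachment(adj, x, y):
    return len(adj[x]) * len(adj[y])

# Toy friendship graph; in a real LBSN this is the user-user part of G(V, E, L, Phi).
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
for name, f in [("CN", common_neighbors), ("AA", adamic_adar), ("Jac", jaccard),
                ("RA", resource_allocation), ("PA", preferential_attachment)]:
    print(name, round(f(adj, "a", "d"), 3))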
However, as location information is a natural resource in LBSNs, different authors have proposed friendship prediction methods to exploit it. Therefore, some methods use geographical distance [START_REF] Zhang | Distance and friendship: A distance-based model for link prediction in social networks[END_REF], GPS and/or check-in history [START_REF] Kylasa | Social ties and checkin sites: connections and latent structures in location-based social networks[END_REF], location semantics (tags, categories, etc.) [START_REF] Bayrak | Examining place categories for link prediction in location based social networks[END_REF] and other mobility user patterns [START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Pham | Ebm: An entropy-based model to infer social strength from spatiotemporal data[END_REF][START_REF] Xiao | Inferring social ties between users with human location history[END_REF] as information sources to improve the effectiveness of friendship prediction in LBSNs. The friendship prediction task in LBSNs is still an open issue where there are constant advances and new challenges. Furthermore, the importance of the friendship prediction task is not only due to its well-known application in friendship recommendation systems, but also because it opens doors to new research and application issues, such as companion prediction [START_REF] Liao | Who wants to join me?: Companion recommendation in location based social networks[END_REF], local expert prediction [START_REF] Cheng | Who is the Barbecue King of Texas?: A Geo-spatial Approach to Finding Local Experts on Twitter[END_REF][START_REF] Liou | Design of contextual local expert support mechanism[END_REF][START_REF] Niu | On local expert discovery via geo-located crowds, queries, and candidates[END_REF], user identification [START_REF] Rossi | It's the way you check-in: Identifying users in location-based social networks[END_REF][START_REF] Riederer | Linking users across domains with location data: Theory and validation[END_REF] and others. Friendship Prediction Methods for LBSNs Most existing link prediction methods are based on specific measures that capture similarity or proximity between nodes. Due to their low computational cost and easy calculation, link prediction methods based on similarity are candidate approaches for real-world applications [START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Xu-Rui | Using multi-features to recommend friends on location-based social networks[END_REF][START_REF] Narayanan | A study and analysis of recommendation systems for location-based social network (LBSN) with big data[END_REF]. Although there is abundant literature related to friendship prediction in the LBSN context, the current literature lacks a well-organised and clearly explained taxonomy of the existing methods. For the sake of clearly arranging these existing methods, this study proposes a taxonomy for friendship prediction methods for LBSNs based on the information sources used to perform their predictions. Figure 1 shows the proposed taxonomy. Friendship prediction methods for LBSNs use three information sources to compute the similarity between a pair of users: check-in, place, and social information. In turn, each information source has specific similarity criteria. Therefore, methods based on check-in information explore the frequency of visits at specific places and information gain. Methods based on place information commonly explore the number of user visits, regardless of frequency, to distinct places, as well as the geographical distance between places. Finally, methods based on social information explore the social strength among users visiting the same places. Here, we will give a systematic explanation of popular methods for friendship prediction in LBSNs belonging to each one of the proposed categories. Methods based on Check-in Information User mobility behaviors can be analyzed when the time and geographical information about the visited locations are recorded at check-ins.
The number of check-ins may be an indicator of users' preference for visiting a specific type of place and, therefore, a key to establishing new friendships. Two of the most common similarity criteria used by methods based on check-in information are check-in frequency and information gain. Methods based on check-in frequency consider that the more check-ins two users have made at the same places, the more likely they are to establish a friendship. Some representative methods based on check-in frequency are collocation, distinct collocation, Adamic-Adar of places and preferential attachment of check-ins, among others [START_REF] Cranshaw | Bridging the gap between physical location and online social networks[END_REF][START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Below, we present the definition of two well-known friendship prediction methods for LBSNs based on check-in frequency. Collocation (Co). This is one of the most popular methods based on check-in frequency. The collocation method, also referred to as the number of collocations or common check-in count, expresses the number of times that users x and y visited some location during the same period of time. Thus, for a pair of disconnected users x and y, and considering a temporal threshold τ ∈ R, the Co method is defined as: s^Co_{x,y,τ} = |Φ^Co(x, y, τ)|, (1) where Φ^Co(x, y, τ) = {(x, y, t_x, t_y, ℓ) | (x, t_x, ℓ) ∈ Φ(x) ∧ (y, t_y, ℓ) ∈ Φ(y) ∧ |t_x − t_y| ≤ τ} is the set of check-ins made by both users x and y at the same place and over the same period of time, and Φ(x) = {(x, t, ℓ) | x ∈ V : (x, t, ℓ) ∈ Φ} is the set of check-ins made by user x at different places. Adamic-Adar of Places (AAP). This is based on the traditional Adamic-Adar method but considers the number of check-ins at the places commonly visited by users x and y. Thus, for a pair of users x and y, AAP is computed as: s^AAP_{x,y} = Σ_{ℓ ∈ Φ_L(x,y)} 1 / log |Φ(ℓ)|, (2) where Φ_L(x, y) = Φ_L(x) ∩ Φ_L(y) is the set of places commonly visited by users x and y, Φ_L(x) = {ℓ | ∀ℓ ∈ L : (x, t, ℓ) ∈ Φ(x)} is the set of distinct places visited by user x, and Φ(ℓ) = {(x, t, ℓ) | ℓ ∈ L : (x, t, ℓ) ∈ Φ} is the set of check-ins made by different users at location ℓ. Although the number of check-ins may be a good indicator for the establishment of friendship between users, the fact that they have many check-ins at the visited places may, on the contrary, reduce their chances of getting to know each other. To avoid this situation, some researchers have used the information gain of places as a resource to better discriminate whether a certain place is relevant to the formation of social ties between its visitors [START_REF] Cranshaw | Bridging the gap between physical location and online social networks[END_REF][START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF][START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Some methods based on information gain of places are min entropy, Adamic-Adar of entropy and location category, among others. Below, we present two well-known friendship prediction methods for LBSNs based on information gain. Adamic-Adar of Entropy (AAE). This also applies the traditional Adamic-Adar method while considering the place entropy of the common locations of a pair of users x and y.
Therefore, the AAE method is defined as: s AAE x,y = ∈Φ L (x,y) = 1 log E( ) , (3) where E( ) = -x∈Φ V ( ) q x, log(q x, ) is the place entropy of location , q x, = |Φ(x, )| |Φ( )| is the relevance of check-ins of a user, Φ(x, ) = {(x, t, ) | (x, t, ) ∈ Φ(x) ∧ ∈ Φ L (x)} is the set of check-ins of a user x at location , and Φ V ( ) = {x | (x, t, ) ∈ Φ(x) ∧ ∈ Φ L (x)} is the set of visitors of location . Location Category (LC). This calculates the total sum of the ratio of the number of check-ins of all locations visited by users x and y to the number of check-ins of users x and y at these locations while disregarding those with a high place entropy. Therefore, considering an entropy threshold τ E ∈ R, the LC method is defined as: s LC x,y = ∈Φ L (x) ∧ E( )<τ E ∈Φ L (y) ∧ E( )<τ E |Φ( )| + |Φ( )| |Φ(x, )| + |Φ(y, )| . (4) Methods based on Place Information Friendship prediction methods based on place information consider that locations are the main elements on which different similarity criteria can be used. Two of the most common similarity criteria used by methods based on place information are the number of distinct visitations and geographical distance. Methods based on distinct visitations consider specific relations among the different visited places by a pair of user as the key to compute the likelihood of a future friendship between them. Some representative methods based on distinct visitations at specific places are the common location, jaccard of places, location observation, preferential attachment of places, among others [START_REF] Cranshaw | Bridging the gap between physical location and online social networks[END_REF][START_REF] Steurer | Predicting social interactions from different sources of location-based knowledge[END_REF][START_REF] Steurer | Acquaintance or partner predicting partnership in online and location-based social networks[END_REF][START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF]. Below, we present two of the most representative friendship prediction methods for LBSNs based on distinct visitations. Common Location (CL) . This is inspired by the traditional common neighbor method and constitute the simplest and most popular method based on distinct visitations at places to determine the homophily among pairs of users. Common location method, also known as common places or distinct common locations, expresses the number of common locations visited by users x and y. Thus, CL is defined as: s CL x,y = |Φ L (x, y)|, (5) where, Φ L (x, y) = Φ L (x) ∩ Φ L (y) is the previously defined set of common visited places of a pair of users x and y. Jaccard of Places (JacP). This is inspired by the traditional Jaccard method. Jaccard of places method is defined as the fraction of the number of common locations and the number of locations visited by both users x and y. Therefore, JacP is computed as: s JacP x,y = |Φ L (x, y)| |Φ L (x) ∪ Φ L (y)| . ( 6 ) On the other hand, since different studies have shown the importance of geographical or geospatial distance in the establishment of social ties, many authors have proposed to exploit this fact to improve friendship prediction. 
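A compact sketch of the check-in-frequency, information-gain and distinct-visitation measures defined above (Eqs. 1 to 6), written on top of the LBSN helper introduced earlier, is given below. Timestamps are assumed to be in seconds, and the guard applied to low-entropy places in AAE is our own choice, since the definition leaves that corner case open.

```python
import math
from collections import defaultdict

def collocation(net, x, y, tau=86400.0):
    """Co (Eq. 1): pairs of check-ins of x and y at the same place within tau seconds."""
    return sum(1 for (_, tx, lx) in net.by_user[x]
                 for (_, ty, ly) in net.by_user[y]
                 if lx == ly and abs(tx - ty) <= tau)

def adamic_adar_places(net, x, y):
    """AAP (Eq. 2): sum of 1 / log(check-ins at l) over the commonly visited places."""
    score = 0.0
    for l in net.places(x) & net.places(y):
        n = len(net.by_loc[l])
        if n > 1:
            score += 1.0 / math.log(n)
    return score

def place_entropy(net, l):
    """E(l) = -sum_x q_{x,l} log q_{x,l}, with q_{x,l} the share of l's check-ins made by x."""
    total = len(net.by_loc[l])
    per_user = defaultdict(int)
    for (u, _, _) in net.by_loc[l]:
        per_user[u] += 1
    return -sum((c / total) * math.log(c / total) for c in per_user.values())

def adamic_adar_entropy(net, x, y):
    """AAE (Eq. 3): sum of 1 / log E(l) over common places (places with E(l) <= 1 skipped)."""
    score = 0.0
    for l in net.places(x) & net.places(y):
        e = place_entropy(net, l)
        if e > 1.0:
            score += 1.0 / math.log(e)
    return score

def common_location(net, x, y):
    """CL (Eq. 5): number of distinct places visited by both users."""
    return len(net.places(x) & net.places(y))

def jaccard_places(net, x, y):
    """JacP (Eq. 6): common places over the union of both users' visited places."""
    px, py = net.places(x), net.places(y)
    return len(px & py) / len(px | py) if (px | py) else 0.0
```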
Some of the most representative methods based on geographical distance are the min distance, geodist, weighted geodist, Hausdorff distance and adjusted Hausdorff distance [START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF][START_REF] Zhang | Distance and friendship: A distance-based model for link prediction in social networks[END_REF][START_REF] Li | Geo-social media analytics[END_REF]. Below, we discuss two representative friendship prediction methods for LBSNs based on geographical distance. GeoDist (GeoD). This method is the most common of those based on geographical distance. Consider as the "home location" of user x, h x , relative to the most checked-in place. Therefore, GeoD computes the geographical distance between the home locations of users x and y. Thus, GeoD is calculated as: s GeoD x,y = dist( h x , h y ), (7) where dist( , ) is simply the well-known Haversine formula to calculate the great-circle distance between two points and over the Earth's surface [START_REF] Goodwin | The haversine in nautical astronomy[END_REF]. It is important to note that for this case, two users are more likely to establish a friendship if they have a low GeoD value. Adjusted Hausdorff Distance (AHD). This method is based on the classic Hausdorff distance but applying an adjustment to improve the friendship prediction accuracy. The AHD method is thus defined as: s AHD x,y = max{ sup ∈Φ L (x) inf ∈Φ L (y) dist adj ( , ), sup ∈Φ L (y) inf ∈Φ L (x) dist adj ( , )}, (8) where dist adj ( , ) = dist( , ) × max(diversity( ), diversity( )) is the adjusted geographical distance between two locations and , diversity( ) = exp(E( )) is the location diversity used to represent a location's popularity, and sup and inf represent the supremum (least upper bound) and infimum (greatest lower bound), respectively, from the set of visited places of a user x. Also similar to GeoD method, two users will be more likely to establish a relationship if they have a low AHD value. Methods based on Social Information Despite the fact that most of previously described methods capture different social behavior patterns based on the visited places of users, they do not directly use the social strength of ties between visitors of places [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. In the last years, some methods have been proposed to compute the friendship probability between a pair of users based on the places visited by their common friends. Some methods based on social strength are common neighbors within and outside of common places, common neighbors of places, common neighbors with total and partial overlapping of places and total common friend common check-ins [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Below, we describe two representative friendship prediction methods for LBSNs based on social strength. Common Neighbors of Places (CNP). This indicates that a pair of users x and y are more likely have a future friendship if they have many common friends visiting the same places also visited by at least x or y. 
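The two distance-based measures just described can be sketched as follows; a coords dictionary mapping each location to its (latitude, longitude) in degrees is an assumption of this illustration, and the adjusted distance follows the diversity term exp(E(l)) given above.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points given in degrees."""
    r_earth = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = math.sin((lat2 - lat1) / 2) ** 2 \
        + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

def geodist(net, coords, x, y):
    """GeoD (Eq. 7): distance between home locations; lower values mean likelier friendship."""
    return haversine_m(coords[net.home(x)], coords[net.home(y)])

def adjusted_distance(net, coords, l1, l2):
    """dist_adj: geographical distance inflated by the diversity exp(E(l)) of the busier place."""
    div = max(math.exp(place_entropy(net, l1)), math.exp(place_entropy(net, l2)))
    return haversine_m(coords[l1], coords[l2]) * div

def adjusted_hausdorff(net, coords, x, y):
    """AHD (Eq. 8): symmetric Hausdorff distance over the two users' place sets."""
    px, py = net.places(x), net.places(y)
    d_xy = max(min(adjusted_distance(net, coords, a, b) for b in py) for a in px)
    d_yx = max(min(adjusted_distance(net, coords, b, a) for a in px) for b in py)
    return max(d_xy, d_yx)
```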
Thus, the CNP method is defined as: s CN P x,y = |Λ L x,y |, (9) where Λ L x,y = {z ∈ Λ x,y | Φ L (x) ∩ Φ L (z) = ∅ ∨ Φ L (y) ∩ Φ L (z) = ∅} is the set of common neighbors of places of users x and y, and Λ x,y = {z ∈ V | (x, z) ∈ E ∧ (y, z) ∈ E} is the traditional set of common neighbors of pair of users x and y. Common Neighbors with Total and Partial Overlapping of Places (TPOP). This considers that a pair of users x and y could develop a friendship if they have more common friends visiting places also visited by both users than common friends who visited places also visited by only one of them. Therefore, the TPOP method is defined as: s T P OP x,y = |Λ T OP x,y | |Λ P OP x,y | , (10) where, Λ T OP x,y = {z ∈ Λ L x,y | Φ L (x)∩Φ L (z) = ∅∧Φ L (y)∩Φ L (z) = ∅} is the set of common neighbors with total overlapping of places, and Λ P OP x,y = Λ L x,y -Λ T OP x,y is the set of common neighbors with partial overlapping of places. Proposals We analyzed the reviewed link prediction methods and observed that some of them use more than one information source to improve their prediction accuracy. For example, AAP is naturally a method based on check-in frequency but it also use distinct visitations at specific places as additional information source. Other example is AHD, which is naturally a method based on geographical distance but it also use check-in frequency and information gain as additional information sources. Table 1 provides an overview of different information sources used by each friendship prediction method described in Section 3.3. Table 1: Summary of the friendship prediction methods for LBSNs, from the literature and our proposals, as well as the information sources used to make their predictions. From Table 1 we found that some information sources were not combined, for instance, social strength is only combined with distinct visitations at specific places. Assuming that combination of some information sources could improve the friendship prediction accuracy, we propose five new methods referred to as check-in observation (ChO), check-in allocation (ChA), friendship allocation within common places (FAW), common neighbors of nearby places (CNNP) and nearby distance allocation (NDA). They are shown in bold in Table 1 and are described as follows: Method Check-in Observation (ChO). This is based on both the distinct visitations at specific places and check-in frequency to perform predictions. We define ChO method as the ratio of the sum of the number of check-ins of users x and y at common visited places to the total sum of the number of check-ins at all locations visited by these users. Thus, ChO is computed as: s ChO x,y = ∈Φ L (x,y) |Φ(x, )| + |Φ(y, )| ∈Φ L (x) |Φ(x, )| + ∈Φ L (y) |Φ(y, )| . ( 11 ) Check-in Allocation (ChA). This is based on the traditional resource allocation method, ChA refines the popularity of all common visited places of users x and y through the count of total check-ins of each of such places. Therefore, ChA is defined as: s ChA x,y = ∈Φ L (x,y) 1 |Φ( )| . ( 12 ) ChA heavily punishes high numbers of check-ins at popular places (e.g. public venues) by not applying a logarithmic function on the size of sets of all check-ins at these places. Similar to ChO, the ChA method is also based on both the distinct visitations at specific places and check-in frequency to work. Friendship Allocation Within Common Places (FAW). This is also inspired by the traditional resource allocation method. 
Let the set of common neighbors within common visited places be Λ W CP x,y = {z ∈ Λ x,y | Φ L (x, y)∩ Φ L (z) = ∅}, the FAW method refines the number of check-ins made by all common friends within common visited places of users x and y. Therefore, the FAW is defined as: s F AW x,y = z∈Λ W CP x,y 1 |Φ(z)| . ( 13 ) Despite the use of check-in frequency and distinct visitations at places by FAW, we consider that this method is mainly based on social strength, due to the fact that this criterion is the filter used to perform predictions. Common Neighbors of Nearby Places (CNNP). This counts the number of common friends of users x and y whose geographical distance between their home locations and the home location of at least one, x or y, lies within a given radio. Therefore, given a distance threshold τ d , CNNP is computed as: s CN N P x,y = |{z | ∀z ∈ Λ x,y ∧ (dist( h x , h z ) ≤ τ d ∨ dist( h y , h z ) ≤ τ d )}|. ( 14 ) CNNP uses full place information as well as social information to make predictions, however we consider that it is a method based on social strength due to the fact that this criterion is fundamental for CNNP to work. Nearby Distance Allocation (NDA). This refines all the minimum adjusted distances calculated between the home locations of users x and y, and the respective home locations of all of their common neighbors of places. Therefore, NDA is defined as: s N DA x,y = z∈Λ L x,y 1 min{dist adj ( h x , h z ), dist adj ( h y , h z )} . ( 15 ) NDA is the only method that uses full check-in, place and social information. However, as previously applied for the other proposals, since NDA uses social strength as the main criterion, we consider it to be a method based on social information. Performance Evaluation In this section, we present an experimental evaluation carried out for all link prediction methods previously studied. This section includes an analysis of three real-world LBSN datasets with which the experiments were performed as well as a deep analysis of the predictive capabilities of each evaluated method. Dataset Description The datasets used in our experiments are real-world LBSNs in which users made check-ins to report visits to specific physical locations. In this section, we describe their main properties and ways to construct the training and test datasets. Dataset Selection The datasets used for our experiments had to meet certain requirements: i) they had to represent social and location data, i.e. data defining existing connections between users as well as the check-ins of all of them at all of their visited locations, and ii) those connections and/or check-ins had to be time stamped. Based on these two criteria, we selected three datasets collected from real-world LBSNs, which are commonly used by the scientific community for mining tasks in the LBSN domain. Brightkite. This was once a location-based social networking service provider where users shared their locations by checking-in. The Brightkite service was shut down in 2012, but the dataset was collected over the April 2008 to October 2010 period [START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF]. This publicly available dataset1 consists of 58228 users, 214078 relations, 4491144 check-ins and 772788 places. Gowalla. This is also another location-based social networking service that ceased operation in 2012. 
The dataset was collected over the February 2009 to October 2010 period [START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF] and also is publicly available2 . This dataset contains 196591 users, 950327 relations, 6442892 check-ins and 1280969 different places. Foursquare. Foursquare is one of the most popular online LBSN. Currently, this service report more than 50 million users, 12 billion check-ins and 105 million places in January 2018 3 . The dataset used for us experiments was collected over January 2011 to December 2011 period [START_REF] Gao | gscorr: Modeling geo-social correlations for new check-ins on location-based social networks[END_REF]. This publicly available dataset 4 contains 11326 users, 23582 relations, 2029555 check-ins and 172044 different places. The various properties of these datasets were calculated an the values depicted in Table 2. This table is divided into two parts, the first shows topological properties [START_REF] Barabási | Network Science[END_REF] whilst the second shows location properties [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Therefore, considering the first part of Table 2 we observe that the analyzed networks have a small average degree, k , which suggests that the users of these networks had between 4 and 10 friends in average. This implies that the average clustering coefficient, C, of networks is also low. However, the low degree heterogeneity, H = k 2 k 2 , of Brightkite and Foursquare indicate that their users are less different from each other than the users of Gowalla. Also, the assortativity coefficient r, which measures the preference of users to attach to others, shows that only Brightkite is assortative, which is why it has a positive value, indicating the presence of few relationships among users with a similar degree. On the other hand, Gowalla and Foursquare are disassortative, since their assortativity coefficients are negative, indicating the presence of a considerable number of relationships among users with a different degree. Considering the second part of Table 2, we observe that the number of users with at least one check-in, |Φ V |, is a little over 85% of total users of networks. Despite the fact that Gowalla and Brightkite have more users and check-ins than Foursquare, the average number of check-ins per user, Φ , of Foursquare users is greater than that of Gowalla and Brightkite users. However, the average of check-ins per place, L Φ , is similar for Brightkite and Gowalla, whilst for Foursquare is greater, i.e. Foursquare users made more check-ins at a specific place than Brightkite and Gowalla users. Finally, the very small average place entropy, E = 1 |L| ∈L E( ), of Brightkite suggests that the location information in this LBSN is a stronger factor to facilitate the establishment of new relationships between users than for Gowalla and Foursquare users. Data Processing We preprocess the datasets to make the data suitable for our experiments. Considering that isolated nodes and locations without visits can generate noise when measuring the performance of different link prediction methods, it is necessary to apply a policy for selecting data samples containing more representative information. Therefore, for each dataset, we consider only users with at least one friend and with at least one check-in at any location. 
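Before turning to the train/probe construction, the social-strength measures of the previous subsection and the five proposals introduced above can be sketched on top of the earlier helpers (haversine_m, place_entropy, adjusted_distance). The treatment of empty denominators and of users without a home location is our own convention, since the definitions leave those cases unspecified.

```python
def cn_places(net, x, y):
    """Lambda^L_{x,y}: common friends sharing at least one place with x or with y."""
    px, py = net.places(x), net.places(y)
    return {z for z in net.common_neighbors(x, y)
            if (net.places(z) & px) or (net.places(z) & py)}

def cnp(net, x, y):
    """CNP (Eq. 9): number of common neighbours of places."""
    return len(cn_places(net, x, y))

def tpop(net, x, y):
    """TPOP (Eq. 10): common friends overlapping with both users' places, over those
    overlapping with only one of them (empty denominator handled as a plain count)."""
    px, py = net.places(x), net.places(y)
    lam = cn_places(net, x, y)
    top = {z for z in lam if (net.places(z) & px) and (net.places(z) & py)}
    pop = lam - top
    return len(top) / len(pop) if pop else float(len(top))

def cho(net, x, y):
    """ChO (Eq. 11): share of both users' check-ins falling in commonly visited places."""
    common = net.places(x) & net.places(y)
    num = sum(1 for (_, _, l) in net.by_user[x] + net.by_user[y] if l in common)
    den = len(net.by_user[x]) + len(net.by_user[y])
    return num / den if den else 0.0

def cha(net, x, y):
    """ChA (Eq. 12): sum of 1 / (check-ins at l) over the commonly visited places."""
    return sum(1.0 / len(net.by_loc[l]) for l in net.places(x) & net.places(y))

def faw(net, x, y):
    """FAW (Eq. 13): resource allocation over common friends who visited a place that
    both x and y also visited."""
    common = net.places(x) & net.places(y)
    return sum(1.0 / len(net.by_user[z]) for z in net.common_neighbors(x, y)
               if net.places(z) & common and net.by_user[z])

def cnnp(net, coords, x, y, tau_d=1500.0):
    """CNNP (Eq. 14): common friends whose home lies within tau_d metres of x's or y's home."""
    hx, hy = coords[net.home(x)], coords[net.home(y)]
    return sum(1 for z in net.common_neighbors(x, y)
               if haversine_m(coords[net.home(z)], hx) <= tau_d
               or haversine_m(coords[net.home(z)], hy) <= tau_d)

def nda(net, coords, x, y):
    """NDA (Eq. 15): sum of inverse adjusted distances between each common neighbour-of-places'
    home and the closer of the two users' homes."""
    hx, hy = net.home(x), net.home(y)
    score = 0.0
    for z in cn_places(net, x, y):
        d = min(adjusted_distance(net, coords, hx, net.home(z)),
                adjusted_distance(net, coords, hy, net.home(z)))
        if d > 0:
            score += 1.0 / d
    return score
```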
Since our goal is to predict new friendships between users, we divided each dataset into training and test (or probe) sets while taking the time stamps information available into account. Therefore, links formed by Brightkite users who checked-in from April 2008 to January 2010 were used to construct the training set, whilst links formed by users who checked-in from February 2010 to October 2010 were used for the probe set. For Gowalla, the training set was constructed with links formed by users who checked-in from February 2009 to April 2010, and the probe set was constructed with links formed by users who checked-in from May 2010 to October 2010. Whereas, for Foursquare the training set is formed by users who checked-in from January 2011 to September 2011, whilst the probe set is formed by users that made check-ins over the October 2011 to December 2011 period. Table 3 shows the training and testing time ranges for the three datasets. Different studies have used a similar strategy for splitting data into training and probe sets, but they were not concerned about maintaining the consistency between users in both sets [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF][START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF], which could affect the performance of link prediction methods in different ways [START_REF] Yang | Evaluating link prediction methods[END_REF]. To avoid that, we proceeded to remove all links formed by users who checked-in only during the training time range or only in the testing time range. From the links formed by users with check-ins in both the training and testing time ranges, we chose one-third of the links formed by users at random with a higher degree than the average degree for the probe set, while the remaining links were part of the training set. Therefore, we obtained the training set G T (V, E T , L, Φ T ) and probe set G P (V, E P , L, Φ P ), where both sets keep the same users (V ) and locations (L) but differ in the social (E T and E P ) and user-location (Φ T and Φ P ) links. Table 3 Data Limitations Although the datasets selected contain thousands of users and links, they can be considered as relatively small compared to other online social network datasets. Notwithstanding this limitation present in the datasets analyzed in this study, we use them since they meet the requirements explained previously in Section 5.1.1 and also because they are frequently used in the state-of-the-art in order to propose a quantitive and qualitative analysis on the social and spatial factors impacting the friendships [START_REF] Allamanis | Evolution of a location-based online social network: Analysis and models[END_REF][START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF][START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Therefore, this work offers new light on exploiting the different information sources to improve friendship prediction in Brightkite, Gowalla and Foursquare, but our findings could be applied for other LBSNs. 
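A minimal sketch of this temporal train/probe construction is given below. The exact sampling rule for the one third of links moved to the probe set is paraphrased from the description above (here both endpoints must exceed the average degree), so that detail should be read as an assumption rather than the exact procedure.

```python
import random
from collections import defaultdict

def temporal_split(edges, checkins, t_split, seed=0):
    """Keep links between users who checked in both before and after t_split, then move
    roughly one third of the links of higher-than-average-degree users to the probe set."""
    rng = random.Random(seed)
    before = {u for (u, t, _) in checkins if t <= t_split}
    after = {u for (u, t, _) in checkins if t > t_split}
    active = before & after
    kept = [e for e in edges if e[0] in active and e[1] in active]

    degree = defaultdict(int)
    for (u, v) in kept:
        degree[u] += 1
        degree[v] += 1
    k_avg = sum(degree.values()) / max(1, len(degree))

    candidates = [e for e in kept if degree[e[0]] > k_avg and degree[e[1]] > k_avg]
    rng.shuffle(candidates)
    probe = set(candidates[:len(kept) // 3])
    train = [e for e in kept if e not in probe]
    phi_train = [c for c in checkins if c[1] <= t_split]
    phi_probe = [c for c in checkins if c[1] > t_split]
    return train, probe, phi_train, phi_probe
```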
Some studies of the state-of-the-art use other datasets, e.g. Foursquare [START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF][START_REF] Zhang | Transferring heterogeneous links across location-based social networks[END_REF], Facebook [START_REF] Mcgee | Location prediction in social media based on tie strength[END_REF], Twitter [START_REF] Zhang | Transferring heterogeneous links across location-based social networks[END_REF], Second Life [START_REF] Steurer | Predicting social interactions from different sources of location-based knowledge[END_REF][START_REF] Steurer | Acquaintance or partner predicting partnership in online and location-based social networks[END_REF], and other LBSNs. But we cannot use them for two main reasons: i) generally they are not publicly available, and ii) they do not respect the requirements detailed in Section 5.1.1. Experimental Setup For each of the 10 independent partitions of each dataset obtained as explained in Section 5.1.2, we considered 10 executions of each link prediction method presented in Section 3 and our proposals described in Section 4. We then applied different performance measures to the prediction results to determine which were the most accurate and efficient link prediction methods. All of the evaluation tests were performed using the Geo-LPsource framework, which we developed and is publicly available 5 . We set the default parameters of the link prediction methods as follows: i) for Co method we considered that τ = 1 day, ii) for LC method we considered that τ E = E , iii) for CNNP method we considered that τ d = 1500 m., and iv) for AHD method, for a user x and being the most visited place by him, we considered that the comparison . Evaluation Results For the three LBSNs analyzed, Table 4 summarizes the performance results for each link prediction method through different evaluation metrics. Each value in this table was obtained by averaging over 10 runs, over 10 partitions of training and testing sets, as previously detailed in Section 5.2. The values highlighted in bold correspond to the best results achieved for each evaluation metric. From Table 4, imbalance ratio and F-measure results were calculated considering the whole list of predicted links obtained by each evaluated link prediction method. On the other hand, the AUC results were calculated from a list of n = 5000 pairs of wrongly and right predicted links chosen randomly and independently. Due to the number of link prediction methods studied and the different ways they were evaluated, we performed a set of analyses to determine which were the best friendship prediction methods for LBSNs. Reducing the Prediction Space Size The prediction space size is related to the size of the set of predicted links, L p . Most existing link prediction methods prioritize an increase in the number of correctly predicted links even at the cost of a huge amount of wrong predictions. This generates a extremely skewed distribution of classes in the prediction space, which in turn impairs the performance of any link prediction method [START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF]. Therefore, efforts should also focus not only on reducing the number of wrong predictions but also on increasing the number of correctly predicted links relative to the total number of predictions. 
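The figures reported in Table 4 can be computed from the score list a method produces; a sketch follows. The reading of IR as |L_p| / |TP| and the AUC sampling with n = 5000 pairs follow the descriptions given in the text, while the candidate-pair generation is left to the caller; these choices are ours, not a transcription of the Geo-LPsource code.

```python
import random

def evaluate(scored_pairs, probe, L=1000, n_auc=5000, seed=0):
    """scored_pairs: list of (score, (x, y)); probe: set of frozenset({x, y}) true new links."""
    rng = random.Random(seed)
    ranked = sorted(scored_pairs, reverse=True, key=lambda sp: sp[0])
    predicted = [pair for s, pair in ranked if s > 0]               # L_p
    tp = sum(1 for pair in predicted if frozenset(pair) in probe)
    fp = len(predicted) - tp
    fn = len(probe) - tp
    ir = len(predicted) / tp if tp else float("inf")                # IR = |L_p| / |TP|
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if probe else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    p_at_L = sum(1 for _, pair in ranked[:L] if frozenset(pair) in probe) / L

    pos = [s for s, pair in scored_pairs if frozenset(pair) in probe]
    neg = [s for s, pair in scored_pairs if frozenset(pair) not in probe]
    auc = 0.0
    for _ in range(n_auc):                                          # sampled AUC, n pairs
        sp, sn = rng.choice(pos), rng.choice(neg)
        auc += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    auc /= n_auc
    return {"IR": ir, "F1": f1, "AUC": auc, "precision@L": p_at_L}
```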
Previous studies showed that the prediction space size of methods based only on the network topology is around 10 11 ∼ 10 12 links for Brightkite and Gowalla. However, by using methods based on location information, the prediction space can be reduced by about 15-fold or more [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF]. Based on that and to determine if reduction of the prediction space is related to different information sources, in Figure 2 we report the average prediction space size of the different link prediction methods analyzed in this study. Figure 2 shows that for the analyzed networks, methods based on checkin frequency, information gain, distinct visitations at places and geographical distance, followed the traditional logic of obtaining a high number of right predictions at the cost of a much higher number of wrong predictions [START_REF] Wang | Human mobility, social ties, and link prediction[END_REF]. On the other hand, methods based on social strength led to a considerably lower number of wrong predictions at the cost of a small decrease in the number of correctly predicted links relative to the results obtained by the first cited methods, which is important in a real scenario [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Our proposals followed a similar scheme as methods based on social strength, leading to less wrong predictions. G1 G2 G3 G4 G5 G6 10 This fact is clearly shown by the IR results in Table 4 where, besides highlighting that Co method generally had a better IR performance, we observed that some methods based on check-in frequency, information gain, distinct visitations at places and geographical distance had an IR higher than most methods based on social strength and our proposals. Therefore, Co was the method with the overall best IR performance, whilst GeoD and AHD were the worst ones. Considering only our proposals, we found that FAW and CNNP performed better in IR. These two methods have social components, which help to significantly reduce the prediction space size. The worst IR performance of our proposals was obtained by NDA, which is based on geographical distance, thus confirming that this type of information source generates a large prediction space. Measuring the Accuracy Since the IR results shown that some methods obtained a considerable number of correctly predicted links whilst others obtained an absurdly large number of wrongly predicted links, we adopted the f-measure (F 1 ) to evaluate the performance of prediction methods in terms of relevant predicted links. Therefore, we observe that FAW method, which is one of our proposals, had the best f-measure performance in the three analyzed LBSNs. To facilitate the analysis of all link prediction methods, based on Table 4 we ranked the average F 1 results obtained by all the link prediction methods in the three analyzed networks, and then we applied the Friedman and Nemenyi posthoc tests [START_REF] Demšar | Statistical comparisons of classifiers over multiple data sets[END_REF]. Therefore, the F-statistics with 14 and 28 degrees of freedom and at the 95 percentile was 2.06. 
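The two statistical procedures used here can be reproduced from the per-dataset ranks of the methods; the sketch below uses SciPy's F distribution, and the q_0.05 constant for 15 methods (about 3.39, from Demšar's tables) is quoted from memory, so it should be checked against the reference.

```python
import math
from scipy import stats

def friedman_F(ranks):
    """Iman-Davenport F statistic from a dict method -> list of ranks (one per dataset)."""
    k = len(ranks)                           # number of methods (15 here)
    N = len(next(iter(ranks.values())))      # number of datasets or metric columns
    R = {m: sum(r) / N for m, r in ranks.items()}
    chi2 = 12.0 * N / (k * (k + 1)) * (sum(v * v for v in R.values()) - k * (k + 1) ** 2 / 4.0)
    F = (N - 1) * chi2 / (N * (k - 1) - chi2)
    p_value = 1.0 - stats.f.cdf(F, k - 1, (k - 1) * (N - 1))   # df = 14 and 28 when N = 3
    return F, p_value

def nemenyi_cd(k, N, q_alpha=3.391):
    """Critical difference CD = q_alpha * sqrt(k (k + 1) / (6 N)); about 12.4 for k=15, N=3."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * N))
```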
According to the Friedman test using the F-statistics, the null-hypothesis that the link prediction methods behave similarly when compared with respect to their F1 performance should be rejected. Figure 3(a) shows the Nemenyi test results for the 15 analyzed link prediction methods considering the F1 ranking. The critical difference (CD) value for comparing the mean ranking of two different methods at the 95 percentile was 12.38, as shown on the top of the diagram. The method names are shown on the axis of the diagram, with our proposals highlighted in bold. The lowest (best) ranks are on the left side of the axis. Methods connected by a bold line in the diagram have no statistically significant difference, so the Nemenyi test indicated that FAW has a statistically significant difference with LC and GeoD. Figure 3(a) indicates that methods based on social strength, such as FAW and TPOP, performed better than the others, occupying the first and second position, respectively. Co and ChO are in third and fourth position, respectively, whilst JacP and CL tied for the fifth position. After these methods, and a little further away, ChA, AAP and AAE tied for the sixth position, and CNNP and NDA are in seventh and eighth position, respectively. CNP is ninth, AHD is tenth, GeoD is eleventh and LC is twelfth. Therefore, we observe that two of our proposals, FAW and ChO, are in the top-5. Moreover, methods based on information gain, such as LC, and methods based on geographical distance, such as GeoD and AHD, were at the end of the ranking. Analyzing the Predictive Power Table 4 also shows the prediction results obtained for AUC. From these results, we observed that CNP, GeoD and JacP outperformed all the other link prediction methods in Brightkite, Gowalla and Foursquare, respectively. In addition, we found that all the link prediction methods performed better than pure chance, except for LC in Foursquare. Furthermore, to gain further insight into the real prediction power of the evaluated link prediction methods, we followed the same scheme used previously for the F1 analysis. Therefore, we ranked the average AUC results obtained by all the link prediction methods, and then we applied the Friedman and Nemenyi post-hoc tests. As for the F1 analysis, the critical value of the F-statistics with 14 and 28 degrees of freedom at the 95 percentile was 2.06. However, unlike the F1 analysis, this time the Friedman test suggested that the null-hypothesis that the link prediction methods behave similarly when compared by their AUC performance should not be rejected. Figure 3(b) shows the Nemenyi test results for the evaluated methods ranked by AUC. The diagram indicates that the CD value calculated at the 95 percentile was 12.38. This test also showed that the link prediction methods have no statistically significant difference, so they are all connected by a bold line. Figure 3(b) indicates that, differently from the F1 analysis, this time the methods based on geographical distance and information gain are in the first positions. Thus, GeoD and AAE are in first and second position, respectively. JacP is third, whilst FAW and ChA tied for the fourth position and AAP is fifth. The rest of the ranking was in the following order: NDA, CNP, AHD, ChO, CL, TPOP, Co, CNNP and LC. In this ranking, we also have two of our proposals in the top-5: FAW and ChA. To our surprise, LC remains in last position, and some methods that performed well in the F1 ranking, such as TPOP, Co and CL, this time were in compromising positions. Obtaining the Top-5 Friendship Prediction Methods Since some link prediction methods performed better in the prediction space analysis whilst others did so in the prediction power analysis, we analyzed the F1 and AUC results at the same time. Therefore, from Table 4 we ranked the average F1 and AUC results obtained by all the link prediction methods, and then applied the Friedman and Nemenyi post-hoc tests to them. The critical F-statistic value with 14 and 70 degrees of freedom and at the 95 percentile was 1.84.
Based on this F-statistic, the Friedman test suggested that the nullhypothesis that the methods behave similarly when compared according to their F 1 and AUC performances should be rejected. Figure 4 shows the Nemenyi test results for the analyzed methods in our final ranking. The diagram in Figure 4 indicates that the CD value at the 95 percentile is 8.76. From diagram in Figure 4, we observe that FAW has statistical significant difference with LC. 4. Diagram shows the final ranking of link prediction methods considering both the optimal reduction of prediction space size and high prediction power. Our proposals are highlighted in bold. Figure 4 indicates that FAW is in first position, JacP is second, AAE is third, ChA is fourth and AAP is fifth. ChO and TPOP tied for the sixth position. The rest of the ranking was in the following order: CL, GeoD, NDA, Co, CNP, AHD, CNNP and LC. Therefore, two of our proposals, FAW and ChA, are in the top-5 of the final ranking. LC definitively has the worst performance. Note that the methods in the top-5 belong to the different information sources identified in this study, so we have a method based on social strength (FAW), a method based on distinct visitations at places (JacP), a method based on information gain (AAE) and two methods based on check-in frequency (ChA and AAP). The only one missing in the top-5 of the final ranking is some method based on geographical distance. For recommending to users some links as possible new friendships, we can just select the links with the highest scores [START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Furthermore, whereas for recommendation task is not enough only a method with good prediction performance, also it is necessary that from a limited portion of the total predicted links it generates a high number of right predictions, good enough to be showed to users as appropriate friendship suggestions [START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF][START_REF] Ahmadian | A social recommendation method based on an adaptive neighbor selection mechanism[END_REF]. Therefore, to assess the performance of top-5 methods from the final ranking through limited segments of the total list of predicted links, we analyzed them by precisi@L. Figure 5 shows the different precisi@L performances for the top-5 methods of our final ranking. These precisi@L results were calculated for different L values and for each analyzed LBSN. Figure 5 indicates that most of the evaluated methods performed best when L = 100, i.e. they are able to make a few accurate predictions. When link prediction methods have to make more than a thousand predictions, i.e when L > 1000, their prediction abilities decrease considerably. Moreover, Figure 5 shows that the evaluated methods have a similar behavior in the three analyzed LBSNs. Thus, ChA, AAP and AAE performed similarly with a slight superiority of ChA. Moreover, JacP and FAW showed similar performance with a slight superiority of JacP. Analyzing the precis@n performance of methods in each analyzed network, Figure 5(a) shows that in Brightkite, AAP outperformed all the other evaluated methods when L = 100. Thereafter, our proposal FAW performed better than the rest of methods for the rest of L values. 
JacP outperformed poorly. Figure 5(b) shows that in Gowalla, the methods JacP achieved the best performance for all of the L values. One of our proposals, i.e. ChA, ranks second when L = 100, to remain in third position for the rest of L values. When L = 1000, other of our proposals, i.e. FAW, achieve the second position and it holds that position for the rest of L values. Finally, Figure 5(c) shows that methods in this network achieved very low precisi@L values (less than 0.2). However, in Foursquare, ChA outperformed all the methods when L = 100 but it is overcome by JacP, which keeps the second position for the rest of L values. Our proposal FAW performed poorly when L = 100 but it achieves the third position when L = 1000 and maintain this position since it slightly exceeds AAE and AAP. Conclusion In last years, a variety of online services which provide users with easy ways to share their geo-spatial locations and location-related content have become popular. These services, called LBSNs, constitute a new type of social network and give rise to new opportunities and challenges with regard to different social network issues, such as location recommendation [START_REF] Mcgee | Location prediction in social media based on tie strength[END_REF][START_REF] Wang | A recommender system research based on location-based social networks[END_REF][START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF], user identification [START_REF] Rossi | It's the way you check-in: Identifying users in location-based social networks[END_REF][START_REF] Riederer | Linking users across domains with location data: Theory and validation[END_REF], discovery of local experts [START_REF] Cheng | Who is the Barbecue King of Texas?: A Geo-spatial Approach to Finding Local Experts on Twitter[END_REF][START_REF] Liou | Design of contextual local expert support mechanism[END_REF][START_REF] Niu | On local expert discovery via geo-located crowds, queries, and candidates[END_REF], and discovery of travel companions [START_REF] Liao | Who wants to join me?: Companion recommendation in location based social networks[END_REF]. Motivated by the important role that LBSNs are playing for millions of users, we conducted a survey of recent related research on friendship prediction and recommendation. Although there is abundant methods to tackle the friendship prediction problem in the LBSN domain, there is a lack of well organised and clearly explained taxonomy that helps the best use of current literature. Therefore, our first contribution in this work was related to proposes a taxonomy for friendship prediction methods for LBSNs based on five information sources identified: frequency of check-ins, information gain, distinct visitations at places, geographical distance and social strength. Based on the taxonomy proposed, we identified some gaps in existing friendship prediction methods and proposed five new ones: check-in observation (ChO), check-in allocation (ChA), friendship allocation within common places (FAW), common neighbors of nearby places (CNNP) and nearby distance allocation (NDA). These new friendship prediction methods are exclusive to perform friendship prediction task in the LBSN context and constituted our second contribution. 
Since we aimed to objectively quantify the predictive power of friendship prediction methods in LBSNs, as well as to determine how well they work in the context of recommender systems, our third contribution is the identification of the top-5 friendship prediction methods that perform best in the LBSN context. For this purpose, we performed an exhaustive evaluation process on snapshots of three well-known real-world LBSNs. Based on our results, we empirically demonstrated that some friendship prediction methods for LBSNs can be ranked as the best for one evaluation measure but perform poorly for others. Thus, we stressed the importance of choosing the appropriate measure according to the objective pursued in the friendship prediction task. For instance, some friendship prediction methods generally performed better with regard to the F-measure than with AUC, so if a real-world application needs to focus on minimizing the number of wrong predictions, the best option is to consider methods that work well in terms of the F-measure. However, if the focus is on obtaining a high number of right predictions, with a high chance that these predictions represent strong connections, then the best option could be to consider methods that work well in terms of AUC. Nevertheless, in a real-world scenario it will likely be necessary to balance both the F-measure and the AUC performance of methods. Thus, we finally identified the top-5 friendship prediction methods that performed in a balanced way across the different metrics. Moreover, this top-5 contains two of our proposals, FAW in the first position and ChA in the fourth. Another observation based on our results is that the use of a variety of information sources does not guarantee the best performance of a method. For instance, the NDA method, which is one of our proposals, is the only one that uses all the information sources identified, yet it appears in the ninth position of our final ranking. Finally, we also observed that methods based purely on check-in information or place information performed worse than methods combining these information sources with social information. Therefore, we have an empirical foundation to support the argument that the best way to cope with the friendship prediction problem in the LBSN context is to combine social strength with location information. The future directions of our work will focus on location prediction, which will be used to recommend places that users could visit. For that, we hope that the location information sources identified in this work can also be used in the location prediction task.

Metric definitions: IR = |L_p| / |TP|; precision, defined as P = |TP| / (|TP| + |FP|); and recall, defined as R = |TP| / (|TP| + |FN|).

Figure 1: Information sources and the different similarity criteria used by existing methods to perform friendship prediction in LBSNs.

Figure 2: Number of correctly and wrongly predicted links for methods based on check-in frequency (G1), information gain (G2), distinct visitations at places (G3), geographical distance (G4), social strength (G5) and our proposals (G6) for (a) Brightkite, (b) Gowalla and (c) Foursquare. The dashed horizontal lines indicate the number of truly new links (links in the probe set) for each dataset. Results averaged over the 10 analyzed partitions and plotted in log10 scale.

Figure 3: Nemenyi post-hoc test diagrams obtained from (a) F-measure and (b) AUC results shown in Table 4. Our proposals are highlighted in bold.

Figure 4: Nemenyi post-hoc test diagram obtained over the F1 and AUC average ranks shown in Table 4. The diagram shows the final ranking of link prediction methods considering both the optimal reduction of prediction space size and high prediction power. Our proposals are highlighted in bold.

Figure 5: Precisi@L performance for the top-5 methods of the final ranking considering different L values for (a) Brightkite, (b) Gowalla and (c) Foursquare.

Table 2: The main properties of the experimental LBSNs.
          Brightkite   Gowalla    Foursquare
|V|       58228        196591     11326
|E|       214078       950327     23582
⟨k⟩       7.35         9.66       4.16
C         0.17         0.24       0.06
H         8.66         31.71      7.66
r         0.01         -0.03      -0.07
|Φ|       4491144      6442892    2029555
|Φ_V|     50686        107092     9985
⟨Φ⟩       88           60         179
|L|       772788       1280969    172044
⟨L_Φ⟩     5            5          11
⟨E⟩       0.05         0.25       0.19

3 https://foursquare.com/about
4 http://www.public.asu.edu/~hgao16/Publications.html

Table 3: Details of the pre-processed datasets.
Dataset      Training time range   Testing time range   |V|     |L|      |E_T|    |E_P|
Brightkite   2008/04 - 2010/01     2010/02 - 2010/10    4606    277515   49460    24800
Gowalla      2009/02 - 2010/04     2010/05 - 2010/10    19981   607094   232194   87619
Foursquare   2011/01 - 2011/09     2011/10 - 2011/12    7287    101546   12258    8565

Table 3 also summarizes the average number of users, |V|, the average number of different locations, |L|, the average number of training social links, |E_T|, and the average number of testing social links, |E_P|, obtained by averaging the 10 independent partitions of each dataset. It is important to note that, for the three datasets, the average number of check-ins in the training set, |Φ_T|, is two-thirds of the total number of check-ins, whilst the average number of check-ins in the probe set, |Φ_P|, is the remainder.

Table 4: Friendship prediction results for Brightkite, Gowalla and Foursquare. Highlighted values indicate the best results for each evaluation metric considered.
Method   Brightkite (IR, F1, AUC)     Gowalla (IR, F1, AUC)        Foursquare (IR, F1, AUC)
Co       4.934    0.070   0.668       14.972   0.051   0.554       4.488    0.045   0.554
AAP      13.190   0.104   0.682       36.531   0.045   0.728       13.367   0.034   0.655
AAE      13.190   0.104   0.694       36.586   0.045   0.736       13.367   0.034   0.670
LC       34.000   0.055   0.629       180.945  0.011   0.542       27.844   0.017   0.470
CL       13.114   0.105   0.676       36.327   0.045   0.682       13.368   0.034   0.630
JacP     13.114   0.105   0.630       36.327   0.045   0.742       13.368   0.034   0.708
GeoD     35.005   0.053   0.761       180.461  0.011   0.767       35.710   0.018   0.705
AHD      31.689   0.056   0.710       223.714  0.011   0.681       35.782   0.018   0.656
CNP      31.180   0.060   0.685       66.484   0.029   0.687       23.277   0.027   0.608
TPOP     13.441   0.105   0.673       25.383   0.057   0.665       12.588   0.036   0.594
ChO      13.079   0.104   0.608       31.197   0.050   0.714       13.292   0.034   0.671
ChA      13.173   0.104   0.676       36.460   0.045   0.736       13.367   0.034   0.667
FAW      9.678    0.113   0.740       15.821   0.069   0.718       7.764    0.046   0.642
CNNP     9.387    0.048   0.552       18.868   0.046   0.620       4.920    0.039   0.569
NDA      22.496   0.076   0.700       47.540   0.037   0.720       15.325   0.024   0.624

since G is an undirected network.
1 http://snap.stanford.edu/data/loc-brightkite.html
2 http://snap.stanford.edu/data/loc-gowalla.html
5 https://github.com/jvalverr/Geo-LPsource

Acknowledgments This research was partially supported by Brazilian agencies FAPESP (grants 2015/14228-9 and 2013/12191-5), CNPq (grant 302645/2015-2), and by the French SONGES project (Occitanie and FEDER).
69,453
[ "4967", "6247" ]
[ "265913", "182888", "409262", "265913" ]
01761320
en
[ "chim", "spi" ]
2024/03/05 22:32:13
2016
https://hal.science/hal-01761320/file/Battaglia_Pyrocarbon_Composite_revised.pdf
Indrayush De Jean-Luc Battaglia Gérard L Vignoles Thermal properties measurements of a silica/pyrocarbon composite at the microscale Laminar pyrocarbons are used as interphases or matrices of carbon/carbon and ceramic-matrix composites in several high-temperature aerospace applications. Depending on their organization at the microscale, they can have a variety of mechanical and thermal properties. Hence, it is important to know, before thermal processing, the properties of these matrices at the micrometer scale in order to improve and control the composite behavior in a macroscopic scale. We use Scanning Thermal Microscopy (SThM) on a silica fiber / regenerative laminar pyrocarbon matrix composite to provide an insight into the effective thermal conductivity of pyrocarbon as well as the thermal contact resistance at the interface between fiber and matrix. The conductivity of pyrocarbon is discussed as a function of its nanostructural organization. I Introduction Carbon/carbon (C/C) composite materials are choice materials for use in extreme environments, such as space propulsion rocket nozzles, atmospheric re-entry thermal protection systems, aircraft brake discs, and Tokamak plasma-facing components [START_REF] Savage | Carbon-Carbon composites[END_REF] . In addition to carbon fibers, they contain interphases and matrices made of pyrolytic carbon, or pyrocarbon (PyC) [START_REF] Manocha | Carbon reinforcements and C/C composites[END_REF] . This special type of carbon can be though of as a heavily faulted graphite. It is prepared via a gas-phase route, called Chemical Vapor Deposition (CVD) or Infiltration (CVI). It is therefore quite unavailable in bulk form. It has, depending on its processing parameters, a very versatile nanostructure [START_REF] Oberlin | Pyrocarbons[END_REF][4][5] and consequently, broadly varying mechanical and thermal properties, usually anisotropic to a more or less large extent [START_REF] Vignoles | Carbones Pyrolytiques ou Pyrocarbonesdes Matériaux Multi-Echelles et Multi-Performances[END_REF] . Posterior heat treatments may further alter their structure and properties [START_REF] Oberlin | Carbonization and graphitization[END_REF] . Hence, it is important to know the properties of these matrices at the micrometer scale in order to improve and control the composite behavior in a macroscopic scale. In this frame, a large variety of PyC samples have been prepared 8 . That represented in Fig. 1 consists of an asdeposited regenerative laminar (ReL) PyC 9 deposit made on 5-µm radius glass fibers. The general orientation of the anisotropic texture is concentric around the fibers, as exhibited in Fig. 2, and results in orthotropic thermal properties of the matrix in the cylindrical coordinate frame following the fiber axis. This is due to the fact that the graphitic sheets exhibit strong thermal anisotropy. The thermal behavior of these non-homogeneous composites can be captured through characterization that will provide the thermal properties of the PyC. Previous thermoreflectance (TR) experiments [START_REF] Jumel | AIP Conf. Procs[END_REF][START_REF] Jumel | [END_REF][START_REF] Jumel | AIP Conf. Procs[END_REF] have been performed to assess the anisotropic thermal diffusivity of the Smooth Laminar (SL) PyC and of the Rough Laminar (RL) PyC, either pristine or after different heat treatments. 
It was obtained that the in-plane thermal diffusivity (in orthoradial direction) for the as-prepared SL PyC matrix was 0.14 cm².s -1 while the ratio of the in-plane and out-of-plane thermal diffusivities was 7; the as-prepared RL exhibits higher figures (0.42 cm 2 .s -1 and 20, respectively), denoting a more graphitic and anisotropic structure. ReL PyC, which is a highly anisotropic form of PyC, differs from RL by a larger amount of defects [START_REF] Farbos | [END_REF] and had not been investigated so far. The TR method has in the current case some possible drawbacks: first, its spatial resolution is of the same size as the deposit thickness, a fact that could result in inaccuracies; second, this method requires a rather strong temperature increase on the heating area in order to increase the signal-tonoise ratio, therefore yielding an effective diffusivity characteristic at temperature markedly higher than the ambient and nonlinear effects. On the other hand, the thermal boundary resistance (TBR) at the interface between the fiber and the matrix has not been investigated so far. Since the thermal conductivity for both the silica fiber and the PyC along the radial axis is low, the TBR was not expected to be a key parameter on heat transfer. However, its quantitative identification from measurements at the microscale could bring complementary information regarding the chemical bonding and/or structural arrangement at the interface 14 . In order (i) to overcome the drawbacks of the TR method, (ii) to provide thermal conductivity value for ReL PyC, and (iii) to measure as well the thermal boundary resistance at the interface between the PyC and the glass fiber, we have implemented the scanning thermal microscopy (SThM) experiment involving the 3ω mode 15 . The advantage of using SThM is that (i) the spatial resolution achieved is in the submicron scale and (ii) that high temperature differences are not involved, avoiding thus any risk of nonlinearity. In addition, SThM leads to absolute temperature measurements of the probe as well as to phase measurements when working under the 3 mode. Therefore, advanced inverse techniques can be implemented that can benefit from the frequency and spatial variations of both functions in order to investigate the thermal properties of the PyC and the TBR at the interface between the fiber and the matrix. In the present study the fiber is made of a single glass structure whose properties are available in literature 16 ( k =1.4 W.m -1 .K -1 , r = 2200kg.m -3 , C p = 787 J.kg -1 .K -1 ). The density and specific heat of ReL PyC have been also measured as [START_REF] Farbos | [END_REF]  = 2110 kg.m -3 and C p = 748 J.K -1 .kg -1 respectively. II Scanning Thermal Microscopy in 3 mode -experiment Scanning thermal microscopy is a well-established and almost ideal tool for investigating nanostructures like semiconductors and nano-electronic devices, due to its intrinsic sensitivity with respect to local material properties and to the thermal wave's ability to propagate through the material with sub micrometer lateral resolution. Since its inception in 1986, based on the principle of Scanning Thermal Profile (STP) [START_REF] Williams | [END_REF] , SThM has seen a lot of improvements and developments [18][19][20][21] including the Wollaston probe and thermocouple probes. 
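From the property values quoted above, the quantities actually used by the heat-transfer model of section III, namely the volumetric heat capacity ρCp and the thermal diffusivity a = k/(ρCp), follow directly. A short sketch with those literature values (the numerical outputs are only indicative):

```python
def diffusivity(k, rho, cp):
    """Thermal diffusivity a = k / (rho * cp), in m^2/s."""
    return k / (rho * cp)

# Silica fiber: k = 1.4 W/m/K, rho = 2200 kg/m^3, cp = 787 J/kg/K  ->  a ~ 8.1e-7 m^2/s
print(diffusivity(1.4, 2200.0, 787.0))
# For the ReL pyrocarbon only rho*cp is known a priori (2110 * 748 ~ 1.58e6 J/m^3/K);
# its conductivity, hence its diffusivity, is what the SThM identification provides.
print(2110.0 * 748.0)
```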
The employed SThM (Nanosurf Easyscan AFM) uses a 400 nm-thick silicon nitride (Si3N4) AFM probe provided by Kelvin Nanotechnology, on which is deposited a palladium strip (Pd, 1.2 µm thick and 10 µm long) that plays both the role of the heater and that of the thermometer. The SThM probe has a tip curvature radius of r_s = 100 nm. The contact force between the probe and the sample was chosen between 5 and 10 nN during our experiments and was accurately controlled during the probe motion using a closed feedback loop on a piezo element, which ensures the displacement in the z direction with precise steps of 1 nm. The contact area between the probe and the surface is assumed to be a disk with radius r_0. A periodic current I = I_0 cos(ωt) with angular frequency ω = 2πf passes through the strip with electrical resistance R_0 at room temperature, generating Joule heating and thus acting as a heat source dissipating the power P(2ω) = P_0 [1 + cos(2ωt)]/2. The resulting temperature oscillation of the strip at 2ω is retrieved from the third-harmonic voltage U_3ω measured across it as ΔT_2ω = 2U_3ω/(R_0 I_0 α_R). In our configuration, R_0 = 155 Ω and I_0 = 750 µA. The quantity ΔT_2ω must be viewed as an average temperature of the Pd wire, since the change is expected to occur very close to the probe tip when it enters into contact with the investigated material surface. The harmonic contribution is measured using a differential stage coupled with a lock-in amplifier. The thermal coefficient of the Pd strip was calibrated by measuring the change in sensor resistance as a function of temperature, and a value of α_R = (1.3 ± 0.2) × 10-3 K-1 was obtained. The contact between the probe and the surface involves a thermal boundary resistance that plays a very significant role on the measured temperature. This resistance involves at least three main contributions 22,23: (i) the solid-solid contact resistance, (ii) the heat diffusion through the gas surrounding the probe and the water meniscus that inevitably forms at the probe tip, and (iii) the change in the temperature gradient within the Pd strip between the out-of-contact mode (used to evaluate the probe thermal impedance) and the mode in which the probe is in contact with the surface. In the present study, although we worked under argon flow (after a preliminary air removal by primary vacuum), the diffusion through the gas and the water meniscus may still be present, even if its contribution is bounded below. Moreover, for silicon nitride probes, an increase in the contact area can also be explained by the flattening of the tip apex when in contact with the sample. As we showed in a previous study 18, the thermal contact resistance R_c at the interface between the probe and the surface integrates all the physical phenomena listed above. On the other hand, we observed 18 that, in this experimental condition, the contact resistance as well as the radius r_0 of the heated area did not vary significantly when the thermal conductivity of the sample varied from 0.2 to 40 W.m-1.K-1. This observation was made considering the probe was motionless and the roughness of all samples was less than 10 nm. Finally, it was also observed 18 that the sensitivity to the thermal conductivity variation started to vanish above 25 W.m-1.K-1. This is obviously related to the very small contact area, since the smaller this area, the lower the sensitivity to thermal conductivity changes for highly conductive samples.
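As a small numerical illustration of the 3ω relation given above (with the configuration values R_0 = 155 Ω, I_0 = 750 µA and α_R = 1.3 × 10-3 K-1), the probe temperature oscillation follows directly from the measured third-harmonic voltage:

```python
def delta_T_2w(U_3w, R0=155.0, I0=750e-6, alpha_R=1.3e-3):
    """Average 2-omega temperature oscillation of the Pd strip: 2 U_3w / (R0 I0 alpha_R)."""
    return 2.0 * U_3w / (R0 * I0 * alpha_R)

# Example: a 10 microvolt third harmonic corresponds to an oscillation of about 0.13 K.
print(delta_T_2w(10e-6))
```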
Finally, it was observed that the measured phase did not vary significantly, with respect to the measurement error, whatever the thermal conductivity of the material. The heat transfer in the probe and in the investigated material, in both the out-of-contact and contact modes, is described in Fig. 3 using the thermal impedance formalism [START_REF] Maillet | Thermal quadrupoles: solving the heat equation through integral transforms[END_REF] . This contact-mode configuration assumes that the probe is located on either the fiber or the matrix and is only sensitive to the thermal properties of the contacting material; in other words, the probe is put in static contact on each material far enough from the interface. Denoting ω₂ = 2ω, the reduced sensitivities S_A(θ) = θ dA(ω₂)/dθ of the amplitude A(ω₂) of the average probe temperature T_p(ω₂), with respect to each parameter θ in {k_r,PyC, k_z,PyC, r₀, R_c}, have been calculated and are reported in Fig. 4. The sensitivities to k_r,PyC and k_z,PyC being exactly the same, these two parameters cannot be identified separately; therefore, the measurements achieved when the probe is in contact with the PyC only lead to the identification of its effective thermal conductivity k_PyC,eff. On the other hand, the ratio of the sensitivity functions to r₀ and R_c is constant; there is thus no way to identify these two parameters separately from frequency-dependent measurements.

III Scanning Thermal Microscopy in 3ω mode - heat transfer model

The thermal impedance of the investigated material m (either the fiber or the matrix, i.e., m = SiO₂ or PyC) can be expressed analytically as 25 :

Z_m(ω₂) = ( 2 / (π k_z,m r₀) ) ∫₀^∞ [J₁(x)]² / ( x √( x² a_r,m/a_z,m + j ω₂ r₀²/a_z,m ) ) dx,  with j² = -1,

where k_z,m is the thermal conductivity along the longitudinal axis (perpendicular to the investigated surface), a_r,m and a_z,m are the thermal diffusivities of the material along the radial and longitudinal axes, respectively, and J₁ is the Bessel function of the first kind of order 1. In order to simulate the probe temperature when the probe sweeps the surface of the composite at a given frequency, we used the analytical model derived by Lepoutre et al. [START_REF] Lepoutre | [END_REF] assuming semi-infinite domains on both sides of the interface. This assumption is realistic since the probe is only sensitive to the bulk thermal conductivity of the material at distances that do not exceed 5 to 6 times the contact radius r₀. In addition, in order to validate this assumption, we also performed calculations based on a finite element model that is not presented here; they confirm the reliability of the solutions obtained with the analytical model, which requires less computation time and can thus be implemented in an inverse procedure to estimate the sought parameters. We assume here that the probe moves along the radial direction r, passing from the fiber to the matrix through the interface. The reduced sensitivity functions S_A(θ, r) = θ dA(r)/dθ of the amplitude to the parameters θ in {k_r,PyC, k_z,PyC, r₀, R_c, TBR} at the frequency 1125 Hz, for r varying from -0.5 to 0.5 µm with the interface at r = 0, are represented in Fig. 5a. The sensitivity functions with respect to r₀ and R_c are, as for the frequency behaviour, linearly dependent. However, as revealed in Fig. 5b, the parameters k_r,PyC, r₀ and TBR can be identified, since the ratios of the associated sensitivity functions are not constant along r. Whatever the experimental configuration of the probe, static or dynamic, the sought properties are identified by minimizing the quadratic gap J = Σ_r [ A_p(r, ω₂) - A_meas(r, ω₂) ]²; a simplified numerical sketch of this forward-model and identification chain is given below.
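As an illustration of how the forward model and the identification step can be chained, the Python sketch below evaluates a simplified, isotropic version of the Z_m integral by numerical quadrature and identifies an effective conductivity by minimizing the quadratic gap J between measured and modelled amplitudes. The probe impedance Z_p, the contact resistance R_c and the two-material interface model of Lepoutre et al. are deliberately left out; the scale factor, bounds and the use of |Z_m| as the model amplitude are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import j1

R0_CONTACT = 100e-9               # contact radius r0 (m), from the step-sample calibration
RHO_PYC, CP_PYC = 2110.0, 748.0   # density and specific heat of ReL PyC quoted in the text

def z_medium(omega2, k, rho=RHO_PYC, cp=CP_PYC, r0=R0_CONTACT):
    """Thermal impedance of a semi-infinite *isotropic* medium heated over a disk of
    radius r0 (simplified version of the Z_m integral with a_r = a_z = k/(rho*cp))."""
    a = k / (rho * cp)
    def integrand(x, part):
        val = j1(x) ** 2 / (x * np.sqrt(x ** 2 + 1j * omega2 * r0 ** 2 / a))
        return val.real if part == "re" else val.imag
    re = quad(integrand, 1e-9, 200.0, args=("re",), limit=400)[0]
    im = quad(integrand, 1e-9, 200.0, args=("im",), limit=400)[0]
    return 2.0 / (np.pi * k * r0) * (re + 1j * im)

def identify_k(freqs_hz, measured_amplitude, scale=1.0):
    """Least-squares identification of an effective conductivity k (W/m/K), minimizing
    the quadratic gap J between measured and modelled amplitudes (|Z_m| used here,
    up to a scale factor, as a stand-in for the full probe/contact model of Fig. 3)."""
    def gap(k):
        model = np.array([abs(z_medium(2.0 * 2.0 * np.pi * f, k)) for f in freqs_hz])
        return np.sum((scale * model - measured_amplitude) ** 2)
    return minimize_scalar(gap, bounds=(1.0, 100.0), method="bounded").x
```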
IV Experimental results

First, we estimated the contact radius r₀ at the probe/surface interface using a calibrated "step" sample [START_REF] Puyoo | [END_REF] that consists of a 100 nm-thick SiO₂ step deposited on a Si substrate. We found r₀ = 100 ± 10 nm with a constant force of 10 nN applied to the cantilever. This value of the contact radius is assumed to hold when the probe is in contact with either the glass fiber or the PyC, since the same force (10 nN) is applied to the cantilever. We performed frequency-dependent measurements with the probe out of contact, in static contact at the center of the glass fiber, and in static contact on the PyC matrix far away from the fiber. Figure 6 shows the measured frequency-dependent amplitude (Fig. 6a) and phase (Fig. 6b) for these three configurations. As said previously, we find that the difference in phase between the conditions is very small, meaning that only the amplitude can be used. The out-of-contact measurements lead to the probe thermal impedance Z_p(ω₂). Then, since the silica thermal properties are known, we identified the thermal contact resistance at the interface between the SThM probe and the material from the amplitude measurements with the probe in static contact at the center of the fiber. Using the minimization technique and the model for the probe in static contact with the material, we found R_c = (7.83 ± 0.3) × 10⁻⁸ K.m².W⁻¹. The fit between the experimental data and the theoretical ones is very satisfactory, as presented in Fig. 6. Finally, using this value of R_c, we identified the effective thermal conductivity of the PyC from the amplitude measured with the probe in static contact with the PyC. We found k_PyC,eff = 20.8 ± 4.2 W.m⁻¹.K⁻¹, which again leads to a very satisfactory fit between the measurements and the simulation, as shown in Fig. 6. It must be emphasized that the standard deviation on the identified thermal conductivity is high (20% uncertainty), since a small variation in r₀ leads to a very large change in k_PyC,eff. We also noticed that the minimum of the quadratic gap J is obtained for r₀ = 110 nm; this value is thus in the range expected from the calibrated sample. We have, however, to mention that the value found for k_PyC,eff is at the detection limit (25 W.m⁻¹.K⁻¹) of the instrument, as said in section II. From the sweep across the fiber/matrix interface at 1125 Hz (Figs. 7 and 8, described below), the minimization yields TBR = (5 ± 1) × 10⁻⁸ K.m².W⁻¹, k_PyC,eff = 20.18 ± 0.12 W.m⁻¹.K⁻¹ and r₀ = 105 ± 7 nm. The calculated temperature along the scan line, using these identified parameters, is reported in Fig. 8 together with the measurements. We therefore retrieve well the values of r₀ and k_PyC,eff determined using the step sample and the frequency-identification procedure.

V Conclusion

The SThM method has been applied to a composite made of silica fibers embedded in a regenerative laminar pyrocarbon (ReL PyC) matrix. It has allowed obtaining values of the effective conductivity of this type of pyrocarbon, thereby completing the existing database obtained by TR on other types of PyC. The method has proved efficient in yielding effective values of the thermal conductivity. Unfortunately, it cannot give the individual elements of the conductivity tensor, and the uncertainty margin is also rather large. On the other hand, it allows identifying the thermal boundary resistance between the carbon matrix and the silica fibers. A main advance in the field of scanning thermal microscopy is that we implemented an inverse technique in order to identify simultaneously (i) the radius of the contact area between the probe and the sample, (ii) the thermal boundary resistance at the fiber/matrix interface and (iii) the effective thermal conductivity of the PyC.
Therefore, whereas it was shown that the frequency-dependent temperature at a point located on the surface could not lead to this simultaneous identification, it has been demonstrated in this paper that such an identification can be achieved from the spatial temperature variation at a given frequency. However, this obviously requires working with a heterogeneous surface where at least some of the materials are known in terms of their thermal conductivity (in the present case, the SiO₂ fiber). As measured by TR experiments [START_REF] Jumel | AIP Conf. Procs[END_REF][START_REF] Jumel | [END_REF][START_REF] Jumel | AIP Conf. Procs[END_REF] , the thermal conductivities of RL and SL PyC are respectively 66.7 and 20.4 W.m⁻¹.K⁻¹ (using the same heat capacity, and densities of 2120 kg.m⁻³ for RL [START_REF] Jumel | AIP Conf. Procs[END_REF] and 1930 kg.m⁻³ for SL [START_REF] Jumel | [END_REF] ). Actually, both RL and ReL have the same degree of textural anisotropy, as measured e.g. by polarized-light optical microscopy or by selected area electron diffraction in a transmission electron microscope, and only differ by the amount of in-plane defects, as measured by X-ray diffraction, neutron diffraction and Raman spectroscopy 29 , and confirmed by HRTEM image-based atomistic modeling 30 . On the other hand, SL has a lower anisotropy but a comparable, though somewhat smaller, amount of defects than ReL. We conclude here that the room-temperature conductivity is more sensitive to structural perfection than to textural arrangement. Indeed, both phonons and electrons, which are responsible for heat transfer in carbons, are scattered by the defects present in the planes. The value of the TBR is unexpectedly rather low. A possible reason for this low value is that the PyC finds itself in a state of compression around the fiber: as a matter of fact, no decohesion has been found between the fibers and the matrices. Another effect is that, on the carbon side, the conductivity is much larger parallel to the interface than perpendicular to it, therefore providing easy "escape routes" for heat around defects present at the interface. Therefore, the hypothesis of 1D transfer across the interface could be questioned. Finally, we also have to mention that the surface of the sample is not fully flat at the interface between the fiber and the PyC. This comes from the different mechanical properties of the two materials and their impact on the roughness after surface polishing. On the other hand, the fiber being already a poor thermal conductor, the sensitivity of the measured temperature to the TBR remains low. Additional measurements of the TBR between a carbon fiber and the PyC matrix are in progress. Further investigations are desirable in at least two directions. First, the SThM method should be improved in order to reduce its large degree of uncertainty and to obtain direction-dependent data. Second, measurements should be carried out on other pyrocarbons and fibers, in order to confirm the tendencies obtained here; measurements at higher temperatures are possible and would be highly interesting, since virtually no experimental data is available on these materials at elevated temperatures.

Figure 1. Microscopic image of the composite structure obtained using a Scanning Electron Microscope (SEM).
Silica fibers (5 µm in radius), shown in grey and surrounded by the PyC matrix in dark grey, are perpendicular to the surface, which was prepared by mechanical surfacing.

Figure 2. a) Image showing the silica fiber and the cylindrical arrangement of the graphitic sheets; b) SEM image at higher resolution confirming the concentric arrangement of the anisotropic texture; c) dark-field TEM image showing the anisotropic nature of the pyrocarbon; d) high-resolution 002 lattice-fringe TEM image of the pyrocarbon; the inset is a selected area electron diffraction diagram illustrating the high anisotropy through the low value of the 002 diffraction arc opening angle (OA = 27°).

The resulting temperature increase ΔT in the strip is composed of a continuous component (DC) and of a periodic one at 2ω, as ΔT = ΔT_DC + ΔT_2ω cos(2ωt + φ). This leads to changes of the strip electrical resistance as R = R₀(1 + α_R ΔT), where α_R is the thermal coefficient. The voltage drop between the two pads of the probe therefore contains components at ω, 2ω and 3ω. The third harmonic U_3ω is related to the transient contribution of the temperature change to the resistance, which leads to the relation between ΔT_2ω and U_3ω used in Sec. II.

Figure 3. Heat transfer model for the out-of-contact operation mode and the contact mode (considering the probe in contact with either the glass fiber or the PyC matrix). T_p^OFC is the out-of-contact probe temperature that is used to identify the probe thermal impedance Z_p(ω₂). The current generator represents the heat source, localized within the Pd strip of the probe.

The average temperature T_p(ω₂) of the Pd strip in the contact mode is related to the total heat flux P(ω₂) through the impedance network of Fig. 3. Expressions for Z_p and Z_m are required to calculate T_p. We have chosen to express the thermal impedance of the probe as Z_p(ω₂) = T_p^OFC(ω₂)/P(ω₂), whose amplitude and phase are those measured in the out-of-contact mode, while the heat transfer model within the investigated material leads to the analytical expression of Z_m given in Sec. III 25 . The theoretical amplitude and phase are respectively A(ω₂) = |T_p(ω₂)| and φ_p(ω₂) = arg(T_p(ω₂)).

Figure 4. Reduced sensitivities of the amplitude A(ω₂) to the axial and radial thermal conductivities (k_r,PyC, k_z,PyC) of the PyC, the contact radius r₀ and the thermal contact resistance R_c at the interface between the probe and the investigated surface, as a function of frequency. Ratios between sensitivity functions are also presented.

Figure 5. a/ Reduced sensitivity of the amplitude A(r) = |T_p(r, ω₂)| with respect to R_c, r₀, k_r,PyC, k_z,PyC and TBR at 1125 Hz, along a path crossing the fiber/matrix interface; b/ reduced sensitivity ratios along the same path.

Figure 6. Measured amplitude (a) and phase (b) vs. ω₂. Plain lines are simulations using the identified R_c when the probe is in contact with the silica fiber (green line) and the identified k_eff of the PyC when the probe is in contact with the PyC (red line).

We performed an SThM sweep of the specimen at 1125 Hz with a current of 750 µA under atmospheric conditions.
The topography, amplitude and phase images are recorded during the sweep. The experiments were first performed over an image edge size of 50 micrometers, with 256 measurement points per line at a speed of 0.25 lines per second (Fig. 7a). Then, a sweep over a smaller domain (see Fig. 7a) was performed and is reported in Fig. 7b. The amplitude along the scan line shown in Fig. 7b, as the probe moves from the fiber to the PyC, is represented in Fig. 8. Using the minimization technique described previously, we identified from this profile the values of TBR, k_PyC,eff and r₀ quoted above (Sec. IV).

Figure 7. a/ The 50×50 µm² images obtained under atmospheric conditions using Scanning Thermal Microscopy at 1125 Hz, showing the topography, amplitude and phase from left to right, respectively. b/ Sweep over the smaller domain marked in a/.

Figure 8. Green circles: measured probe temperature along the scan line (see Fig. 7), and simulated probe temperature values obtained with the identified values of TBR, r₀ and k_PyC,eff and with two different values of TBR (in K.m².W⁻¹), in order to show the sensitivity of the calculated temperature to this parameter.

Acknowledgement

This work has been funded by Conseil Régional d'Aquitaine and the CNRS 102758 project with Epsilon Engineering Company. The observed sample was produced during the execution of the ANR-funded BLAN-10-0929 "PyroMaN" project. The authors thank the late Patrick Weisbecker (1973-2015) for the TEM images.
25,608
[ "8801" ]
[ "164351", "17163", "19229" ]
01761539
en
[ "info" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01761539/file/1801.00055.pdf
Aliaksandr Siarohin Enver Sangineto Stéphane Lathuilière email: [email protected] Nicu Sebe email: [email protected] Deformable GANs for Pose-based Human Image Generation In this paper we address the problem of generating person images conditioned on a given pose. Specifically, given an image of a person and a target pose, we synthesize a new image of that person in the novel pose. In order to deal with pixel-to-pixel misalignments caused by the pose differences, we introduce deformable skip connections in the generator of our Generative Adversarial Network. Moreover, a nearest-neighbour loss is proposed instead of the common L 1 and L 2 losses in order to match the details of the generated image with the target image. We test our approach using photos of persons in different poses and we compare our method with previous work in this area showing state-of-the-art results in two benchmarks. Our method can be applied to the wider field of deformable object generation, provided that the pose of the articulated object can be extracted using a keypoint detector. Introduction In this paper we deal with the problem of generating images where the foreground object changes because of a viewpoint variation or a deformable motion, such as the articulated human body. Specifically, inspired by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF], our goal is to generate a human image conditioned on two different variables: [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] the appearance of a specific person in a given image and (2) the pose of the same person in another image. The task our networks need to solve is to preserve the appearance details (e.g., the texture) contained in the first variable while performing a deformation on the structure of the foreground object according to the second variable. We focus on the human body which is an articulated "object", important for many applications (e.g., computer-graphics based manipulations or re-identification dataset synthesis). However, our approach can be used with other deformable objects such as human faces or animal bodies, provided that a significant number of keypoints can be automatically extracted from the object of interest in order to represent its pose. Pose-based human-being image generation is motivated by the interest in synthesizing videos [START_REF] Walker | The pose knows: Video forecasting by generating pose futures[END_REF] with non-trivial human movements or in generating rare poses for human pose estimation [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] or re-identification [START_REF] Zheng | Unlabeled samples generated by GAN improve the person re-identification baseline in vitro[END_REF] training datasets. However, most of the recently proposed, deepnetwork based generative approaches, such as Generative Adversarial Networks (GANs) [START_REF] Goodfellow | Generative adversarial nets[END_REF] or Variational Autoencoders (VAEs) [START_REF] Kingma | Auto-encoding variational bayes[END_REF] do not explicitly deal with the problem of articulated-object generation. Common conditional methods (e.g., conditional GANs or conditional VAEs) can synthesize images whose appearances depend on some conditioning variables (e.g., a label or another image). For instance, Isola et al. 
[START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] recently proposed an "image-toimage translation" framework, in which an input image x is transformed into a second image y represented in another "channel" (see Fig. 1a). However, most of these methods have problems when dealing with large spatial deformations between the conditioning and the target image. For instance, the U-Net architecture used by Isola et al. [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] is based on skip connections which help preserving local information between x and y. Specifically, skip connections are used to copy and then concatenate the feature maps of the generator "encoder" (where information is downsam-pled using convolutional layers) to the generator "decoder" (containing the upconvolutional layers). However, the assumption used in [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] is that x and y are roughly aligned with each other and they represent the same underlying structure. This assumption is violated when the foreground object in y undergoes to large spatial deformations with respect to x (see Fig. 1b). As shown in [START_REF] Ma | Pose guided person image generation[END_REF], skip connections cannot reliably cope with misalignments between the two poses. Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] propose to alleviate this problem using a two-stage generation approach. In the first stage a U-Net generator is trained using a masked L 1 loss in order to produce an intermediate image conditioned on the target pose. In the second stage, a second U-Net based generator is trained using also an adversarial loss in order to generate an appearance difference map which brings the intermediate image closer to the appearance of the conditioning image. In contrast, the GAN-based method we propose in this paper is end-to-end trained by explicitly taking into account pose-related spatial deformations. More specifically, we propose deformable skip connections which "move" local information according to the structural deformations represented in the conditioning variables. These layers are used in our U-Net based generator. In order to move information according to a specific spatial deformation, we decompose the overall deformation by means of a set of local affine transformations involving subsets of joints, then we deform the convolutional feature maps of the encoder according to these transformations and we use common skip connections to transfer the transformed tensors to the decoder's fusion layers. Moreover, we also propose to use a nearest-neighbour loss as a replacement of common pixelto-pixel losses (such as, e.g., L 1 or L 2 losses) commonly used in conditional generative approaches. This loss proved to be helpful in generating local information (e.g., texture) similar to the target image which is not penalized because of small spatial misalignments. We test our approach using the benchmarks and the evaluation protocols proposed in [START_REF] Ma | Pose guided person image generation[END_REF] obtaining higher qualitative and quantitative results in all the datasets. Although tested on the specific human-body problem, our approach makes few human-related assumptions and can be easily extended to other domains involving the generation of highly deformable objects. Our code and our trained models are publicly available 1 . 
Related work Most common deep-network-based approaches for visual content generation can be categorized as either Variational Autoencoders (VAEs) [START_REF] Kingma | Auto-encoding variational bayes[END_REF] or Generative Adversarial Networks (GANs) [START_REF] Goodfellow | Generative adversarial nets[END_REF]. VAEs are based on probabilistic graphical models and are trained by maximizing a lower 1 https://github.com/AliaksandrSiarohin/pose-gan bound of the corresponding data likelihood. GANs are based on two networks, a generator and a discriminator, which are trained simultaneously such that the generator tries to "fool" the discriminator and the discriminator learns how to distinguish between real and fake images. Isola et al. [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] propose a conditional GAN framework for image-to-image translation problems, where a given scene representation is "translated" into another representation. The main assumption behind this framework is that there exits a spatial correspondence between the low-level information of the conditioning and the output image. VAEs and GANs are combined in [START_REF] Zhao | Multi-view image generation from a single-view[END_REF] to generate realistic-looking multi-view clothes images from a single-view input image. The target view is filled to the model via a viewpoint label as front or left side and a two-stage approach is adopted: pose integration and image refinement. Adopting a similar pipeline, Lassner et al. [START_REF] Lassner | A generative model of people in clothing[END_REF] generate images of people with different clothes in a given pose. This approach is based on a costly annotation (fine-grained segmentation with 18 clothing labels) and a complex 3D pose representation. Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] propose a more general approach which allows to synthesize person images in any arbitrary pose. Similarly to our proposal, the input of their model is a conditioning image of the person and a target new pose defined by 18 joint locations. The target pose is described by means of binary maps where small circles represent the joint locations. Similarly to [START_REF] Lassner | A generative model of people in clothing[END_REF][START_REF] Zhao | Multi-view image generation from a single-view[END_REF], the generation process is split in two different stages: pose generation and texture refinement. In contrast, in this paper we show that a single-stage approach, trained end-to-end, can be used for the same task obtaining higher qualitative results. Jaderberg et al. [START_REF] Jaderberg | Spatial transformer networks[END_REF] propose a spatial transformer layer, which learns how to transform a feature map in a "canonical" view, conditioned on the feature map itself. However only a global, parametric transformation can be learned (e.g., a global affine transformation), while in this paper we deal with non-parametric deformations of articulated objects which cannot be described by means of a unique global affine transformation. Generally speaking, U-Net based architectures are frequently adopted for pose-based person-image generation tasks [START_REF] Lassner | A generative model of people in clothing[END_REF][START_REF] Ma | Pose guided person image generation[END_REF][START_REF] Walker | The pose knows: Video forecasting by generating pose futures[END_REF][START_REF] Zhao | Multi-view image generation from a single-view[END_REF]. 
However, common U-Net skip connections are not well-suited to large spatial deformations because local information in the input and in the output images is not aligned (Fig. 1). In contrast, we propose deformable skip connections to deal with this misalignment problem and "shuttle" local information from the encoder to the decoder driven by the specific pose difference. In this way, differently from previous work, we are able to simultaneously generate the overall pose and the texture-level refinement. Finally, our nearest-neighbour loss is similar to the perceptual loss proposed in [START_REF] Johnson | Perceptual losses for real-time style transfer and super-resolution[END_REF] and to the style-transfer spatial-analogy approach recently proposed in [START_REF] Liao | Visual attribute transfer through deep image analogy[END_REF]. However, the perceptual loss, based on an element-by-element difference computed in the feature map of an external classifier [START_REF] Johnson | Perceptual losses for real-time style transfer and super-resolution[END_REF], does not take into account spatial misalignments. On the other hand, the patch-based similarity, adopted in [START_REF] Liao | Visual attribute transfer through deep image analogy[END_REF] to compute a dense feature correspondence, is very computationally expensive and is not used as a loss.

The network architectures

In this section we describe the architectures of our generator (G) and discriminator (D) and the proposed deformable skip connections. We first introduce some notation. At testing time our task, similarly to [START_REF] Ma | Pose guided person image generation[END_REF], consists in generating an image x showing a person whose appearance (e.g., clothes, etc.) is similar to an input, conditioning image x_a but with a body pose similar to P(x_b), where x_b is a different image of the same person and P(x) = (p_1, ..., p_k) is a sequence of k 2D points describing the locations of the human-body joints in x. In order to allow a fair comparison with [START_REF] Ma | Pose guided person image generation[END_REF], we use the same number of joints (k = 18) and we extract P() using the same Human Pose Estimator (HPE) [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] used in [START_REF] Ma | Pose guided person image generation[END_REF]. Note that this HPE is used both at testing and at training time, meaning that we do not use manually-annotated poses and the extracted joint locations may contain localization errors, missed detections or false positives. At training time we use a dataset X = {(x_a^(i), x_b^(i))}, i = 1, ..., N, containing pairs of conditioning-target images of the same person in different poses. For each pair (x_a, x_b), a conditioning pose P(x_a) and a target pose P(x_b) are extracted from the corresponding images and represented using two tensors H_a = H(P(x_a)) and H_b = H(P(x_b)), each composed of k heat maps, where H_j (1 ≤ j ≤ k) is a 2D matrix of the same dimension as the original image. If p_j is the j-th joint location, then:

H_j(p) = exp( -‖p - p_j‖ / σ² ),    (1)

with σ = 6 pixels (chosen with cross-validation). Using blurring instead of a binary map is useful to provide widespread information about the location p_j.
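As an illustration of Eq. 1, the following Python sketch builds the k-channel heat-map tensor H(P(x)) from a list of detected joint coordinates; names are illustrative, and joints missed by the HPE are assumed to be passed as None, yielding an all-zero map (the same convention used later for empty body regions).

```python
import numpy as np

def joint_heatmaps(joints, height, width, sigma=6.0):
    """Build a (height, width, k) tensor of joint heat maps (Eq. 1).

    joints: list of k 2D joint locations (row, col), or None when a joint
            was not detected by the pose estimator.
    """
    rows, cols = np.mgrid[0:height, 0:width]
    maps = np.zeros((height, width, len(joints)), dtype=np.float32)
    for j, p in enumerate(joints):
        if p is None:                       # missed detection -> empty map
            continue
        dist = np.sqrt((rows - p[0]) ** 2 + (cols - p[1]) ** 2)
        maps[..., j] = np.exp(-dist / sigma ** 2)
    return maps

# Example: two joints on a 128x64 image (Market-1501 resolution).
H = joint_heatmaps([(30, 32), None], 128, 64)
```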
The generator G is fed with: (1) a noise vector z, drawn from a noise distribution Z and implicitly provided using dropout [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] and (2) the triplet (x a , H a , H b ). Note that, at testing time, the target pose is known, thus H(P (x b )) can be computed. Note also that the joint locations in x a and H a are spatially aligned (by construction), while in H b they are different. Hence, differently from [START_REF] Ma | Pose guided person image generation[END_REF][START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], H b is not concatenated with the other input tensors. Indeed the convolutional-layer units in the encoder part of G have a small receptive field which cannot capture large spatial displacements. For instance, a large movement of a body limb in x b with respect to x a , is represented in different locations in x a and H b which may be too far apart from each other to be captured by the receptive field of the convolutional units. This is emphasized in the first layers of the encoder, which represent low-level information. Therefore, the convolutional filters cannot simultaneously process texture-level information (from x a ) and the corresponding pose information (from H b ). For this reason we independently process x a and H a from H b in the encoder. Specifically, x a and H a are concatenated and processed using a convolutional stream of the encoder while H b is processed by means of a second convolutional stream, without sharing the weights (Fig. 2). The feature maps of the first stream are then fused with the layerspecific feature maps of the second stream in the decoder after a pose-driven spatial deformation performed by our deformable skip connections (see Sec. 3.1). Our discriminator network is based on the conditional, fully-convolutional discriminator proposed by Isola et al. [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF]. In our case, D takes as input 4 tensors: (x a , H a , y, H b ), where either y = x b or y = x = G(z, x a , H a , H b ) (see Fig. 2). These four tensors are concatenated and then given as input to D. The discriminator's output is a scalar value indicating its confidence on the fact that y is a real image. Deformable skip connections As mentioned above and similarly to [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], the goal of the deformable skip connections is to "shuttle" local information from the encoder to the decoder part of G. The local information to be transferred is, generally speaking, contained in a tensor F , which represents the feature map activations of a given convolutional layer of the encoder. However, differently from [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], we need to "pick" the information to shuttle taking into account the object-shape deformation which is described by the difference between P (x a ) and P (x b ). To do so, we decompose the global deformation in a set of local affine transformations, defined using subsets of joints in P (x a ) and P (x b ). Using these affine transformations and local masks constructed using the specific joints, we deform the content of F and then we use common skip connections to copy the transformed tensor and concatenate it with the corresponding tensor in the destination layer (see Fig. 2). 
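To make the input arrangement of G explicit, here is a minimal PyTorch-style sketch of the two-stream encoder: the appearance stream receives the concatenation of x_a and H_a, the pose stream receives H_b alone, and the feature maps of the appearance stream are the tensors F that the deformable skip connections later warp (a sketch of that warping is given further below). Layer widths, depths and the omission of the noise z are illustrative simplifications, not the exact architecture.

```python
import torch
import torch.nn as nn

class TwoStreamEncoder(nn.Module):
    """Illustrative two-stream encoder: (x_a, H_a) and H_b are processed separately."""
    def __init__(self, img_ch=3, k=18, width=64):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                                 nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))
        self.app_stream  = nn.ModuleList([block(img_ch + k, width), block(width, width * 2)])
        self.pose_stream = nn.ModuleList([block(k, width),          block(width, width * 2)])

    def forward(self, x_a, H_a, H_b):
        app  = torch.cat([x_a, H_a], dim=1)    # appearance stream input
        pose = H_b                              # target-pose stream input
        app_feats, pose_feats = [], []
        for la, lp in zip(self.app_stream, self.pose_stream):
            app, pose = la(app), lp(pose)
            app_feats.append(app)    # tensors F, later deformed by the deformable skips
            pose_feats.append(pose)  # tensors shuttled by ordinary skip connections
        return app_feats, pose_feats
```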
Figure 2: A schematic representation of our network architectures. For the sake of clarity, in this figure we depict P(·) as a skeleton and each tensor H as the average of its component matrices H_j (1 ≤ j ≤ k). The white rectangles in the decoder represent the feature maps directly obtained using up-convolutional filters applied to the previous-layer maps. The reddish rectangles represent the feature maps "shuttled" by the skip connections from the H_b stream. Finally, blueish rectangles represent the deformed tensors d(F) "shuttled" by the deformable skip connections from the (x_a, H_a) stream.

Below we describe the whole pipeline in more detail.

Decomposing an articulated body into a set of rigid sub-parts. The human body is an articulated "object" which can be roughly decomposed into a set of rigid sub-parts. We chose 10 sub-parts: the head, the torso, the left/right upper/lower arm and the left/right upper/lower leg. Each of them corresponds to a subset of the 18 joints defined by the HPE [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] we use for extracting P(). Using these joint locations we can define rectangular regions which enclose the specific body part. In the case of the head, the region is simply chosen to be the axis-aligned rectangle enclosing all the corresponding joints. For the torso, which is the largest area, we use a region which includes the whole image, in such a way as to shuttle texture information for the background pixels. Concerning the body limbs, each limb corresponds to only 2 joints. In this case we define a region to be a rotated rectangle whose major axis (r_1) corresponds to the line between these two joints, while the minor axis (r_2) is orthogonal to r_1 and has a length equal to one third of the mean of the torso's diagonals (this value is used for all the limbs). In Fig. 3 we show an example.

Figure 3: For each specific body part, an affine transformation f_h is computed. This transformation is used to "move" the feature-map content corresponding to that body part.

Let R_h^a = {p_1, ..., p_4} be the set of the 4 rectangle corners in x_a defining the h-th body region (1 ≤ h ≤ 10). Note that these 4 corner points are not joint locations. Using R_h^a we can compute a binary mask M_h(p) which is zero everywhere except at those points p lying inside R_h^a. Moreover, let R_h^b = {q_1, ..., q_4} be the corresponding rectangular region in x_b. Matching the points in R_h^a with the corresponding points in R_h^b we can compute the parameters of a body-part-specific affine transformation (see below). In either x_a or x_b, some of the body regions can be occluded, truncated by the image borders or simply missed by the HPE. In this case we leave the corresponding region R_h empty and the h-th affine transform is not computed (see below). Note that our body-region definition is the only human-specific part of the proposed approach. However, similar regions can easily be defined using the joints of other articulated objects such as those representing an animal body or a human face.

Computing a set of affine transformations. During the forward pass (i.e., both at training and at testing time) we decompose the global deformation of the conditioning pose with respect to the target pose by means of a set of local affine transformations, one per body region. Specifically, given R_h^a in x_a and R_h^b in x_b (see above), we compute the 6 parameters k_h of an affine transformation f_h(·; k_h) using Least Squares Error:
min_{k_h} Σ_{p_j ∈ R_h^a, q_j ∈ R_h^b} ‖q_j - f_h(p_j; k_h)‖²₂    (2)

The parameter vector k_h is computed using the original image resolution of x_a and x_b and then adapted to the specific resolution of each involved feature map F. Similarly, we compute scaled versions of each M_h. In case either R_h^a or R_h^b is empty (i.e., when any of the specific body-region joints has not been detected by the HPE, see above), we simply set M_h to be a matrix with all elements equal to 0 (f_h is not computed). Note that (f_h(), M_h) and their lower-resolution variants need to be computed only once per pair of real images (x_a, x_b) ∈ X and, in the case of the training phase, this can be done before starting to train the networks (although in our current implementation it is done on the fly).

Combining affine transformations to approximate the object deformation. Once (f_h(), M_h), h = 1, ..., 10, are computed for the specific spatial resolution of a given tensor F, the latter can be transformed in order to approximate the global pose-dependent deformation. Specifically, we first compute for each h:

F_h = f_h(F ⊙ M_h),    (3)

where ⊙ is a point-wise multiplication and f_h(F(p)) is used to "move" all the channel values of F corresponding to point p. Finally, we merge the resulting tensors using:

d(F(p, c)) = max_{h=1,...,10} F_h(p, c),    (4)

where c is a specific channel. The rationale behind Eq. 4 is that, when two body regions partially overlap each other, the final deformed tensor d(F) is obtained by picking the maximum-activation values. Preliminary experiments performed using average pooling led to slightly worse results.
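The following Python sketch illustrates Eqs. 2-4: the 6 affine parameters k_h are estimated by least squares from the matched region corners, each masked copy of the feature map F is warped with the corresponding transformation, and the per-region results are merged with a point-wise maximum. scipy.ndimage.affine_transform is used here as a stand-in for the warping applied inside the network; function names, the (row, col) corner convention and the bilinear interpolation order are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import affine_transform

def fit_affine(src_corners, dst_corners):
    """Least-squares estimate of the 6 affine parameters mapping src -> dst (Eq. 2).
    Corners are given in array (row, col) coordinates."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_corners, dst_corners):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    k, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return k  # [a, b, c, d, e, f] with u = a x + b y + c, v = d x + e y + f

def deform_feature_map(F, masks, corner_pairs):
    """Warp each masked copy of F (H, W, C) and merge with a point-wise max (Eqs. 3-4)."""
    out = np.zeros_like(F)
    for M, (src, dst) in zip(masks, corner_pairs):
        if src is None or dst is None:          # empty region: f_h is not computed
            continue
        a, b_, c, d, e, f = fit_affine(src, dst)
        # affine_transform maps output coordinates to input coordinates, so use the inverse.
        fwd = np.array([[a, b_], [d, e]])
        inv = np.linalg.inv(fwd)
        offset = -inv @ np.array([c, f])
        warped = np.stack(
            [affine_transform(F[..., ch] * M, inv, offset=offset, order=1)
             for ch in range(F.shape[-1])], axis=-1)
        out = np.maximum(out, warped)           # Eq. 4: max over body regions
    return out
```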
Training. D and G are trained using a combination of a standard conditional adversarial loss L_cGAN and our proposed nearest-neighbour loss L_NN. Specifically, in our case L_cGAN is given by:

L_cGAN(G, D) = E_{(x_a,x_b) ∈ X} [ log D(x_a, H_a, x_b, H_b) ] + E_{(x_a,x_b) ∈ X, z ∈ Z} [ log(1 - D(x_a, H_a, x, H_b)) ],    (5)

where x = G(z, x_a, H_a, H_b). Previous works on conditional GANs combine the adversarial loss with either an L_2 [START_REF] Pathak | Context encoders: Feature learning by inpainting[END_REF] or an L_1-based loss [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF][START_REF] Ma | Pose guided person image generation[END_REF] which is used only for G. For instance, the L_1 distance computes a pixel-to-pixel difference between the generated and the real image, which, in our case, is:

L_1(x, x_b) = ‖x - x_b‖_1.    (6)

However, a well-known problem with the use of L_1 and L_2 is the production of blurred images. We hypothesize that this is also due to the inability of these losses to tolerate small spatial misalignments between x and x_b. For instance, suppose that x, produced by G, is visually plausible and semantically similar to x_b, but that the texture details on the clothes of the person in the two compared images are not pixel-to-pixel aligned. Both the L_1 and the L_2 loss will penalize this inexact pixel-level alignment, although it is not semantically important from the human point of view. Note that these misalignments do not depend on the global deformation between x_a and x_b, because x is supposed to have the same pose as x_b. In order to alleviate this problem, we propose to use a nearest-neighbour loss L_NN based on the following definition of image difference:

L_NN(x, x_b) = Σ_{p ∈ x} min_{q ∈ N(p)} ‖g(x(p)) - g(x_b(q))‖_1,    (7)

where N(p) is an n × n local neighbourhood of point p (we use 5 × 5 and 3 × 3 neighbourhoods for the DeepFashion and the Market-1501 dataset, respectively, see Sec. 6). g(x(p)) is a vectorial representation of a patch around point p in image x, obtained using convolutional filters (see below for more details). Note that L_NN() is not a metric because it is not symmetric. In order to efficiently compute Eq. 7, we compare patches in x and x_b using their representation (g()) in a convolutional map of an externally trained network. In more detail, we use VGG-19 [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF], trained on ImageNet, and specifically its second convolutional layer (called conv_1_2). The first two convolutional maps in VGG-19 (conv_1_1 and conv_1_2) are both obtained using a convolutional stride equal to 1. For this reason, the feature map (C_x) of an image x in conv_1_2 has the same resolution as the original image x. Exploiting this fact, we compute the nearest-neighbour field directly on conv_1_2, without losing spatial precision. Hence, we define g(x(p)) = C_x(p), which corresponds to the vector of all the channel values of C_x at the spatial position p. C_x(p) has a receptive field of 5 × 5 in x, thus effectively representing a patch of dimension 5 × 5 using a cascade of two convolutional filters. Using C_x, Eq. 7 becomes:

L_NN(x, x_b) = Σ_{p ∈ x} min_{q ∈ N(p)} ‖C_x(p) - C_{x_b}(q)‖_1.    (8)

In Sec. A, we show how Eq. 8 can be efficiently implemented using GPU-based parallel computing; a compact sketch of this computation is also given at the end of this section. The final L_NN-based loss is:

L_NN(G) = E_{(x_a,x_b) ∈ X, z ∈ Z} [ L_NN(x, x_b) ].    (9)

Combining Eq. 5 and Eq. 9 we obtain our objective:

G* = arg min_G max_D L_cGAN(G, D) + λ L_NN(G),    (10)

with λ = 0.01 used in all our experiments. The value of λ is small because it acts as a normalization factor in Eq. 8 with respect to the number of channels in C_x and the number of pixels in x (more details in Sec. A).

Implementation details

We train G and D for 90k iterations, with the Adam optimizer (learning rate 2 × 10⁻⁴, β₁ = 0.5, β₂ = 0.999). Following [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] we use instance normalization [START_REF] Ulyanov | Instance normalization: The missing ingredient for fast stylization[END_REF]. In the following we denote by: (1) C^s_m a convolution-ReLU layer with m filters and stride s, (2) CN^s_m the same as C^s_m with instance normalization before the ReLU and (3) CD^s_m the same as CN^s_m with the addition of dropout at rate 50%. Differently from [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], we use dropout only at training time. The encoder part of the generator is given by two streams (Fig. 2), each composed of a sequence of CN^s_m blocks. The discriminator architecture is: C^2_64 - C^2_128 - C^2_256 - C^2_512 - C^2_1, where the ReLU of the last layer is replaced with a sigmoid. The generator for the DeepFashion dataset has one additional convolution block (CN^2_512) both in the encoder and in the decoder, because images in this dataset have a higher resolution.
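To make Eqs. 7-8 and the shifted-tensor scheme detailed in Appendix A concrete, the following PyTorch-style sketch computes L_NN between two feature maps c_x and c_xb of shape (B, C, H, W), assumed to be produced by a frozen external network such as the conv_1_2 layer of VGG-19. The padding value and the batch-mean reduction are illustrative choices.

```python
import torch
import torch.nn.functional as F

def nn_loss(c_x, c_xb, n=3):
    """Nearest-neighbour loss (Eq. 8) between feature maps of shape (B, C, H, W).

    For every position p of c_x, the best match in c_xb is searched in an
    n x n neighbourhood of p using n*n shifted copies of c_xb (Appendix A).
    """
    assert n % 2 == 1
    half = n // 2
    # Pad with a large value so border shifts cannot be selected as best matches.
    padded = F.pad(c_xb, (half, half, half, half), value=1e10)
    best = None
    for di in range(n):
        for dj in range(n):
            shifted = padded[:, :, di:di + c_xb.shape[2], dj:dj + c_xb.shape[3]]
            # L1 difference summed over channels (Eqs. 11-12).
            s = (c_x - shifted).abs().sum(dim=1)
            best = s if best is None else torch.minimum(best, s)   # Eq. 14
    return best.sum(dim=(1, 2)).mean()                             # Eq. 15 (batch mean)
```

During training, c_x would be computed from the generated image x and c_xb from the target x_b, both passed through the same frozen feature extractor; the resulting value is then weighted by λ = 0.01 and added to the adversarial loss as in Eq. 10.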
Experiments Datasets The person re-identification Market-1501 dataset [START_REF] Zheng | Scalable person re-identification: A benchmark[END_REF] contains 32,668 images of 1,501 persons captured from 6 different surveillance cameras. This dataset is challenging because of the low-resolution images (128×64) and the high diversity in pose, illumination, background and viewpoint. To train our model, we need pairs of images of the same person in two different poses. As this dataset is relatively noisy, we first automatically remove those images in which no human body is detected using the HPE, leading to 263,631 training pairs. For testing, following [START_REF] Ma | Pose guided person image generation[END_REF], we randomly select 12,000 pairs. No person is in common between the training and the test split. The DeepFashion dataset (In-shop Clothes Retrieval Benchmark) [START_REF] Liu | Deepfashion: Powering robust clothes recognition and retrieval with rich annotations[END_REF] is composed of 52,712 clothes images, matched each other in order to form 200,000 pairs of identical clothes with two different poses and/or scales of the persons wearing these clothes. The images have a resolution of 256×256 pixels. Following the training/test split adopted in [START_REF] Ma | Pose guided person image generation[END_REF], we create pairs of images, each pair depicting the same person with identical clothes but in different poses. After removing those images in which the HPE does not detect any human body, we finally collect 89,262 pairs for training and 12,000 pairs for testing. Metrics Evaluation in the context of generation tasks is a problem in itself. In our experiments we adopt a redundancy of metrics and a user study based on human judgments. Following [START_REF] Ma | Pose guided person image generation[END_REF], we use Structural Similarity (SSIM) [START_REF] Wang | Image quality assessment: from error visibility to structural similarity[END_REF], Inception Score (IS) [START_REF] Salimans | Improved techniques for training gans[END_REF] and their corresponding masked versions mask-SSIM and mask-IS [START_REF] Ma | Pose guided person image generation[END_REF]. The latter are obtained by masking-out the image background and the rationale behind this is that, since no background information of the target image is input to G, the network cannot guess what the target background looks like. Note that the evaluation masks we use to compute both the mask-IS and the mask-SSIM values do not correspond to the masks ({M h }) we use for training. The evaluation masks have been built following the procedure proposed in [START_REF] Ma | Pose guided person image generation[END_REF] and adopted in that work for both training and evaluation. Consequently, the maskbased metrics may be biased in favor of their method. Moreover, we observe that the IS metrics [START_REF] Salimans | Improved techniques for training gans[END_REF], based on the entropy computed over the classification neurons of an external classifier [START_REF] Szegedy | Rethinking the inception architecture for computer vision[END_REF], is not very suitable for domains with only one object class. For this reason we propose to use an additional metrics that we call Detection Score (DS). 
Similarly to the classification-based metrics (FCN-score) used in [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], DS is based on the detection outcome of the state-of-theart object detector SSD [START_REF] Liu | SSD: Single shot multibox detector[END_REF], trained on Pascal VOC 07 [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2007[END_REF] (and not fine-tuned on our datasets). At testing time, we use the person-class detection scores of SSD computed on each generated image x. DS(x) corresponds to the maximumscore box of SSD on x and the final DS value is computed by averaging the scores of all the generated images. In other words, DS measures the confidence of a person detector in the presence of a person in the image. Given the high accuracy of SSD in the challenging Pascal VOC 07 dataset [START_REF] Liu | SSD: Single shot multibox detector[END_REF], we believe that it can be used as a good measure of how much realistic (person-like) is a generated image. Finally, in our tables we also include the value of each metrics computed using the real images of the test set. Since these values are computed on real data, they can be considered as a sort of an upper-bound to the results a generator can obtain. However, these values are not actual upper bounds in the strict sense: for instance the DS metrics on the real datasets is not 1 because of SSD failures. Comparison with previous work In Tab. 1 we compare our method with [START_REF] Ma | Pose guided person image generation[END_REF]. Note that there are no other works to compare with on this task yet. The mask-based metrics are not reported in [START_REF] Ma | Pose guided person image generation[END_REF] for the DeepFashion dataset. Concerning the DS metrics, we used the publicly available code and network weights released by the authors of [START_REF] Ma | Pose guided person image generation[END_REF] in order to generate new images according to the common testing protocol and ran the SSD detector to get the DS values. On the Market-1501 dataset our method reports the highest performance with all but the IS metrics. Specifically, our DS values are much higher than those obtained by [START_REF] Ma | Pose guided person image generation[END_REF]. Conversely, on the DeepFashion dataset, our approach significantly improves the IS value but returns a slightly lower SSIM value. User study In order to further compare our method with the state-ofthe-art approach [START_REF] Ma | Pose guided person image generation[END_REF] we implement a user study following the protocol of Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF]. For each dataset, we show 55 real and 55 generated images in a random order to 30 users for one second. Differently from Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF], who used Amazon Mechanical Turk (AMT), we used "expert" (voluntary) users: PhD students and Post-docs working in Computer Vision and belonging to two different departments. We believe that expert users, who are familiar with GANlike images, can more easily distinguish real from fake images, thus confusing our users is potentially a more difficult task for our GAN. The results 2 in Tab. 2 confirm the significant quality boost of our images with respect to the images produced in [START_REF] Ma | Pose guided person image generation[END_REF]. 
For instance, on the Market-1501 dataset, the G2R human "confusion" is one order of magnitude higher than in [START_REF] Ma | Pose guided person image generation[END_REF]. Finally, in Sec. D we show some example images, directly comparing with [START_REF] Ma | Pose guided person image generation[END_REF]. We also show the results obtained by training different person re-identification systems after augmenting the training set with images generated by our method. These experiments indirectly confirm that the degree of realism and diversity of our images is very significant.

Table 2: User study (%). (*) These results are reported in [START_REF] Ma | Pose guided person image generation[END_REF] and refer to a similar study with AMT users.
Model | Market-1501 R2G | Market-1501 G2R | DeepFashion R2G | DeepFashion G2R
Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] (*) | 11.2 | 5.5 | 9.2 | 14.9
Ours | 22.67 | 50.24 | 12.42 | 24.61
² R2G means #Real images rated as generated / #Real images; G2R means #Generated images rated as real / #Generated images.

Ablation study and qualitative analysis

In this section we present an ablation study to clarify the impact of each part of our proposal on the final performance. We first describe the compared methods, obtained by "amputating" important parts of the full-pipeline presented in Secs. 3-4. The discriminator architecture is the same for all the methods.
• Baseline: We use the standard U-Net architecture [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] without deformable skip connections. The inputs of G and D and the way pose information is represented (see the definition of tensor H in Sec. 3) are the same as in the full-pipeline. However, in G, x_a, H_a and H_b are concatenated at the input layer. Hence, the encoder of G is composed of only one stream, whose architecture is the same as the two streams described in Sec. 5.
• DSC: G is implemented as described in Sec. 3, introducing our Deformable Skip Connections (DSC). In both DSC and Baseline, training is performed using an L_1 loss together with the adversarial loss.
• PercLoss: This is DSC in which the L_1 loss is replaced with the perceptual loss proposed in [START_REF] Johnson | Perceptual losses for real-time style transfer and super-resolution[END_REF]. This loss is computed using the layer conv_2_1 of [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF], chosen to have a receptive field as close as possible to N(p) in Eq. 8, and computing the element-to-element difference in this layer without nearest-neighbour search.
• Full: This is the full-pipeline whose results are reported in Tab. 1, and in which we use the proposed L_NN loss (see Sec. 4).
In Tab. 3 we report a quantitative evaluation on the Market-1501 and the DeepFashion dataset for the four different versions of our approach. In most cases, there is a progressive improvement from Baseline to DSC to Full. Moreover, Full usually obtains better results than PercLoss. These improvements are particularly evident for the DS metric, which we believe is strong evidence that the generated images are realistic. DS values on the DeepFashion dataset are omitted because they are all close to ∼0.96. In Fig. 4 and Fig. 5 we show some qualitative results. These figures show the progressive improvement through the four variants which is quantitatively presented above.
In fact, while pose information is usually well generated by all the methods, the texture generated by Baseline often does not correspond to the texture in x_a or is blurred. In some cases, the improvement of Full with respect to Baseline is quite drastic, such as the drawing on the shirt of the girl in the second row of Fig. 5 or the stripes on the clothes of the persons in the third and fourth rows of Fig. 4. Further examples are shown in the Appendix.

Conclusions

In this paper we presented a GAN-based approach for the generation of person images conditioned on appearance and pose. We introduced two novelties: deformable skip connections and a nearest-neighbour loss. The first is used to solve common problems of U-Net based generators when dealing with deformable objects. The second is used to alleviate a different type of misalignment between the generated image and the ground-truth image. Our experiments, based on both automatic evaluation metrics and human judgments, show that the proposed method is able to outperform previous work on this task. Although the proposed method was tested on the specific task of human generation, only a few human-specific assumptions are used, and we believe that our proposal can easily be extended to address other deformable-object generation tasks.

Appendix

In this Appendix we report some additional implementation details and show other quantitative and qualitative results. Specifically, in Sec. A we explain how Eq. 8 can be efficiently implemented using GPU-based parallel computing, while in Sec. B we show how the human-body symmetry can be exploited in case of missed limb detections. In Sec. C we train state-of-the-art Person Re-IDentification (Re-ID) systems using a combination of real and generated data, which, on the one hand, shows how our images can be effectively used to boost the performance of discriminative methods and, on the other hand, indirectly shows that our generated images are realistic and diverse. In Sec. D we show a direct (qualitative) comparison of our method with the approach presented in [START_REF] Ma | Pose guided person image generation[END_REF] and in Sec. E we show other images generated by our method, including some failure cases. Note that some of the images in the DeepFashion dataset have been manually cropped (after the automatic generation) to improve the overall visualization quality.

A. Nearest-neighbour loss implementation

Our proposed nearest-neighbour loss is based on the definition of L_NN(x, x_b) given in Eq. 8. In that equation, for each point p in x, the "most similar" (in the C_x-based feature space) point q in x_b needs to be searched for in an n × n neighbourhood of p. This operation may be quite time consuming if implemented using sequential computing (i.e., using a "for-loop"). We show here how this computation can be sped up by exploiting GPU-based parallel computing in which different tensors are processed simultaneously. Given C_{x_b}, we compute n² shifted versions of C_{x_b}: {C_{x_b}^(i,j)}, where (i, j) is a translation offset ranging over a relative n × n neighbourhood (i, j ∈ {-(n-1)/2, ..., +(n-1)/2}) and C_{x_b}^(i,j) is filled with the value +∞ at the borders. Using these translated versions of C_{x_b}, we compute n² corresponding difference tensors {D^(i,j)}, where:

D^(i,j) = |C_x - C_{x_b}^(i,j)|,    (11)

and the difference is computed element-wise. D^(i,j)(p) contains the channel-by-channel absolute difference between C_x(p) and C_{x_b}(p + (i, j)).
Then, for each D^(i,j), we sum all the channel-based differences, obtaining:

S^(i,j) = Σ_c D^(i,j)(c),    (12)

where c ranges over all the channels and the sum is performed pointwise. S^(i,j) is a matrix of scalar values, each value representing the L_1 norm of the difference between a point p in C_x and the corresponding point p + (i, j) in C_{x_b}:

S^(i,j)(p) = ‖C_x(p) - C_{x_b}(p + (i, j))‖_1.    (13)

For each point p, we can now compute its best match in a local neighbourhood of C_{x_b} simply using:

M(p) = min_{(i,j)} S^(i,j)(p).    (14)

Finally, Eq. 8 becomes:

L_NN(x, x_b) = Σ_p M(p).    (15)

Since we do not normalize Eq. 12 by the number of channels nor Eq. 15 by the number of pixels, the final value L_NN(x, x_b) is usually very high. For this reason we use a small value λ = 0.01 in Eq. 10 when weighting L_NN with respect to L_cGAN.

B. Exploiting the human-body symmetry

As mentioned in Sec. 3.1, we decompose the human body into 10 rigid sub-parts: the head, the torso and 8 limbs (left/right upper/lower arm, etc.). When one of the joints corresponding to one of these body parts has not been detected by the HPE, the corresponding region and affine transformation are not computed and the region mask is filled with 0. This can happen either because that region is not visible in the input image or because of false detections of the HPE. However, when the missing region involves a limb (e.g., the right upper arm) whose symmetric body part has been detected (e.g., the left upper arm), we can "copy" information from the "twin" part. In more detail, suppose for instance that the region corresponding to the right upper arm in the conditioning image is R_rua^a and that this region is empty for one of the above reasons. Moreover, suppose that R_rua^b is the corresponding (non-empty) region in x_b and that R_lua^a is the (non-empty) left upper arm region in x_a. We simply set R_rua^a := R_lua^a and we compute f_rua as usual, using the (no longer empty) region R_rua^a together with R_rua^b.

C. Improving person Re-ID via data augmentation

The goal of this section is to show that the synthetic images generated with our proposed approach can be used to train discriminative methods. Specifically, we use Re-ID approaches whose task is to recognize a person across different poses and viewpoints. The typical application of a Re-ID system is a video-surveillance scenario in which images of the same person, grabbed by cameras mounted at different locations, need to be matched to each other. Due to the low resolution of the cameras, person re-identification is usually based on the colours and the texture of the clothes [START_REF] Zheng | Person reidentification: Past, present and future[END_REF]. This makes our method particularly suited to automatically populating a Re-ID training dataset by generating images of a given person with identical clothes but in different viewpoints/poses.
Note that: (1) each generated image is labeled with the identity of the conditioning image; (2) the target pose can be extracted from an individual different from the person depicted in the conditioning image (this is different from the other experiments shown here and in the main paper). Adding the generated images to T we obtain an augmented training set A. In Tab. 4 we report the results obtained using either T (standard procedure) or A for training different Re-ID systems. The strong performance boost, orthogonal to the different Re-ID methods, shows that our generative approach can be effectively used for synthesizing training samples. It also indirectly shows that the generated images are sufficiently realistic and different from the real images contained in T. D. Comparison with previous work In this section we directly compare our method with the results generated by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF]. The comparison is based on the pairs conditioning image-target pose used in [START_REF] Ma | Pose guided person image generation[END_REF], for which we show both the results obtained by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] and ours. Figs. 6-7 show the results on the Market-1501 dataset. Comparing the images generated by our full-pipeline with the corresponding images generated by the full-pipeline presented in [START_REF] Ma | Pose guided person image generation[END_REF], in most cases our results are more realistic and sharper, with local details (e.g., the clothes texture or the face characteristics) closer to those of the conditioning image. For instance, in the first and the last row of Fig. 6 and in the last row of Fig. 7, our results show human-like images, while the method proposed in [START_REF] Ma | Pose guided person image generation[END_REF] produced images which can hardly be recognized as humans. Figs. 8-9 show the results on the DeepFashion dataset. Also in this case, comparing our results with [START_REF] Ma | Pose guided person image generation[END_REF], in most cases ours look more realistic or closer to the details of the conditioning image. For instance, the second row of Fig. 8 shows a male face, while the approach proposed in [START_REF] Ma | Pose guided person image generation[END_REF] produced a female face (note that the DeepFashion dataset is strongly biased toward female subjects [START_REF] Ma | Pose guided person image generation[END_REF]). In most cases, the clothes texture in our results is closer to that depicted in the conditioning image (e.g., see rows 1, 3, 4, 5 and 6 in Fig. 8 and rows 1 and 6 in Fig. 9). In row 5 of Fig. 9 the method proposed in [START_REF] Ma | Pose guided person image generation[END_REF] produced an image with a pose closer to the target; however, it wrongly generated pants, while our approach correctly generated the appearance of the legs according to the appearance contained in the conditioning image. We believe that this qualitative comparison, based on the pairs selected in [START_REF] Ma | Pose guided person image generation[END_REF], shows that the combination of the proposed deformable skip connections and the nearest-neighbour loss produces the desired effect: "capturing" and transferring the correct local details from the conditioning image to the generated image.
Transferring local information while simultaneously taking into account the global pose deformation is a difficult task, and it is hard to accomplish with "standard" U-Net based generators such as those adopted in [START_REF] Ma | Pose guided person image generation[END_REF]. E. Other qualitative results In this section we present other qualitative results. Fig. 10 and Fig. 11 show some images generated using the Market-1501 dataset and the DeepFashion dataset, respectively. The terminology is the same as in Sec. 6.2. Note that, for the sake of clarity, we used a skeleton-based visualization of P(•) but, as explained in the main paper, only the point-wise joint locations are used in our method to represent pose information (i.e., no joint-connectivity information is used). Similarly to the results shown in Sec. 6.2, these images show that, although the pose-related global structure is generated sufficiently well by all versions of our method, in most cases there is a gradual quality improvement in the detail synthesis from Baseline to DSC to PercLoss to Full. Finally, Fig. 12 and Fig. 13 show some failure cases (badly generated images) of our method on the Market-1501 dataset and the DeepFashion dataset, respectively. Some common failure causes are: • Errors of the HPE [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF]. For instance, see rows 2, 3 and 4 of Fig. 12 or the wrong right-arm localization in row 2 of Fig. 13. • Ambiguity of the pose representation. For instance, in row 3 of Fig. 13, the left elbow has been detected in x_b although it is actually hidden behind the body. Since P(x_b) contains only 2D information (no depth or occlusion-related information), there is no way for the system to understand whether the elbow is behind or in front of the body. In this case our model chose to generate the arm as if it were in front of the body (which corresponds to the most frequent situation in the training dataset). • Rare poses. For instance, row 1 of Fig. 13 shows a girl in an unusual rear view with a sharp 90-degree profile face (x_b). The generator by mistake synthesized a neck where it should have "drawn" a shoulder. Note that rare poses are a difficult issue also for the method proposed in [START_REF] Ma | Pose guided person image generation[END_REF]. • Rare object appearance. For instance, the backpack in row 1 of Fig. 12 is light green, while most of the backpacks contained in the training images of the Market-1501 dataset are dark. Comparing this image with the one generated in the last row of Fig. 10 (where the backpack is black), we see that in Fig. 10 the colour of the shirt of the generated image is not blended with the backpack colour, while in Fig. 12 it is. We presume that the generator "understands" that a dark backpack is an object whose texture should not be transferred to the clothes of the generated image, while it is not able to generalize this knowledge to other backpacks. • Warping problems.
This is an issue related to our specific approach (the deformable skip connections). The texture on the shirt of the conditioning image in row 2 of Fig. 13 is warped in the generated image. We presume this is due to the fact that the affine transformations need to warp the texture details of the narrow surface of the profile shirt (conditioning image) quite heavily in order to fit the much wider area of the target frontal pose.
Figure 6: A qualitative comparison on the Market-1501 dataset between our approach and the results obtained by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF]. Columns 1 and 2 show the conditioning and the target image, respectively, which are used as reference by both models. Columns 3 and 4 respectively show the images generated by our full-pipeline and by the full-pipeline presented in [START_REF] Ma | Pose guided person image generation[END_REF].
Figure 7: More qualitative comparisons on the Market-1501 dataset between our approach and the results obtained by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF].
Figure 1: (a) A typical "rigid" scene generation task, where the conditioning and the output image local structure is well aligned. (b) In a deformable-object generation task, the input and output are not spatially aligned.
Figure 4: Qualitative results on the Market-1501 dataset. Columns 1, 2 and 3 represent the input of our model. We plot P(•) as a skeleton for the sake of clarity, but actually no joint-connectivity relation is exploited in our approach. Column 4 corresponds to the ground truth. The last four columns show the output of our approach with respect to different baselines.
Figure 5: Qualitative results on the DeepFashion dataset with respect to different baselines. Some images have been cropped for visualization purposes.
Figure 8: A qualitative comparison on the DeepFashion dataset between our approach and the results obtained by Ma et al. [12].
Figure 9: More qualitative comparisons on the DeepFashion dataset between our approach and the results obtained by Ma et al. [12].
Figure 10: Other qualitative results on the Market-1501 dataset.
Figure 11: Other qualitative results on the DeepFashion dataset.
Figure 12: Examples of badly generated images on the Market-1501 dataset. See the text for more details.
Figure 13: Examples of badly generated images on the DeepFashion dataset.
Table 1: Comparison with the state of the art. (*) These values have been computed using the code and the network weights released by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] in order to generate new images.
Model | Market-1501: SSIM, IS, mask-SSIM, mask-IS, DS | DeepFashion: SSIM, IS, DS
Ma et al. [12] | 0.253, 3.460, 0.792, 3.435, 0.39* | 0.762, 3.090, 0.95*
Ours | 0.290, 3.185, 0.805, 3.502, 0.72 | 0.756, 3.439, 0.96
Real-Data | 1.00, 3.86, 1.00, 3.36, 0.74 | 1.000, 3.898, 0.98
Table 3: Quantitative ablation study on the Market-1501 and the DeepFashion dataset.
Model | Market-1501: SSIM, IS, mask-SSIM, mask-IS, DS | DeepFashion: SSIM, IS
Baseline | 0.256, 3.188, 0.784, 3.580, 0.595 | 0.754, 3.351
DSC | 0.272, 3.442, 0.796, 3.666, 0.629 | 0.754, 3.352
PercLoss | 0.276, 3.342, 0.788, 3.519, 0.603 | 0.744, 3.271
Full | 0.290, 3.185, 0.805, 3.502, 0.720 | 0.756, 3.439
Real-Data | 1.00, 3.86, 1.00, 3.36, 0.74 | 1.000, 3.898
Table 4: Accuracy of Re-ID methods on the Market-1501 test set (%).
Model | Standard training set (T): Rank 1, mAP | Augmented training set (A): Rank 1, mAP
IDE + Euclidean [START_REF] Zheng | Person reidentification: Past, present and future[END_REF] | 73.9, 48.8 | 78.5, 55.9
IDE + XQDA [START_REF] Zheng | Person reidentification: Past, present and future[END_REF] | 73.2, 50.9 | 77.8, 57.9
IDE + KISSME [START_REF] Zheng | Person reidentification: Past, present and future[END_REF] | 75.1, 51.5 | 79.5, 58.1
Discriminative Embedding [START_REF] Zheng | A discriminatively learned CNN embedding for person reidentification[END_REF] | 78.3, 55.5 | 80.6, 61.3
Acknowledgements We want to thank the NVIDIA Corporation for the donation of the GPUs used in this project.
55,892
[ "1058528" ]
[ "423141", "423141", "1042500", "423141" ]
01440167
en
[ "stat", "math" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01440167v2/file/EJP1803-036RA0%20%281%29.pdf
Patrice Bertail, Gabriela Ciolek (email: [email protected]) New Bernstein and Hoeffding type inequalities for regenerative Markov chains 2018 Hal-01440167v2 Patrice Bertail*, Gabriela Ciolek** Introduction Exponential inequalities are a powerful tool to control the tail probability that a random variable X exceeds some prescribed value t. They have been extensively investigated by many researchers due to the fact that they are a crucial step in deriving many results in numerous fields such as statistics, learning theory, discrete mathematics, statistical mechanics, information theory or convex geometry. There is a vast literature that provides a comprehensive overview of the theory of exponential inequalities in the i.i.d. setting; the interested reader is referred to [START_REF] Bai | Probability Inequalities[END_REF], [START_REF] Boucheron | Inequalities. A Nonasymptotic Theory of Independence[END_REF] or van der Vaart and Wellner (1996). The wealth of possible applications of exponential inequalities has naturally led to the development of this theory in the dependent setting. In this paper we are particularly interested in results that establish exponential bounds for the tail probabilities of additive functionals of the form f(X_1) + ··· + f(X_n), where (X_n)_{n∈N} is a regenerative Markov chain. It is noteworthy that when deriving exponential inequalities for Markov chains (or any other process with some dependence structure) one cannot expect to recover fully the classical results from the i.i.d. case. The goal is then to get counterparts of the inequalities for i.i.d. random variables, with some extra terms that appear in the bound as a consequence of the Markovian structure of the considered process. In recent years such (non-)asymptotic results have been obtained for Markov chains via many approaches: martingale arguments (see [START_REF] Glynn | Hoeffding?s Inequality for Uniformly Ergodic Markov Chains[END_REF], where Hoeffding's inequality for uniformly ergodic Markov chains has been presented) and coupling techniques (see [START_REF] Chazottes | Concentration inequalities for Markov processes via coupling[END_REF] and [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF]). In fact, [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] have proved that Hoeffding's inequality holds when the Markov chain is geometrically ergodic and thus weakened the assumptions imposed on the Markov chain in [START_REF] Glynn | Hoeffding?s Inequality for Uniformly Ergodic Markov Chains[END_REF]. Winterberger (2016) has generalized the result of [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] by showing that Hoeffding's inequality is valid also for unbounded functions of geometrically ergodic Markov chains, provided that the sum is correctly self-normalized. [START_REF] Paulin | Concentration inequalities for Markov chains by Marton couplings and spectral methods[END_REF] has presented the McDiarmid inequality for Markov chains using Marton couplings and spectral methods.
[START_REF] Clémençon | Moment and probability inequalities for sums of bounded additive functionals of regular Markov chains via the Nummelin splitting technique[END_REF], [START_REF] Adamczak | A tail inequality for suprema of unbounded empirical processes with applications to Markov chains[END_REF], Bertail and Clémençon (2009), and [START_REF] Adamczak | Exponential concentration inequalities for additive functionals of Markov chains[END_REF] have obtained exponential inequalities for ergodic Markov chains via regeneration techniques (see [START_REF] Smith | Regenerative stochastic processes[END_REF]). Regeneration techniques for Markov chains are particularly appealing to us mainly due to the fact that it requires much fewer restrictions on the ergodicity properties of the chain in comparison to alternative methods. In this paper we establish Hoeffding and Bernstein type of inequalities for statistics of the form 1 n ∑ n i=1 f (X i ) , where (X n ) n∈N is a regenerative Markov chain. We show that under proper control of the size of class of functions F (measured by its uniform entropy number), one can get non-asymptotic bounds on the suprema over the class of F of such empirical process for regenerative Markov chains. It is noteworthy that it is easy to generalize such results from regenerative case to the Harris recurrent one, using Nummelin extension of the initial chain (see [START_REF] Nummelin | A splitting technique for Harris recurrent chains[END_REF]). The paper is organized as follows. In chapter 2 we introduce the notation and preliminary assumptions for Markov chains. We also recall some classical results from the i.i.d. setting which we generalize to the Markovian case. In chapter 3 we present the main results -Bernstein and Hoeffding type inequalities for regenerative Markov chains. The main ingredient to provide a crude exponential bound (with bad constants) is based on Montgomery-Smith which allows to reduce the problem on a random number of blocks to a fixed number of independent blocks. We then proposed a refined inequality by first controlling the the number of blocks in the inequality and then applying again Montgomery-Smith inequality on a remainder term. Next, we generalize these results and obtain Hoeffding and Bernstein type of bounds for suprema of empirical processes over a class of functions F. We also present the inequalities when the chain is Harris recurrent. Some technical parts of the proofs are postponed to the Appendix. Preliminaries We begin by introducing some notation and recall the key concepts of the Markov chains theory (see [START_REF] Meyn | Markov chains and stochastic stability[END_REF] for a detailed review and references). Let X = (X n ) n∈N be a positive recurrent, ψ-irreducible Markov chain on a countably generated state space (E, E) with transition probability Π and initial probability ν. We assume further, that X is regenerative (see [START_REF] Smith | Regenerative stochastic processes[END_REF]), i.e. there exists a measurable set A, called an atom, such that ψ(A) > 0 and for all (x, y) ∈ A 2 we have Π(x, •) = Π(y, •). We define the sequence of regeneration times (τ A (j)) j≥1 which is the sequence of successive points of time when the chain visits A and forgets its past. Throughout the paper we write τ A = τ A [START_REF] Adamczak | A tail inequality for suprema of unbounded empirical processes with applications to Markov chains[END_REF]. 
It is well-known that we can cut the sample path of the process into data segments of the form B j = (X 1+τ A (j) , • • • , X τ A (j+1) ), j ≥ 1 according to consecutive visits of the chain to the regeneration set A. By the strong Markov property the blocks are i.i.d. random variables taking values in the torus ∪ ∞ k=1 E k . In the following, we assume that the mean inter-renewal time α = E A [τ A ] < ∞ and point out that in this case, the stationary distribution is a Pitman occupation measure given by ∀B ∈ E, µ(B) = 1 E A [τ A ] E A [ τ A ∑ i=1 I {X i ∈B} ] , where I B is the indicator function of the event B. Assume that we observe (X 1 , • • • , X n ). We introduce few more pieces of notation: throughout the paper we write l n = ∑ n i=1 I{X i ∈ A} for the total number of consecutive visits of the chain to the atom A, thus we observe l n + 1 data blocks. We make the convention that B (n) ln = ∅ when τ A (l n ) = n. Furthermore, we denote by l(B j ) = τ A (j + 1) -τ A (j), j ≥ 1, the length of regeneration blocks. Let f : E → R be µ-integrable function. In the following, we assume without loss of generality that µ(f ) = E µ [f (X 1 )] = 0. We introduce the following notation for partial sums of the regeneration cycles f (B i ) = ∑ τ A (j+1) i=1+τ A (j) f (X i ). Then, the regenerative approach is based on the following decomposition of the sum ∑ n i=1 f (X i ) : n ∑ i=1 f (X i ) = ln ∑ i=1 f (B i ) + ∆ n , where ∆ n = τ A ∑ i=1 f (X i ) + n ∑ i=τ A (ln-1) f (X i ). We denote by σ 2 (f ) = 1 E A (τ A ) E A ( τ A ∑ i=1 {f (X i ) -µ(f )} ) 2 the asymptotic variance. For the completeness of the exposition, we recall now well-known classical results concerning some exponential inequalities for independent random variables. Firstly, we present the inequality for the i.i.d. bounded random variables due to [START_REF] Hoeffding | Probability inequalities for sums of bounded random variables[END_REF]. a i ≤ X i ≤ b i (i = 1, • • • , n), then for t > 0 P ( 1 n n ∑ i=1 X i -EX 1 ≥ t ) ≤ exp ( - 2t 2 ∑ n i=1 (b i -a i ) 2 ) . Below we recall the generalization of Hoeffding's inequality to unbounded functions. Interested reader, can find different variations of the following inequality (depending on imposed conditions on the random variables) in [START_REF] Boucheron | Inequalities. A Nonasymptotic Theory of Independence[END_REF]. Theorem 2.2 (Bernstein's inequality) Let X 1 , • • • , X n be independent random variables with expectation EX l for X l , l ≥ 1 respectively, such that, for all integers p ≥ 2, E|X l | p ≤ p!R p-2 σ 2 l /2 for all l ∈ {1, • • • , n}. Then, for all t > 0, P ( n ∑ i=1 (X l -EX l ) ≥ t ) ≤ 2 exp ( - t 2 2(σ 2 + Rt) ) , where σ 2 = ∑ n i=1 σ 2 l . The purpose of this paper is to derive similar bounds for Markov chains using the nice regenerative structure of Markov chains. Exponential inequalities for the tail probability for suprema of empirical processes for Markov chains In the following, we denote f (x) = f (x) -µ(f ). Moreover, we write respec- tively f (B 1 ) = ∑ τ A i=1 f (X i ) and | f |(B 1 ) = ∑ τ A i=1 | f |(X i ). We will work under following conditions. A1. (Bernstein's block moment condition) There exists a positive constant M 1 such that for any p ≥ 2 and for every f ∈ F E A f (B 1 ) p ≤ 1 2 p!σ 2 (f )M p-2 1 . ( 1 ) A2. (Non-regenerative block exponential moment assumption) There exists λ 0 > 0 such that for every f ∈ F we have E ν [ exp [ λ 0 ∑ τ A i=1 f (X i ) ]] < ∞. A3. 
(Exponential block moment assumption) There exists λ_1 > 0 such that for every f ∈ F we have E_A[ exp( λ_1 f̄(B_1) ) ] < ∞. Remark 3.1 It is noteworthy that assumption A1 implies the existence of an exponential moment of f̄(B_1): E_A exp(λ f̄(B_1)) ≤ exp( (λ²/2) / (1 − M_1|λ|) ) for all λ < 1/M_1. In this section, we formulate two Bernstein type inequalities for Markov chains: one is established via a simple use of the Montgomery-Smith inequality (see Montgomery-Smith (1993) and de la Peña and Giné (1999)), which results in much larger constants (compared to the i.i.d. setting) in the dominating parts of the bound. The second Bernstein bound contains small constants in the main parts of the bound, however at the cost of an extra term in the bound. Before we state the theorems, we give a short discussion of already existing results on exponential inequalities for Markov chains. Remarks 3.1 Since there are plenty of results concerning exponential inequalities for Markov chains under many assumptions, it may be difficult to compare their strength (measured by the assumptions imposed on the chain) and applicability. Thus, before we present the proofs of Theorem 3.2 and Theorem 3.3, we make a short comparison of our results with already existing inequalities for Markov chains. We also strongly recommend the exhaustive overview of recent results of this type in [START_REF] Adamczak | Exponential concentration inequalities for additive functionals of Markov chains[END_REF]. The bounds obtained in this paper are related to the sharp Fuk-Nagaev type inequality obtained in Bertail and Clémençon (2010). It is also based on the regeneration properties and decomposition of the chain. However, our techniques of proof differ and allow us to obtain a better rate in the main subgaussian part of the inequality under the hypotheses. The proofs of the inequalities are simplified and do not require the partitioning arguments which were used in [START_REF] Bertail | Sharp bounds for the tail of functionals of Markov chains[END_REF]. It is noteworthy that we do not impose a stationarity condition on the considered Markov chain as in [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] and [START_REF] Chazottes | Concentration inequalities for Markov processes via coupling[END_REF], or any restrictions on the starting point of the chain as in [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF]. Moreover, [START_REF] Adamczak | Exponential concentration inequalities for additive functionals of Markov chains[END_REF] uses the assumption of strong aperiodicity for Harris Markov chains; we remark that this condition can be relaxed, and we only assume that the Harris Markov chain is aperiodic (see Remark 3.9). Many results concerning exponential inequalities for Markov chains are established for bounded functions f (see for instance [START_REF] Adamczak | A tail inequality for suprema of unbounded empirical processes with applications to Markov chains[END_REF], [START_REF] Clémençon | Moment and probability inequalities for sums of bounded additive functionals of regular Markov chains via the Nummelin splitting technique[END_REF], [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF]). Our inequalities work for unbounded functions satisfying Bernstein's block moment condition. Moreover, all terms involved in our inequalities are given by explicit formulas, so the results can be directly used in practical considerations.
Note also that all the constants are given in a simple, easy-to-interpret form and they do not depend on other underlying parameters. Winterberger (2016) has established exponential inequalities in the unbounded case, extending the result of [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] to the case when the chain can start from any x ∈ E. However, the constant involved in the bound of Theorem 2.1 therein (obtained for bounded and unbounded functions) is very large (see also [START_REF] Boucheron | Inequalities. A Nonasymptotic Theory of Independence[END_REF]). As mentioned in the paper of Adamczak, there are many exponential inequalities for Markov chains satisfying spectral gap conditions (see for instance Gao and Guillin, [START_REF] Lezaud | Chernoff and Berry-Esseen inequalities for Markov processes[END_REF]). Spectral gap inequalities allow one to recover the Bernstein type inequality at its full strength. We need to mention that the geometric ergodicity assumption does not ensure, in the non-reversible case, that the considered Markov chains admit a spectral gap (see Theorem 1.4 in [START_REF] Kontoyiannis | Geometric ergodicity and the spectral gap of non-reversible Markov chains[END_REF]). We formulate a Bernstein type inequality for Markov chains below. Theorem 3.2 Assume that X = (X_n)_{n∈N} is a regenerative positive recurrent Markov chain. Then, under assumptions A1-A3, we have P_ν[ Σ_{i=1}^{n} (f(X_i) − µ(f)) ≥ x ] ≤ 18 exp[ −x² / (2 × 90² (nσ²(f) + M_1 x/90)) ] + C_1 exp[ −λ_0 x/3 ] + C_2 exp[ −λ_1 x/3 ], where C_1 = E_ν[ exp( λ_0 Σ_{i=1}^{τ_A} f̄(X_i) ) ] and C_2 = E_A[ exp( λ_1 f̄(B_1) ) ]. Remark 3.2 Observe that we do not impose a moment condition E_A[τ_A]^p < ∞ for p ≥ 2. At first glance, this might be surprising, since one usually assumes the existence of E_A[τ_A]² < ∞ when proving the central limit theorem for regenerative Markov chains. A simple analysis of the proof of the central limit theorem in the Markovian case (see for instance [START_REF] Meyn | Markov chains and stochastic stability[END_REF]) reveals that it is sufficient to require only E_A[τ_A] < ∞ when we consider the centered function f̄ instead of f. Proof. Firstly, we consider the sum of random variables of the following form: Z_n(f̄) = Σ_{i=1}^{l_n} f̄(B_i). (2) Furthermore, we have that S_n(f̄) = Z_n(f̄) + ∆_n(f̄). We recall that l_n is random and correlated with the blocks themselves. In order to apply Bernstein's inequality for i.i.d. random variables we apply the Montgomery-Smith inequality (see [START_REF] Montgomery-Smith | Comparison of sums of independent identically distributed random vectors[END_REF]). It follows easily that P_A[ Σ_{i=1}^{l_n} f̄(B_i) ≥ x/3 ] ≤ P_A[ max_{1≤k≤n} Σ_{i=1}^{k} f̄(B_i) ≥ x/3 ] ≤ 9 P_A[ Σ_{i=1}^{n} f̄(B_i) ≥ x/90 ] (3) and under Bernstein's condition A1 we obtain P_A[ Σ_{i=1}^{n} f̄(B_i) ≥ x/90 ] ≤ 2 exp[ −x² / (2 × 90² (M_1 x/90 + nσ²(f))) ]. (4) Next, we want to control the remainder term ∆_n: ∆_n = Σ_{i=1}^{τ_A} f̄(X_i) + Σ_{i=τ_A(l_n-1)}^{n} f̄(X_i). (5) The control of ∆_n is guaranteed by Markov's inequality, i.e. P_ν[ Σ_{i=1}^{τ_A} f̄(X_i) ≥ x/3 ] ≤ E_ν[ exp( λ_0 Σ_{i=1}^{τ_A} f̄(X_i) ) ] exp[ −λ_0 x/3 ]. We deal similarly with the last term of ∆_n.
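To make the regenerative decomposition used above concrete, the following toy sketch (not from the paper) cuts an observed trajectory at its visits to the atom A and returns the block sums f̄(B_i) of Eq. (2) together with the remainder ∆_n of Eq. (5); the function names and indexing conventions are illustrative assumptions, and f_bar is expected to be a vectorised numpy function.

```python
import numpy as np

def regeneration_blocks(X, in_atom, f_bar):
    """Toy illustration: X is an observed trajectory X_1,...,X_n, in_atom(x)
    indicates the event {x in A}, and f_bar is the centered function.
    Returns the complete-block sums and the remainder Delta_n."""
    X = np.asarray(X)
    visits = np.flatnonzero([in_atom(x) for x in X])   # regeneration times tau_A(j)
    l_n = len(visits)
    if l_n < 2:
        # no complete block observed: everything goes into the remainder
        return np.array([]), f_bar(X).sum()
    # B_j = (X_{1+tau_A(j)}, ..., X_{tau_A(j+1)}), j = 1, ..., l_n - 1
    blocks = [X[visits[j] + 1: visits[j + 1] + 1] for j in range(l_n - 1)]
    block_sums = np.array([f_bar(b).sum() for b in blocks])   # i.i.d. by regeneration
    # remainder: the first segment up to tau_A plus the last incomplete segment
    delta_n = f_bar(X[:visits[0] + 1]).sum() + f_bar(X[visits[-1] + 1:]).sum()
    return block_sums, delta_n
```

Summing block_sums and delta_n recovers Σ f̄(X_i), which is exactly the decomposition S_n(f̄) = Z_n(f̄) + ∆_n(f̄) exploited in the proof.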
We complement the data 1 + τ A (l n ) + 1 by observations up to the next regeneration time 1 + τ A (l n + 1) and obtain P ν   n ∑ i=1+τ A (ln)+1 f (X i ) ≥ x 3   ≤ P ν   n ∑ i=1+τ A (ln)+1 f (X i ) ≥ x 3   ≤ P ν   1+τ A (ln+1) ∑ i=1+τ A (ln)+1 f (X i ) ≥ x 3   ≤ E A [ exp[λ 1 f (B 1 )] ] exp [ - λ 1 x 3 ] . We note that although the Montgomery-Smith inequality allows to obtain easily Bernstein's bound for Markov chains, the constants are rather large. Interestingly, under an additional assumption on E A [τ A ] p we can obtain the Bernstein type inequality for regenerative Markov chains with much smaller constants for the dominating counterparts of the bound. A4. (Block length moment assumption) There exists a positive constant M 2 such that for any p ≥ 2 E A [τ A ] p ≤ p!M p-2 2 E A [τ 2 A ] and E ν [τ A ] p ≤ p!M p-2 2 E ν [τ 2 A ]. Before we formulate Bernstein's inequality for regenerative Markov chains we introduce a lemma which provides a bound for tail probability of √ n ( ln n -1 α ) which will be cruciall for the proof of Bernstein's bound but also may be of independent interest. Lemma 3.1 Suppose that condition A4 holds. Then P ν ( n 1/2 ( l n n - 1 α ) ≥ x ) is bounded by exp ( - 1 2 (αx √ n -2α) 2 ( E ν τ 2 A + ( n α + x √ n)E A τ 2 A ) + (αx √ n + E ν τ A )M 2 ( E ν τ 2 A + ( n α + x √ n)E A τ 2 A ) 1/2 ) . Proof of Lemma 3.1 is postponed to the Appendix. Remark 3.3 Note that when n → ∞, the dominating part in the exponential term is of order 1 2 α 2 x 2 E A τ 2 A /α + α 1/2 xM 2 (E A τ 2 A ) 1/2 + O(n -1/2 ) = 1 2 α 2 x 2 E A τ 2 A /α(1 + αxM 2 (E A τ 2 A /α) -1/2 ) + O(n -1/2 ) = 1 2 (αx) 2 / (E A τ 2 A /α) (1 + αxM 2 (E A τ 2 A /α) -1/2 ) + O(n -1/2 ), thus we have a Gaussian tail with the right variance for moderate x and an exponential tail for large x and, in consequence, the constants are asymptotically optimal. Now we are ready to state an alternative Bernstein type inequality for regenerative Markov chains, where under additional condition on the length of the blocks we can obtain much better inequality in terms of constants. Theorem 3.3 Assume that X = (X n ) n∈N is a regenerative positive recurrent Markov chain. Then, under assumptions A1-A4 we have for any a > 0, for x > 0 and N > 0 P ν [ n ∑ i=1 f (X i ) -µ(f ) ≥ x ] ≤ 2 exp [ -x 2 2 × 3 2 (1 + a) 2 (⌊ n α ⌋ σ 2 (f ) + M 1 3 x 1+a ) ] + 18 exp [ -x 2 2 × 90 2 (1 + a) 2 ( N √ nσ 2 (f ) + M 1 90 x 1+a ) ] + P ν ( n 1/2 [ l n n - 1 α ] > N ) + C 1 exp [ - λ 0 x 3 ] + C 2 exp [ - λ 1 x 3 ] , ( 6 ) where C 1 = E ν [ exp λ 0 τ A ∑ i=1 f (X i ) ] , C 2 = E A [ exp[λ 1 f (B 1 )] ] . Remark 3.4 If we choose N = log(n), then by Lemma 3.1 we can see that P ν ( n 1/2 [ ln n -1 α ] ≥ log(n) ) = o ( 1 n ) and in that case the second term in [START_REF] Chazottes | Concentration inequalities for Markov processes via coupling[END_REF] remains small uniformly in x. Proof. We start by the obvious observation that P ν [ n ∑ i=1 f (X i ) -µ(f ) ≥ x ] ≤ P A [ ln ∑ i=1 f (B i ) ≥ x/3 ] + P ν [ τ A ∑ i=1 f (X i ) ≥ x/3 ] + P A   n ∑ i=τ A (ln-1) f (X i ) ≥ x/3   . ( 7 ) Remark 3.5 Instead of dividing x by 3 in [START_REF] Clémençon | Moment and probability inequalities for sums of bounded additive functionals of regular Markov chains via the Nummelin splitting technique[END_REF], one can use a different splitting to improve a little bit the final constants. The bounds for the first and last non-regenerative blocks can be handled the same way as in Theorem 3.2. 
Next, we observe that, for any a > 0, we have P A [ ln ∑ i=1 f (B i ) ≥ x/3 ] ≤ P A    ⌊ n α ⌋ ∑ i=1 f (B i ) ≥ x 3(1 + a)    + P A   ln 2 ∑ ln 1 f (B i ) ≥ x 3(1 + a)   , ( 8 ) where l n 1 = min( ⌊ n α ⌋ , l n ) and l n 2 = max( ⌊ n α ⌋ , l n ). We observe that ∑ ⌊ n α ⌋ i=1 f (B i ) is a sum of independent, identically distributed and sub-exponential random variables. Thus, we can directly apply Bernstein's bound and obtain P A    ⌊ n α ⌋ ∑ i=1 f (B i ) ≥ x 3(1 + a)    ≤ 2 exp [ -x 2 2 × 3 2 (1 + a) 2 (⌊ n α ⌋ σ 2 (f ) + M 1 x/3(1 + a) ) ] . ( 9 ) The control of ∑ ln 2 ln 1 f (B i ) is slightly more challenging due to the fact that l n is random and correlated with the blocks itself. In the following, we will make use of the Montgomery-Smith inequality. Notice however, that since we expect the number of terms in this sum to be at most of the order √ n, this term will be much more smaller than the leading term (9) and will be asymptotically negligible. We have P A   ln 2 ∑ ln 1 f (B i ) ≥ x 3(1 + a)   ≤ P A   ln 2 ∑ ln 1 f (B i ) ≥ x 3(1 + a) , √ n [ l n n - 1 α ] ≤ N   + P ν ( √ n [ l n n - 1 α ] > N ) = A + B. ( 10 ) Firstly, we will bound term A in (10) using Montgomery-Smith inequality and the fact that if √ n [ ln n -1 α ] ≤ N, then l n 1 -l n 2 ≤ √ nN. P A   ln 2 ∑ ln 1 f (B i ) ≥ x 3(1 + a) , √ n [ l n n - 1 α ] ≤ N   ≤ P A ( max 1≤k≤N √ n k ∑ i=1 f (B i ) ≥ x 3(1 + a) ) ≤ 9P A   N √ n ∑ i=1 f (B i ) ≥ x 90(1 + a)   ≤ 18 exp [ -x 2 2 × 90 2 (1 + a) 2 ( N √ nσ 2 (f ) + M 1 90 x 1+a ) ] . Lemma 3.1 allows to control term B. Maximal inequalities under uniform entropy In empirical processes theory for processes indexed by class of functions, it is important to assess the complexity of considered classes. The information about entropy of F helps us to inspect how large our class is. Generally, control of entropy of certain classes may be crucial step when investigating asymptotic behaviour of empirical processes indexed by a class of functions. In our setting, we will measure the size of class of functions F via covering numbers and uniform entropy number. The following definition is due to [START_REF] Van Der Vaart | Weak Convergence and Empirical Processes With Applications to Statistics[END_REF]. Definition 3.4 (Covering and uniform entropy number) The covering number N p (ϵ, Q, F) is the minimal number of balls {g : ∥g -f ∥ L p (Q) < ϵ} of radius ϵ needed to cover the set F. The entropy (without bracketing) is the logarithm of the covering number. We define uniform entropy number as N p (ϵ, F) = sup Q N p (ϵ, Q, F) , where the supremum is taken over all discrete probability measures Q. In the following we state assumptions on the size of considered class of functions F. Rather than considering the assumptions A2 and A3, we impose the assumptions on the first and the last non-regenerative blocks for the envelope F of F. A2 ′ . (Non-regenerative block exponential moment assumption) There exists λ 0 > 0 such that E ν [ exp [ 2λ 0 ∑ τ A i=1 F (X i ) ]] < ∞. A3 ′ . (Exponential block moment assumption) There exists λ 1 > 0 such that E A [ exp [ 2λ 1 F (B 1 ) ]] < ∞. A5. (Uniform entropy number condition) N 2 (ϵ, F) < ∞. Before we formulate Bernstein deviation type inequality for unbounded classes of functions, we introduce one more piece of notation, let σ 2 m = max f ∈F σ 2 (f ) > η > 0. Theorem 3.5 Assume that X = (X n ) n∈N is a regenerative positive recurrent Markov chain. 
Then, under assumptions A1, A2 ′ , A3 ′ and A5 and for any 0 < ϵ < x and for n large enough we have P ν [ sup f ∈F n ∑ i=1 f (X i ) -µ(f ) ≥ x ] ≤ N 2 (ϵ, F) { 18 exp [ - (x -2ϵ) 2 2 × 90 2 (nσ 2 m + M 1 (x -2ϵ)/90) ] +C 1 exp [ λ 0 (x -2ϵ) 3 ] + C 2 exp [ - λ 1 (x -2ϵ) 3 ]} , ( 11 ) where C 1 = E ν [ exp 2λ 0 τ A ∑ i=1 F (X i ) ] , C 2 = E A [exp[2λ 1 |F | (B 1 )]] and F is an envelope function for F. Before we proceed with the proof of Theorem 3. ||f || C P (E ′ ) = sup x∈E ′ |f (x)| + sup x 1 ∈E ′ ,x 2 ∈E ′ ( f (x 1 ) -f (x 2 ) d(x 1 , x 2 ) p ) then we have M = sup x∈X F (x) < ∞ as well as L = sup f,g∈F ,f ̸ =g sup z |f (z)-g(z)| ||f -g|| C P (E ′ ) < ∞ so that we can directly control the empirical sum by the obvious inequality sup f,g∈F 1 n n ∑ i=1 f (X i ) -g(X i ) ≤ L||f -g|| C P (E ′ ) . It follows that if we replace the notion of uniform covering number N 2 (ε, F) with respect to the norm ∥.∥ L 2 (Q) by the covering numbers N C p (ε, F) with respect to ||.|| C P (E ′ ) , then the results hold true for any n, provided that N 2 (ε, F) is replaced by N C p ( ε L , F) in the inequality. Proof of Theorem 3.5. We choose functions g 1 , g 2 , • • • , g M , where M = N 2 (ϵ, F) such that min j Q|f -µ(f ) -g j + µ(g 1 )| ≤ ϵ for each f ∈ F , where Q is any discrete probability measure. We also assume that g 1 , g 2 , • • • , g M belong to F and satisfy conditions A1, A2 ′ , A3 ′ . We write f * for the g j , where the minimum is achieved. Our further reasoning is based on the following remarks. Remark 3.7 Let f, g be functions with the expectations µ(f ), µ(g) respectively. Then, ∥f -µ(f ) -g + µ(g)∥ L 2 ≤ ∥f -g∥ L 2 + ∥µ(f ) -µ(g)∥ L 2 ≤ 2∥f -g∥ L 2 . In our reasoning, we will also make use of the following remark. Remark 3.8 Assume that the functions f, g ∈ F and ∥f -g∥ 2,Pn < ϵ. Then, for n large enough (depending only on ϵ), P(f -g) 2 ≤ P n (f -g) 2 + |(P n -P)(f -g) 2 | ≤ 2ϵ 2 since uniformly |(P n -P)(f -g) 2 | ≤ ϵ 2 by the uniform strong law of large numbers for regenerative Markov chains (see Theorem 3.6 from [START_REF] Levental | Uniform limit theorems for Harris recurrent Markov chains[END_REF]). As a consequence, any ϵ-net in L 2 (P n ) is also √ 2ϵ-net in L 2 (P) (see also [START_REF] Kosorok | Introduction to empirical processes and semiparametric inference[END_REF], page 151 for some refinements in the i.i.d. case). Moreover, note that ∃ N such that ∀ n ≥ N ∥g i -g j ∥ 2,P ≤ ϵ and we have ∥g i -g j ∥ 2,Pn -∥g i -g j ∥ 2,P + ∥g i -g j ∥ 2,P ≤ 2ϵ. Next, by the definition of uniform numbers and the Remark 3.8, we obtain P ν [ sup f ∈F 1 n n ∑ i=1 (f (X i ) -µ(f )) ≥ x ] ≤ P ν { sup f ∈F [ 1 n n ∑ i=1 |f (X i ) -µ(f ) -f * (X i ) + µ(f * ) + 1 n n ∑ i=1 |f * (X i ) -µ(f * )| ] ≥ x } ≤ P ν [ max j∈{1,••• ,N 2 (ϵ,F)} 1 n n ∑ i=1 g j (X i ) -µ(g 1 ) ≥ x -2ϵ ] ≤ N 2 (ϵ, F) max j∈{1,••• ,N 2 (ϵ,F)} P ν { 1 n n ∑ i=1 g j (X i ) -µ(g 1 ) ≥ x -2ϵ } . We set the notation that g j = g j -µ(g 1 ). In what follows, our reasoning is analogous as in the proof of Theorem 3.2. Instead of taking any f ∈ F , we work with the functions g j ∈ F. Thus, we consider now the processes Z n (g j ) = ln ∑ i=1 g j (B i ) (12) and S n (g j ) = Z n (g j ) + ∆ n (g j ). Under the assumptions A1, A2 ′ and A3 ′ for g j , we get the analogous to that from Theorem 3.2 Bernstein's bound for Z n (g j ), namely P A [ ln ∑ i=1 g j (B i ) ≥ x -ϵ ] ≤ 18 exp [ - (x -2ϵ) 2 2 × 90 2 (nσ 2 (g 1 ) + M 1 (x -2ϵ)/90) ] . 
(13) We find the upper bound for the remainder term ∆ n (g j ) applying the same reasoning as in Theorem 3.2. Thus, P ν [ τ A ∑ i=1 g j (X i ) ≥ x -2ϵ 3 ] ≤ C 1 exp [ - λ 0 (x -2ϵ) 3 ] (14) and P A   n ∑ i=τ A (ln-1) ḡj (X i ) ≥ x -2ϵ 3   ≤ C 2 exp [ - λ 1 (x -2ϵ) 3 ] , ( 15 ) where C 1 = E ν [ exp λ 0 τ A ∑ i=1 g j (X i ) ] , C 2 = E A [ exp[λ 1 g j (B 1 )] ] . Finally, notice that E ν [ exp λ 0 τ A ∑ i=1 g j (X i ) ] ≤ E ν [ exp 2λ 0 τ A ∑ i=1 F (X i ) ] < ∞ and E A [ exp[λ 1 g j (B 1 )] ] ≤ E A [exp[2λ 1 |F | (B 1 )]] < ∞ and insert it into ( 14) and ( 15) which yields the proof. Below we will formulate a maximal version of Theorem 3.3. Theorem 3.6 Assume that X = (X n ) n∈N is a regenerative positive recurrent Markov chain. Then, under assumptions A1, A2 ′ , A3 ′ , A4 -A5 and for any 0 < ϵ < x and for n large enough and N > 0 we have P ν [ sup f ∈F n ∑ i=1 f (X i ) -µ(f ) ≥ x ] ≤ N 2 (ϵ, F) { 2 exp [ -(x -2ϵ) 2 2 × 3 2 (1 + a) 2 (⌊ n α ⌋ σ 2 (f ) + M 1 3 x-2ϵ 1+a ) ] + 18 exp   -(x -2ϵ) 2 2 × 90 2 (1 + a) 2 ( N √ nσ 2 (f ) + M 1 90 (x-2ϵ) 1+a )   +P ν ( n 1/2 [ l n n - 1 α ] > N ) + C 1 exp [ - λ 0 (x -2ϵ) 3 ] + C 2 exp [ - λ 1 (x -2ϵ) 3 ]} , where C 1 = E ν [ exp 2λ 0 τ A ∑ i=1 F (X i ) ] , C 2 = E A [exp[2λ 1 |F | (B 1 )]] . Proof. The proof is a combination of the proofs of Theorem 3.3 and Theorem 3.5. We deal with the supremum over F the same way as in Theorem 3.5. Then we apply Theorem 3.3. We can obtain even sharper upper bound when class F is uniformly bounded. In the following, we will show that it is possible to get a Hoeffding type inequality and have a stronger control of moments of the sum S n (f ) which is a natural consequence of uniform boundedness assumption imposed on F. A6. The class of functions F is uniformly bounded, i.e. there exists a constant D such that ∀f ∈ F |f | < D. Theorem 3.7 Assume that X = (X n ) n∈N is a regenerative positive recurrent Markov chain. Then, under assumptions A1, A2 ′ , A3 ′ , A5 -A6 and for any 0 < ϵ < x, we have for n large enough P ν [ sup f ∈F n ∑ i=1 f (X i ) -µ(f ) σ(f ) ≥ x ] ≤ N 2 (ϵ, F) { 18 exp [ - (x -2ϵ) 2 2n × 90 2 D 2 ] +C 1 exp [ - λ 0 (x -2ϵ) 3 ] + C 2 exp [ - λ 1 (x -2ϵ) 3 ]} , ( 16 ) where C 1 = E ν exp |2λ 0 τ A D| , C 2 = E A exp |2λ 1 l(B 1 )D| . Proof. The proof bears resemblance to the proof of Theorem 3.5, with few natural modifications which are a consequence of the uniform boundedness of F. Under additional condition A4 we can obtain easily the bound with smaller constants, we follow the analogous way as in Theorem 3.6. General Harris recurrent case It is noteworthy that Theorems 3.2, 3.5, 3.7 are also valid in Harris recurrent case under slightly modified assumptions. It is well known that it is possible to retrieve all regeneration techniques also in Harris case via the Nummelin splitting technique which allows to extend the probabilistic structure of any chain in order to artificially construct a regeneration set. The Nummelin splitting technique relies heavily on the notion of small set. For the clarity of exposition we recall the definition. Definition 3.8 We say that a set S ∈ E is small if there exists a parameter δ > 0, a positive probability measure Φ supported by S and an integer m ∈ N * such that ∀x ∈ S, B ∈ E Π m (x, B) ≥ δ Φ(B), (17) where Π m denotes the m-th iterate of the transition probability Π. We expand the sample space in order to define a sequence (Y n ) n∈N of independent r.v.'s with parameter δ. We define a joint distribution P ν,M of X M = (X n , Y n ) n∈N . 
The construction relies on the mixture representation of Π on S, namely Π(x, B) = δΦ(B) + (1 -δ) Π(x,B)-δΦ(B) 1-δ . It can be retrieved by the following randomization of the transition probability Π each time the chain X visits the set S. If X n ∈ S and • if Y n = 1 (which happens with probability δ ∈ ]0, 1[), then X n+1 is distributed according to the probability measure Φ, • if Y n = 0 (that happens with probability 1-δ), then X n+1 is distributed according to the probability measure (1 -δ) -1 (Π(X n , •) -δΦ(•)). This bivariate Markov chain X M is called the split chain. It takes its values in E × {0, 1} and possesses an atom, namely A = S × {1}. The split chain X M inherits all the stability and communication properties of the chain X. The regenerative blocks of the split chain are i.i.d. (in case m = 1 in ( 17)) (see [START_REF] Meyn | Markov chains and stochastic stability[END_REF] for further details). We will formulate a Bernstein type inequality for unbounded classes of functions in the Harris recurrent case (equivalent of Theorem 3.2). Theorems 3.5 and 3.7 can be reformulated for Harris chains in similar way. We impose the following conditions: AH1. (Bernstein's block moment condition) There exists a positive constant M 1 such that for any p ≥ 2 and for every f ∈ F sup y∈S E y f (B 1 ) p ≤ 1 2 p!σ 2 (f )M p-2 1 . ( 18 ) AH2. (Non-regenerative block exponential moment assumption) There exists a constant λ 0 > 0 such that for every f ∈ F we have E ν [ exp λ 0 ∑ τ S i=1 f (X i ) ] < ∞. AH3. (Exponential block moment assumption) There exists a constant λ 1 > 0 such that for every f ∈ F we have sup y∈S E y [ exp[λ 1 f (B 1 )] ] < ∞. Let sup y∈S E y [τ S ] = α M < ∞. We are ready to formulate a Bernstein type inequality for Harris recurrent Markov chains. Theorem 3.9 Assume that X M is a Harris recurrent, strongly aperiodic Markov chain. Then, under assumptions AH1-AH3, we have P ν [ n ∑ i=1 f (X i ) -µ(f ) ≥ x ] ≤ 18 exp [ - x 2 2 × 90 2 (nσ 2 (f ) + M 1 x/90) ] + C 1 exp [ - λ 0 x 3 ] + C 2 exp [ - λ 1 x 3 ] , (19) where C 1 = E ν [ exp λ 0 τ S ∑ i=1 f (X i ) ] , C 2 = sup y∈S E y [ exp[λ 1 f (B 1 ) ] . The proof of Theorem 3.9 is analogous to the proof of Theorem 3. [ n ∑ i=1 f (X i ) -µ(f ) ≥ x ] ≤ 2 exp [ -x 2 2 × 3 2 (1 + a) 2 (⌊ n α ⌋ σ 2 (f ) + M 1 3 x 1+a ) ] + 18 exp [ -x 2 2 × 90 2 (1 + a) 2 ( N √ nσ 2 (f ) + M 1 90 x 1+a ) ] + P ν [ n 1/2 ( l n n - 1 α ) > N ] + C 1 exp [ - λ 0 x 3 ] + C 2 exp [ - λ 1 x 3 ] , where C 1 = E ν [ exp λ 0 τ S ∑ i=1 f (X i ) ] , C 2 = sup y∈S E y [ exp[λ 1 f (B 1 )] ] . . Remark 3.9 In the Theorem 3.9 we assumed that X M is strongly aperiodic. It is easy, however, to relax this assumption and impose only the aperiodicity condition on Harris chain by using the same trick as in [START_REF] Levental | Uniform limit theorems for Harris recurrent Markov chains[END_REF]. Note that if X M satisfies M(m, S, δ, Φ) for m > 1, then the blocks of data are 1dependent. Denote by S = S ∪ { * }, where { * } is an ideal point which is not in S. Next, we define a pseudo-atom α M = S × {1}. In order to impose only and if 0 < x ≤ √ n(1 -α -1 ), then P ν ( n 1/2 ( l n n - 1 α ) ≥ x ) = P ν ( l n ≥ n α + x √ n ) ≤ P ν ( l n ≥ [ n α + x √ n ]) ≤ P((∆τ 1 -E ν τ A ) + [ n α +x √ n] ∑ i=2 (∆τ i -α) ≤ n -([ n α + x √ n] -1)α -E ν τ A ), where [.] is the integer part. Since n α + x √ n -1 ≤ [ n α + x √ n] ≤ n α + x √ n, we get n -([ n α + x √ n] -1)α -E ν τ A ) ≤ n -( n α + x √ n -2)α -E ν τ A = -αx √ n + 2α -E ν τ A . 
It follows that P ν ( n 1/2 ( ln n -1 α ) ≥ x ) ≤ P ( (∆τ 1 -E ν τ A ) + ∑ [ n α +x √ n] i=2 (∆τ i -α) ≤ -αx √ n + 2α -E ν τ A ) , where [.] is the integer part Now, we can apply any Bennett's or Bernstein's inequality on these centered i.i.d. random variables to get an exponential bound. This can be done since we assumed A4. Note that other bounds (polynomial for instance) can be obtained under appropriate modifications of A4. In our case we get P((∆τ 1 -E ν τ A ) + [ n α +x √ n] ∑ i=2 (∆τ i -α) ≤ -αx √ n + 2α -E ν τ A ) ≤ exp ( - 1 2 (αx √ n -2α + E ν τ 2 A )/S 2 n 1 + (αx √ n -2α + E ν τ A )M 2 /S n ) , where S 2 n = E ν τ 2 A + ([ n α + x √ n] -1)E A τ 2 A . The above bound can be reduced to exp ( - 1 2 (αx √ n -2α) 2 ( E ν τ 2 A + ( n α + x √ n)E A τ 2 A ) + (αx √ n + E ν τ A )M 2 ( E ν τ 2 A + ( n α + x √ n)E A τ 2 A ) 1/2 ) Theorem 2 . 1 ( 21 Hoeffding's inequality) Let X 1 , X 2 , • • • , X n be independent identically distributed random variables with common expectation EX 1 and such that Notice that our bound is a deviation bound in that it holds only for n large enough. This is due to the control of the covering functions Remark 3.6 (under P n ) by a control under P (see Remark 3.8 in the proof ). However, by making additional assumptions on the regularity of the class of functions and by choosing the adequate norm, it is possible to obtain by the same arguments an exponential inequality valid for any n as in Zou, Zhang and Xu (2009) or Cucker and Smale (2002). See also examples of such classes of functions used in statistical learning in this latter. Indeed, if F belongs to a ball of a Hölder space C 5, we indicate that under additional assumptions it is possible to obtain Bernstein type concentration inequality. P (E ′ ) on a compact set E ′ of an Euclidean space endowed with the norm 2. We can obtain a bound with much smaller constants under an extra block moment condition. AH4. (Block length moment assumption) There exists a positive constant M 2 such that for any p ≥ 2 Assume that X M is a Harris recurrent, strongly aperiodic Markov chain. Then, under assumptions AH1-AH4, we have for any x > 0 sup y∈S E y [τ S ] p ≤ p!M p-2 2 E A τ 2 A , E ν [τ S ] p ≤ p!M p-2 2 E ν τ 2 S . . Theorem 3.10 and for N ∈ R P ν Keywords : uniform entropy, exponential inequalities, empirical processes indexed by classes of functions, regenerative Markov chain. Primary Class : 62G09, Secondary Class : 62G20, 60J10 Acknowledgment This research was supported by a public grant as part of the Investissement d'avenir, project reference ANR-11-LABX-0056-LMH. The work was also supported by the Polish National Science Centre NCN ( grant No. UMO2016/23/N/ST1/01355 ) and (partly) by the Ministry of Science and Higher Education. This research has been conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01). aperiodicity in this case it is sufficient to consider two processes {E i } and , for some k ≥ 0 and E i = * . Every function f : S → R will be considered as defined on S with identification f ( * ) = 0 (see also [START_REF] Levental | Uniform limit theorems for Harris recurrent Markov chains[END_REF] for more details concerning those two processes). Then, we prove Bernstein type of inequality similarly as we prove Theorems 3.2 and 3.9 applying all the reasoning to {E i } and {O i } separately, yielding to a similar inequality up to an additional multiplicative constant 2. Appendix Proof of Lemma 3.1. Let τ k be the time of the k-th visit to the atom A (S × {1} in the general case). 
In the following we make use of the argument from [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] and observe that we have for any k ≤ n
38,625
[ "17670" ]
[ "101" ]
01757606
en
[ "sdv" ]
2024/03/05 22:32:13
2018
https://amu.hal.science/hal-01757606/file/PIIS0190962217328785%5B1%5D.pdf
Michael Benzaquen, MD (email: [email protected]), Luca Borradori, MD, Philippe Berbis, MD, Simone Cazzaniga, MS, René Valero, MD, Marie-Aleth Richard, MD, PhD, Laurence Feldmeyer, MD Dipeptidyl peptidase IV inhibitors, a risk factor for bullous pemphigoid: retrospective multicenter case-control study Keywords: bullous pemphigoid, case-control study, diabetes, dipeptidyl peptidase-4 inhibitor, gliptin, risk factor Table footnotes: Boldface indicates statistical significance. CI, confidence interval; DPP4i, dipeptidyl peptidase-4 inhibitor; OR, odds ratio. *Univariate conditional logistic regression analysis. †Multivariable
DPP4is increase levels of incretins, thereby increasing insulin secretion, decreasing glucagon secretion, and improving glycemic control. Sitagliptin was first approved in 2006 by the US Food and Drug Administration, followed by saxagliptin (in 2009), linagliptin (in 2011), and alogliptin (in 2013). Three DPP4is are currently available on the French marketdsitagliptin and vildagliptin (since 2007) and saxagliptin (since 2009)dand 5 are available on the Swiss marketdthe 3 aforementioned DPP4is and linagliptin (since 2011) and alogliptin (since 2013)dboth of which are available only on the Swiss market. They are used alone or in association with metformin in the same tablet. [START_REF] B En E | Bullous pemphigoid and dipeptidyl peptidase IV inhibitors: a case-noncase study in the French Pharmacovigilance Database[END_REF] An increasing number of clinical reports and pharmacovigilance database analyses suggesting an association between DPP4i intake and BP have been published. Nevertheless, this has not been confirmed by a well-designed controlled study. The main objective of our case-control study was therefore to retrospectively evaluate the association between DPP4i treatment and development of BP. The secondary end points were to determine a potential higher association for a specific DPP4i and to evaluate the disease course after DPP4i withdrawal. MATERIALS AND METHODS The investigations were conducted as a retrospective case-control study with a 1:2 design, comparing case patients with BP and diabetes with age-and sexmatched controls with type 2 diabetes from January Data collection for cases and controls The study was conducted in 3 university dermatologic departments (Bern, Marseille Nord, and Marseille La Timone). By using the database of the respective histopathology departments and clinical records, we identified all patients with BP diagnosed for the first time between January 1, 2014, and July 31, 2016. The diagnosis of BP was based on the following criteria developed by the French Bullous Study Group 14 : consistent clinical features, compatible histopathology findings, positive direct immunofluorescence studies, and in some cases, positive indirect immunofluorescence microscopy studies and/or positive enzyme-linked immunosorbent assayeBP180/ enzyme-linked immunosorbent assayeBP230 (MBL International, Japan). Among these patients with BP, we identified those having type 2 diabetes. For these patients, we recorded age, sex, date of BP diagnosis, treatment of BP (with topical steroids, systemic corticosteroids, immune suppressors, or other treatments such as doxycycline or dapsone), evolution of BP (complete remission, partial remission, relapse, or death), comorbidities (including rheumatic, neurologic, cardiovascular, or digestive diseases and neoplasia), treatment with DPP4is, and other cotreatments (including diuretics, antibiotics, neuroleptics, nonsteroidal anti-inflammatory drugs, and antihypertensive drugs). If a DPP4i was mentioned in the medical record, we examined the type of DPP4i, the chronology between BP diagnosis and onset of the DPP4i treatment, and the evolution after DPP4i withdrawal. Patients who had other autoimmune bullous diseases or did not otherwise fulfill the inclusion criteria were not included. The control patients were obtained between January 1, 2014, and July 31, 2016, from the endocrinology departments of the same hospitals. 
For each case, 2 control patients with diabetes visiting the endocrinology department in the same 6-month period and matched to the case by sex and quinquennium of age were then randomly selected from all available patients satisfying the matching criteria. The patient files were reviewed for treatment of diabetes (specifically, the use of DPP4is), other cotreatments, and comorbidities. For the controls, we did not include patients with any chronic skin diseases, including bullous dermatosis, at the time of the study. We then compared exposure to DPP4is between case patients and controls with adjustment for potential confounders. Statistical analysis Descriptive data were presented as numbers with percentages or means with standard deviations (SDs) for categoric and continuous variables, respectively. The Mann-Whitney U test was used to assess possible residual differences in the distribution of age between case patients and controls. Differences between case patients and matched controls across different levels of other factors were assessed by means of univariate conditional logistic regression analysis. Factors associated with DPP4i use were also investigated by means of the Pearson χ² test or Fisher exact test, where required. All factors with a P value less than .10 in the univariate case-control analysis and associated with DPP4i use at the univariate level (P < .10) were evaluated as possible confounding factors in multivariate conditional logistic regression models with a backward stepwise selection algorithm. The factors retained for adjustment were neurologic and metabolic/endocrine comorbidities, as well as other dermatologic conditions unrelated to BP. The effect of DPP4i use on BP onset in diabetic patients was expressed in terms of an odds ratio (OR) along with its 95% confidence interval (CI) and P value. A stratified analysis by possible effect modifiers, including sex and age group, was also performed. All tests were considered statistically significant at a P value less than .05. Before starting the study, we planned to recruit at least 183 patients (61 case patients and 122 controls) to detect an OR higher than 2.5 in a 1:2 matched case-control design, assuming 30% exposure to DPP4i use in the control group (α = 0.05, β = 0.20, multiple correlation coefficients <0.2). Analyses were carried out with SPSS software (version 20.0, IBM Corp, Armonk, NY). RESULTS From January 2014 to July 2016, BP was diagnosed in 165 patients (61 in Bern, 47 in Marseille Nord, and 57 in Marseille La Timone). Among these, 61 had diabetes (22 in Bern, 14 in Marseille Nord, and 25 in Marseille La Timone). We collected 2 matched controls for each case patient, resulting in a total of 122 controls. Of the case patients, 50.8% were female, and the mean age was 79.1 ± 7.0 years. The main comorbidities of cases were cardiovascular (86.9%), neurologic (52.5%), metabolic and endocrine diseases other than diabetes (39.3%), and uronephrologic diseases (39.3%) (Table I). In our 3 investigational centers, we collected 28 patients with diabetes and BP who were taking a DPP4i. DPP4is were used more frequently in case patients with BP (45.9%) than in controls (18%), and the difference was statistically significant (P < .001). Of the specific DPP4is, vildagliptin was more common in case patients (23%) than in controls (4.1%).
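As an aside, the unadjusted 2 × 2 calculation behind these exposure proportions can be made explicit with a short sketch. This crude, unmatched odds ratio (computed from the 28/61 exposed cases and roughly 22/122 exposed controls, i.e., 18%) is shown only for illustration; it is not the matched conditional logistic regression estimate reported below.

```python
import math

def odds_ratio_2x2(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Crude (unmatched) odds ratio with a Woolf 95% confidence interval."""
    a, b, c, d = exposed_cases, unexposed_cases, exposed_controls, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method
    low = math.exp(math.log(or_) - 1.96 * se_log_or)
    high = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (low, high)

# 28 exposed / 33 unexposed cases; about 22 exposed / 100 unexposed controls
print(odds_ratio_2x2(28, 33, 22, 100))   # roughly OR 3.9, 95% CI about 1.9-7.6
```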
For the other cotreatments, there was no statistical difference between case patients and controls, except for the use of antihistamines (P < .001). There were no differences in other antidiabetic medications, including metformin, between case patients and controls (P = .08) (Table II). All patients with BP were treated with high-potency topical steroids as first-line treatment. Systemic corticosteroids were used in half of them (50.8%), immunosuppressive agents in 32.8%, and other treatments such as doxycycline or dapsone in 34.4%. With treatment, 37.7% went into complete remission and 42.6% went into partial remission. Finally, there were no differences in treatment between the patients with diabetes and BP who had taken a DPP4i and the patients with diabetes and BP who had not taken a DPP4i (data not shown), an observation suggesting that presentation and initial severity of BP in these 2 groups were similar. Abbreviations used: BP: bullous pemphigoid CI: confidence interval DPP4i: dipeptidyl peptidase-4 inhibitor OR: odds ratio SD: standard deviation DPP4is and BP The univariate analysis of the association between DPP4i use and BP in diabetic patients yielded an OR of 3.45 (95% CI, 1.76-6.77; P < .001). After adjustment for possible confounding factors associated with BP onset and DPP4i use in multivariate analysis, the OR was 2.64 (95% CI, 1.19-5.85; P = .02) (Table III). A more detailed analysis of DPP4i use revealed a higher association for vildagliptin, with a crude OR of 7.23 (95% CI, 2.44-21.40; P = .001) and an adjusted OR of 3.57 (95% CI, 1.07-11.84; P = .04). The study was underpowered to detect differences between other DPP4is, with linagliptin and alogliptin being only used in the Swiss cases. Sex-stratified analysis indicated that the effect of a DPP4i on BP onset was higher in males (adjusted OR, 4.36; 95% CI, 1.38-13.83; P = .01) than in females (adjusted OR, 1.64; 95% CI, 0.53-5.11; P = .39). Age group-stratified analyses showed a stronger association for patients age 80 years or older, with an adjusted OR of 5.31 (95% CI, 1.60-17.62; P = .006). Clinical course of patients with BP under treatment with a DPP4i In our 3 centers, we collected a total of 28 patients with diabetes who developed BP under exposure to a DPP4i. The duration of DPP4i use before onset of BP ranged from 10 days to 3 years (median, 8.2 months). Drug withdrawal was performed in 19 patients upon suspected DPP4i-associated BP. Complete (11 of 19 [58%]) or partial (7 of 19 [37%]) remission with some mild persistent disease was obtained for all patients but 1 (duration of follow-up, 3-30 months; median, 16.4 months). First-line treatment was high-potency topical steroids and systemic corticosteroids in severe or refractory cases followed by a standard tapering schedule. [START_REF] Feliciani | Management of bullous pemphigoid: the European Dermatology Forum consensus in collaboration with the European Academy of Dermatology and Venereology[END_REF][START_REF] Joly | A comparison of oral and topical corticosteroids in patients with bullous pemphigoid[END_REF] No further therapy was necessary in these patients after DPP4i withdrawal to obtain BP remission. For 1 patient, sitagliptin was initially stopped, leading to a partial remission, but its reintroduction combined with metformin led to a relapse of the BP. Definitive discontinuation of sitagliptin and its replacement by repaglinide resulted in a partial remission of BP with 12-month follow-up.
The clinical outcome in the 9 patients in whom DPP4is were not stopped was unfavorable. There were 3 deaths of unknown causes (33%), 1 relapse (11%), 4 partial remissions (45%), and 1 complete remission (11%). DISCUSSION Our study demonstrates that DPP4is are associated with an increased risk for development of BP, with an adjusted OR of 2.64. Association with vildagliptin was significantly higher than that with other DPP4is, with an adjusted OR of 3.57. Our findings further indicate that the rate of DPP4i intake in patients with BP is higher both in male patients and in patients older than 80 years. Finally, DPP4i withdrawal seems to have a favorable impact on the outcome of BP in patients with diabetes, as 95% of them went into remission after management with first-line therapeutic options (ie, topical and sometimes systemic corticosteroids). An increasing number of reports have suggested that DPP4is trigger BP. Fourteen of the 19 BP cases described (74%) appeared to be related to vildagliptin intake. The median age of the affected patients was 72.5 years, with an almost identical number of males and females. [START_REF] Garc Ia | Dipeptidyl peptidase-IV inhibitors induced bullous pemphigoid: a case report and analysis of cases reported in the European pharmacovigilance database[END_REF][START_REF] Keseroglu | A case of bullous pemphigoid induced by vildagliptin[END_REF][START_REF] Haber | Bullous pemphigoid associated with linagliptin treatment[END_REF][START_REF] Pasmatzi | Dipeptidyl peptidase-4 inhibitors cause bullous pemphigoid in diabetic patients: report of two cases[END_REF][START_REF] Skandalis | Drug-induced bullous pemphigoid in diabetes mellitus patients receiving dipeptidyl peptidase-IV inhibitors plus metformin[END_REF][START_REF] Aouidad | A case report of bullous pemphigoid induced by dipeptidyl peptidase-4 inhibitors[END_REF][START_REF] Attaway | Bullous pemphigoid associated with dipeptidyl peptidase IV inhibitors. A case report and review of literature[END_REF][START_REF] B En E | Bullous pemphigoid induced by vildagliptin: a report of three cases[END_REF][START_REF] Mendonc ¸a | Three cases of bullous pemphigoid associated with dipeptidyl peptidase-4 inhibitorsdone due to linagliptin[END_REF] In our study, among the 28 diabetic patients developing BP under DPP4i exposure, males were more affected (56.7%) and the median age was 80 years. Garcia et al 5 identified 170 cases of BP in patients taking a DPP4i in the EudraVigilance database, suggesting that the intake of DPP4is was more frequently associated with development of BP when compared with that of other drugs. In the latter, a disproportionally high number of cases of vildagliptin use were found. A French case-noncase study recording all spontaneous reports of DPP4i-related BP in the National Pharmacovigilance Database between April 2008 and August 2014 also provided evidence supporting an increased risk for development of BP associated with DPP4i exposure, especially vildagliptin. [START_REF] B En E | Bullous pemphigoid and dipeptidyl peptidase IV inhibitors: a case-noncase study in the French Pharmacovigilance Database[END_REF] Our present study confirms that the association with vildagliptin is stronger than that with the other DPP4is. This cannot be explained by an overprescription of vildagliptin compared with prescription of other DPP4is. In our control group, sitagliptin was the most prescribed DPP4i, with 14 diabetic patients (11.5%), whereas only 5 patients were treated by vildagliptin (4%). Increased prescribing of sitagliptin was confirmed by an analysis of drug sales in France published by the French National Agency for Medicines and Health Products Safety in 2014. In this survey, sitagliptin was the most prescribed DPP4i and the 24th highest-earning drug in 2013, whereas vildagliptin was not ranked.
A recent retrospective study suggests that DPP4i-associated BP is frequently noninflammatory or pauci-inflammatory and characterized by small blisters, mild erythema, and a limited skin distribution. The latter is potentially related to a distinct reactivity profile of autoantibodies to BP180. [START_REF] Izumi | Autoantibody profile differentiates between inflammatory and noninflammatory bullous pemphigoid[END_REF] Although in our retrospective evaluation, there was no apparent difference in clinical presentation and initial management between patients with diabetes and BP who had been treated with DPP4i and patients with diabetes and BP who had not been treated with DPP4i (data not shown), prospective studies are required to address the question of whether BP associated with the intake of a DPP4i has unique clinical and immunologic features. The pathophysiologic mechanisms linking DPP4i intake and BP development remain unclear. DPP4is could induce BP de novo or accelerate the development of BP in susceptible individuals. Many cell types, including keratinocytes, T cells, and endothelial cells, constitutionally express DPP4. DPP4 inhibition could enhance the activity of proinflammatory chemokines, such as eotaxin, promoting eosinophil activation in the skin, tissue damage, and blister formation. [START_REF] Forssmann | Inhibition of CD26/dipeptidyl peptidase IV enhances CCL11/ eotaxin-mediated recruitment of eosinophils in vivo[END_REF] Thielitz et al reported that DPP4is have an antifibrogenic activity by decreasing expression of transforming growth factor-β1 and secretion of procollagen type I. [START_REF] Thielitz | Inhibitors of dipeptidyl peptidase IV-like activity mediate antifibrotic effects in normal and keloid-derived skin fibroblasts[END_REF] All these effects could be higher for vildagliptin than for other DPP4is because of molecular differences. Furthermore, vildagliptin administration in monkeys resulted in dose-dependent and reversible skin effects, such as blister formation, peeling, and erosions. [START_REF] Hoffmann | Vascular origin of vildagliptin-induced skin effects in cynomolgus monkeys: pathomechanistic role of peripheral sympathetic system and neuropeptide Y[END_REF] Finally and more importantly, DPP4 is a cell surface plasminogen receptor that is able to activate plasminogen, leading to plasmin formation. Plasmin is a major serine protease that is known to cleave BP180 within the juxtamembranous extracellular noncollagenous 16A domain. Hence, the inhibition of plasmin by a DPP4i may change the proper cleavage of BP180, thereby affecting its antigenicity and its function. [START_REF] Izumi | Autoantibody profile differentiates between inflammatory and noninflammatory bullous pemphigoid[END_REF]
Our study has some limitations: we focused the analysis on DPP4i intake, whereas the potential isolated effect of metformin was not analyzed. Nevertheless, after DPP4i withdrawal, metformin was either continued (in those cases in which it was initially combined with a DPP4i) or newly started in 8 of our patients with BP. Among the latter, we observed 5 complete and 3 partial remissions on follow-up. In addition, metformin intake has not been implicated thus far in the development of BP. On the basis of these observations, it is unlikely that metformin plays a triggering role, but specific studies should be designed to examine the effect of metformin on its own. Finally, we included patients with BP identified by searching our histopathology databases. It is therefore possible that we missed a number of BP cases in which either the term pemphigoid was not used in the corresponding histopathologic report or BP was not clinically and/or histopathologically considered. In conclusion, our findings in a case-control study confirm that DPP4is are associated with an increased risk for development of BP in patients with diabetes. Therefore, the prescription of a DPP4i, especially vildagliptin, should potentially be limited or avoided in high-risk patients, including males and those age 80 years or older. A larger prospective study might be useful to confirm our findings.
Table I. Demographics and comorbidities of selected cases and controls
Demographic characteristic/comorbidity | Controls N (%) | Cases N (%) | Total N (%) | P*
Sex: Male | 60 (49.2%) | 30 (49.2%) | 90 (49.2%) | -
Sex: Female | 62 (50.8%) | 31 (50.8%) | 93 (50.8%) |
Age, y (mean ± SD) | 79.3 ± 7.0 | 78.7 ± 7.2 | 79.1 ± 7.0 | .63
Age <75 | 30 (24.6%) | 17 (27.9%) | 47 (25.7%) |
Age 75-84 | 62 (50.8%) | 29 (47.5%) | 91 (49.7%) |
Age ≥85 | 30 (24.6%) | 15 (24.6%) | 45 (24.6%) |
Comorbidities: Neurologic | 47 (38.5%) | 32 (52.5%) | 79 (43.2%) | .06
Cardiovascular | 108 (88.5%) | 53 (86.9%) | 161 (88.0%) | .75
Rheumatic | 36 (29.5%) | 11 (18.0%) | 47 (25.7%) | .10
Digestive | 34 (27.9%) | 19 (31.1%) | 53 (29.0%) | .65
Metabolic and endocrine† | 85 (69.7%) | 24 (39.3%) | 109 (59.6%) | <.001
Pulmonary | 27 (22.1%) | 17 (27.9%) | 44 (24.0%) | .41
Uronephrologic | 45 (36.9%) | 24 (39.3%) | 69 (37.7%) | .74
Neoplasia | 29 (23.8%) | 12 (19.7%) | 41 (22.4%) | .49
Dermatologic‡ | 5 (4.1%) | 12 (19.7%) | 17 (9.3%) | .03
Other | 35 (28.7%) | 23 (37.7%) | 58 (31.7%) | .18
SD, Standard deviation. *The Mann-Whitney U test was used to assess possible residual differences in the distribution of age between cases and age- and sex-matched controls. Differences between cases and matched controls across different levels of other factors were assessed by means of univariate conditional logistic regression analysis. Boldface indicates statistical significance. †Except for diabetes. ‡Except for BP.
Table II. DPP4i use and other cotreatments in selected cases and controls
Treatment | Controls N (%) | Cases N (%) | Total N (%) | P*
DPP4i | | | | <.01
  None | 100 (82.0%) | 33 (54.1%) | 133 (72.7%) |
  Vildagliptin | 5 (4.1%) | 14 (23.0%) | 19 (10.4%) |
  Sitagliptin | 14 (11.5%) | 10 (16.4%) | 24 (13.1%) |
  Linagliptin | 3 (2.5%) | 3 (4.9%) | 6 (3.3%) |
  Saxagliptin | 0 (0.0%) | 1 (1.6%) | 1 (0.5%) |
Cotreatment: Diuretics | 69 (56.6%) | 28 (45.9%) | 97 (53.0%) | .17
Antihypertensives/antiarrhythmic agents | 101 (82.8%) | 47 (77.0%) | 148 (80.9%) | .36
Neuroleptics | 46 (37.7%) | 26 (42.6%) | 72 (39.3%) | .52
Antiaggregants/anticoagulants | 85 (69.7%) | 45 (73.8%) | 130 (71.0%) | .56
NSAIDs | 12 (9.8%) | 0 (0.0%) | 12 (6.6%) | .14
Analgesics | 22 (18.0%) | 12 (19.7%) | 34 (18.6%) | .79
Statins | 71 (58.2%) | 31 (50.8%) | 102 (55.7%) | .34
Antihistamines | 5 (4.1%) | 19 (31.1%) | 24 (13.1%) | <.001
Antidiabetics† | 122 (100.0%) | 51 (83.6%) | 173 (94.5%) | .08
Endocrine or metabolic treatment‡ | 45 (36.9%) | 27 (44.3%) | 72 (39.3%) | .32
Proton pump inhibitors | 59 (48.4%) | 28 (45.9%) | 87 (47.5%) | .75
Others | 50 (41.0%) | 23 (37.7%) | 73 (39.9%) | .67
DPP4i, Dipeptidyl peptidase-4 inhibitor; NSAID, nonsteroidal anti-inflammatory drug. *Boldface indicates statistical significance. †Except for DPP4i. ‡Except for diabetes.
Table III. Univariate and multivariate analysis of the association between DPP4i use and BP in patients with diabetes, overall and in strata of sex and age group
Strata / DPP4i use | Controls N (%) | Cases N (%) | Univariate analysis* OR (95% CI), P | Multivariate analysis† OR (95% CI), P
Overall: No | 100 (82.0%) | 33 (54.1%) | 1 | 1
Overall: Yes | 22 (18.0%) | 28 (45.9%) | 3.45 (1.76-6.77), <.001 | 2.64 (1.19-5.85), .02
Overall (detailed): No | 100 (82.0%) | 33 (54.1%) | 1 | 1
  Vildagliptin | 5 (4.1%) | 14 (23.0%) | 7.23 (2.44-21.40), <.001 | 3.57 (1.07-11.84), .04
  Sitagliptin | 14 (11.5%) | 10 (16.4%) | 1.82 (0.73-4.54), .20 | 2.13 (0.77-5.89), .15
  Linagliptin/saxagliptin | 3 (2.5%) | 4 (6.6%) | 5.10 (0.98-26.62), .053 | 2.90 (0.47-17.74), .25
Males: No | 51 (85.0%) | 13 (43.3%) | 1 | 1
Males: Yes | 9 (15.0%) | 17 (56.7%) | 5.85 (2.13-16.08), .001 | 4.36 (1.38-13.83), .01
Females: No | 49 (79.0%) | 20 (64.5%) | 1 | 1
Females: Yes | 13 (21.0%) | 11 (35.5%) | 2.00 (0.78-5.15), .15 | 1.64 (0.53-5.11), .39
Age <80 y: No | 49 (79.0%) | 18 (56.2%) | 1 | 1
Age <80 y: Yes | 13 (21.0%) | 14 (43.8%) | 2.47 (1.00-6.13), .05 | 1.53 (0.52-4.52), .44
Age ≥80 y: No | 51 (85.0%) | 15 (51.7%) | 1 | 1
Age ≥80 y: Yes | 9 (15.0%) | 14 (48.3%) | 4.50 (1.58-12.77), .005 | 5.31 (1.60-17.62), .006
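As a quick arithmetic check on the exposure figures reported in Table III (22/122 controls and 28/61 cases exposed to a DPP4i), the short Python sketch below computes a crude, unmatched odds ratio with a Woolf confidence interval. It is only an illustration of the calculation: the ORs reported in the study come from conditional logistic regression on the matched sets, so the values differ slightly, and the function name is ours, not part of the study's analysis code.

```python
from math import exp, log, sqrt

def crude_odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls, z=1.96):
    """Crude odds ratio with a Woolf (log-normal) 95% confidence interval.

    Ignores the 1:2 matching used in the study; the paper's estimates come
    from conditional logistic regression, so this is only an order-of-magnitude
    check, not a reproduction of Table III.
    """
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se = sqrt(1 / exposed_cases + 1 / unexposed_cases + 1 / exposed_controls + 1 / unexposed_controls)
    lower, upper = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, (lower, upper)

# Exposure counts taken from Table III (overall stratum):
# cases: 28 exposed / 33 unexposed; controls: 22 exposed / 100 unexposed.
print(crude_odds_ratio(28, 33, 22, 100))
# ~3.9 (95% CI ~1.9-7.6), in the same range as the matched estimate of 3.45 (1.76-6.77).
```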
26,028
[ "184703", "923363", "762681" ]
[ "301315", "236940", "301315", "236940", "417872", "180118", "527021", "259104", "46222", "236940" ]
01757616
en
[ "sdv" ]
2024/03/05 22:32:13
2017
https://amu.hal.science/hal-01757616/file/mody.pdf
Intellectual disability in patients with MODY due to hepatocyte nuclear factor 1B (HNF1B) molecular defects Research design and methods The study population consisted of 107 adult patients with diabetes in whom a molecular abnormality of HNF1B had been identified, as described elsewhere [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF]. The phenotype of the HNF1B-related syndrome was assessed through a questionnaire, filled in by referring physicians, that comprised clinical, biological and morphological items. ID was defined as limitations in intellectual functioning and in adapting to environmental demands, beginning early in life, and was appraised by the need for educational support, protected work or assistance in daily activities, and by the social skills of the patients [START_REF]Diagnostic and statistical manual of mental disorders (DSM 5)[END_REF]. Learning disability (LD) was defined as persistent difficulties in reading, writing, or mathematical-reasoning skills [START_REF]Diagnostic and statistical manual of mental disorders (DSM 5)[END_REF]. Of these 107 patients, 14 had access to detailed evaluations by a geneticist and a neurologist who were blinded to the patient's HNF1B genotype. In case of clinical suspicion of cognitive defects, the evaluation was completed by neuropsychological testing, including the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III), and by further testing of executive functions (Trail-Making Test), memory [84-item Battery of Memory Efficiency (BEM 84)] and visuospatial function [Rey Complex Figure Test and Recognition Trial (RCFT)]. In patients presenting with ID or LD, single-nucleotide polymorphism (SNP) array analyses were performed, using the HumanCytoSNP-12 v2.1 array scanner and assay (Illumina, Inc., San Diego, CA, USA), after excluding fragile X syndrome. Results were analyzed with GenomeStudio software, version 3.1.6 (Illumina). All patients gave their written informed consent. The frequency of ID was assessed in two control groups of adult patients with diabetes followed in our department: 339 consecutive patients with type 1 diabetes (T1D); and 227 patients presenting with phenotypes suggestive of MODY, including 31 glucokinase (GCK)-MODY, 42 HNF1A-MODY, 13 HNF4A-MODY, five ATP-binding cassette subfamily C member 8 (ABCC8)-MODY, two insulin (INS)-MODY and 134 genetically screened patients with no identifiable molecular aetiology (referred to as MODY-X). Results are reported as means ± SD or as frequencies (%). Comparisons between groups were made by non-parametric tests or by Fisher's exact test. Introduction Intellectual disability (ID) is characterized by impairments of general mental abilities that have an impact on adaptive functioning in conceptual, social and practical areas, and which begin in the developmental period [START_REF]Diagnostic and statistical manual of mental disorders (DSM 5)[END_REF]. It affects 1-3% of the general population [START_REF] Maulik | Prevalence of intellectual disability: a meta-analysis of population-based studies[END_REF]. Chromosomal aberrations or mutations in almost 500 genes have been associated with ID. Among these genes, several are also involved in diseases with phenotypes that may overlap with ID, such as autism spectrum disorders (ASD) and schizophrenia.
Molecular defects of the hepatocyte nuclear factor 1B (HNF1B) have been associated with a syndrome that includes maturity-onset diabetes of the young 5 (MODY5 or HNF1B-MODY), kidney structural abnormalities, progressive renal failure, pancreatic hypoplasia and exocrine dysfunction, abnormal liver tests and genital tract abnormalities [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF]. In half the cases, the HNF1B-related syndrome is due to HNF1B heterozygous mutations whereas, in the others, it is associated with HNF1B whole-gene deletion [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF]. In all cases examined thus far, the latter results from a 17q12 deletion of 1.4-2.1 Mb, encompassing 20 genes including HNF1B [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF][START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF][START_REF] Laffargue | Towards a new point of view on the phenotype of patients with a 17q12 microdeletion syndrome[END_REF][START_REF] Clissold | Chromosome 17q12 microdeletions but not intragenic HNF1B mutations link developmental kidney disease and psychiatric disorder[END_REF]. Autism and/or ID have been described in patients with various HNF1B -related phenotypes, such as HNF1B-MODY [START_REF] Raile | Expanded clinical spectrum in hepatocyte nuclear factor 1bmaturity-onset diabetes of the young[END_REF], cystic kidney disease [START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF][START_REF] Nagamani | Clinical spectrum associated with recurrent genomic rearrangements in chromosome 17q12[END_REF][START_REF] Dixit | 17q12 microdeletion syndrome: three patients illustrating the phenotypic spectrum[END_REF] and müllerian aplasia [START_REF] Cheroki | Genomic imbalances associated with müllerian aplasia[END_REF], always in the context of 17q12 deletion. On the other hand, in a large population study, the 17q12 deletion was recognized as a strong risk factor for ID, ASD and schizophrenia, being identified in 1/1000 of children referred for those conditions [START_REF] Moreno-De-Luca | Deletion 17q12 is a recurrent copy number variant that confers high risk of autism and schizophrenia[END_REF]. Whether the neurocognitive phenotypes associated with the 17q12 deletion result from deletion of HNF1B itself or another deleted gene, or from a contiguous gene syndrome, remains unknown [START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF][START_REF] Nagamani | Clinical spectrum associated with recurrent genomic rearrangements in chromosome 17q12[END_REF][START_REF] Moreno-De-Luca | Deletion 17q12 is a recurrent copy number variant that confers high risk of autism and schizophrenia[END_REF]. To investigate the role of HNF1B abnormalities in the occurrence of cognitive defects, the frequency of ID was assessed according to the presence of HNF1B mutations or deletion in a large cohort of adult patients with HNF1B-MODY. 
Results The main characteristics of the 107 patients are shown in Table 1. ID was reported in 15 patients (14 probands) (14%). LD was noticed in a further nine patients (Table S1; see supplementary material associated with the article online). Overall, cognitive defects were thus observed in 24/107 patients (22.4%). Common causes of ID were ruled out by the search for fragile X syndrome and SNP array analyses, which excluded other large genomic deletions in all tested patients. The frequency of ID was significantly higher in HNF1B-MODY patients than in those with T1D [8/339 (2.4%), OR: 5.9, 95% CI: 2.6-13.6; P < 10⁻⁴], or in those with other monogenic diabetes or MODY-X [6/227 (2.6%), OR: 6.0, 95% CI: 2.3-16.0; P = 0.0002]. HNF1B-MODY patients with or without ID were similar as regards gender, age at diabetes diagnosis, duration and treatment of diabetes, frequency and severity of renal disease, frequency of pancreatic morphological abnormalities and liver-test abnormalities, and frequency of arterial hypertension and dyslipidaemia (Table 1). HbA1c levels at the time of the study were higher in the patients with ID (9.4 ± 3.0% vs 7.3 ± 1.4%; P = 0.005). Of the 15 patients presenting with ID, six had HNF1B coding mutations (three missense, two splicing defects, one deletion of exon 5) and nine had a whole-gene deletion (Table S1). Thus, the frequency of ID was not statistically different between patients with HNF1B mutation (11%) or deletion (17%; P = 0.42; Table 1). Discussion Our study showed that ID affects 14% of adult patients with HNF1B-MODY, which is higher than the 1-3% reported in the general population [START_REF] Maulik | Prevalence of intellectual disability: a meta-analysis of population-based studies[END_REF] and than the 2.4-2.6% observed in our two control groups of adult patients with other diabetes subtypes. The main characteristics of the HNF1B-MODY patients with ID did not differ from those without ID, except for the poorer glycaemic control observed in the former. In patients with HNF1B-related syndrome, the occurrence of cognitive defects has been noted almost exclusively in paediatric series. ID/ASD has been reported in two adolescents with renal cystic disease, liver-test abnormalities and diabetes [START_REF] Raile | Expanded clinical spectrum in hepatocyte nuclear factor 1bmaturity-onset diabetes of the young[END_REF]; developmental delay and/or learning difficulties were quoted in three young patients presenting with multicystic renal disease [START_REF] Nagamani | Clinical spectrum associated with recurrent genomic rearrangements in chromosome 17q12[END_REF]; and speech delay in two children with renal cystic disease [START_REF] Dixit | 17q12 microdeletion syndrome: three patients illustrating the phenotypic spectrum[END_REF]. In a series of 86 children with HNF1B-related renal disease, three cases of ASD were noted [START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF]. The systematic evaluation of 28 children with HNF1B-associated kidney disease also suggested an increased risk of neuropsychological disorders in those harbouring the 17q12 deletion [START_REF] Laffargue | Towards a new point of view on the phenotype of patients with a 17q12 microdeletion syndrome[END_REF].
A recent study performed in a UK cohort reported the presence of neurodevelopmental disorders in eight out of 20 patients with renal abnormalities or diabetes due to HNF1B whole-gene deletion [START_REF] Clissold | Chromosome 17q12 microdeletions but not intragenic HNF1B mutations link developmental kidney disease and psychiatric disorder[END_REF]. In all these reports, cognitive defects were observed in the context of the 17q12 deletion. Conversely, the 17q12 deletion has also been reported in children evaluated for ID, beyond the setting of HNF1B-related syndrome. Indeed, an association between the deletion and cognitive defects has been confirmed in paediatric cases with no renal abnormalities [START_REF] Roberts | Clinical report of a 17q12 microdeletion with additionally unreported clinical features[END_REF][START_REF] Palumbo | Variable phenotype in 17q12 microdeletions: clinical and molecular characterization of a new case[END_REF]. In one population study, the 17q12 deletion was detected in 18/15,749 children referred for ASD and/or ID, but in none of the controls [START_REF] Moreno-De-Luca | Deletion 17q12 is a recurrent copy number variant that confers high risk of autism and schizophrenia[END_REF]. However, detailed phenotypes, available for nine children, were suggestive of the HNF1B-related syndrome, as all but one showed multicystic renal disease and/or kidney morphological abnormalities, and one had diabetes. Altogether, these observations strongly suggest that cognitive defects are part of the phenotype associated with the 17q12 deletion. Whether cognitive defects may result from molecular alterations of HNF1B itself remains unsolved. Learning difficulties have been reported in two patients with HNF1B frameshift mutations: one was a man with polycystic kidney disease [START_REF] Bingham | Mutations in the hepatocyte nuclear factor-1b gene are associated with familial hypoplastic glomerulocystic kidney disease[END_REF]; the other was a woman with renal disease, diabetes, and livertest and genitaltract abnormalities [START_REF] Shihara | Identification of a new case of hepatocyte nuclear factor-1beta mutation with highly varied phenotypes[END_REF]. ID has also been reported in two patients with HNF1B-kidney disease due to point mutations [START_REF] Faguer | Diagnosis, management, and prognosis of HNF1B nephropathy in adulthood[END_REF]. However, in these four patients, a search for other causes of cognitive defects was not performed. In the above-mentioned UK study, no neurodevelopmental disorders were reported in 18 patients with intragenic HNF1B mutations [START_REF] Clissold | Chromosome 17q12 microdeletions but not intragenic HNF1B mutations link developmental kidney disease and psychiatric disorder[END_REF]. Conversely, in our study, ID was observed in 6/54 patients (11%) with an HNF1B point mutation, a frequency three times greater than in the general population, and common causes of ID were ruled out in four of them. These discrepancies might be explained by the small number of patients (n = 18) with HNF1B mutations in the UK study, and by the fact that neurocognitive phenotypes might be milder in patients with mutations. Thus, our observations may suggest the involvement of HNF1B defects in the occurrence of cognitive defects in patients with HNF1B-MODY. The links between HNF1B molecular abnormalities and intellectual developmental disorders remain elusive. 
Nevertheless, it should be noted that HNF1B is one of the evolutionarily conserved genes involved in the hindbrain development of zebrafish and mice [START_REF] Makki | Identification of novel Hoxa1 downstream targets regulating hindbrain, neural crest and inner ear development[END_REF]. However, the role of HNF1B in the human brain has yet to be established. In our study, because of geographical remoteness, only a small number of patients had access to detailed neurological evaluation. However, the absence of selection bias is supported by the similar spectrum of HNF1B-related syndrome in patients evaluated by either examination or questionnaire (Table S2; see supplementary material associated with the article online). Moreover, the accuracy of the diagnosis made by referring physicians-ID vs no ID-was confirmed in all patients who underwent neurological evaluations. Conclusion ID is more frequent in adults with HNF1B-MODY than in the general population or in patients with other diabetes subtypes. Moreover, it may affect patients with HNF1B point mutations as well as those with 17q12 deletion. Further studies are needed to refine the cognitive phenotypes of HNF1B-related syndrome and to precisely define the role of HNF1B itself in their occurrence. Table 1 1 Main characteristics of 107 HNF1B-MODY patients according to the presence (+) or absence (-) of intellectual disability (ID). Total ID+ ID- P a Values are expressed as n or as mean ± SD. a ID+ vs ID-. b Estimated glomerular filtration rate <60 mL/min/1.73 m 2 [Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula], or dialysis or renal transplantation. p CH de Valenciennes, 59322 France q CHU Pellegrin, 33076 Bordeaux, France r CHU Godinne, UCL Namur, 5530 Belgium s CHU Tenon, 75020 Paris, France t CH de Roanne, 42300 France u CH Belle-Isle, 57045 Metz, France v CH Saint-Joseph, 75014 Paris, France w CH de Lannion-Trestel, 22300 France x CH Louis Pasteur, 28630 Chartres, France y CH Emile Muller, 68100 Mulhouse, France z CH Saint-Philibert, 59160 Lomme, France aa CH Laënnec, 60100 Creil, France bb CHU de Caen, 14003 France cc CH de Compiègnes, 60200 France dd CHU de Strasbourg, 67000 France ee CHU de Poitiers, 86021 France ff CH Lucien Hussel, 38200 Vienne, France gg CH Bretagne Sud, Lorient, 56322 France hh CH de Blois, 41016 France ii CHIC Poissy-Saint-Germain-en-Laye, 78300 France jj CHU Cochin, 75014 Paris, France kk CHU Haut-Lévêque, 33600 Bordeaux, France ll CHU Louis Pradel, 69500 Lyon, France mm CHRU Clocheville, 37000 Tours, France nn CH d'Avignon, 84000 France oo CH Robert Bisson, 14100 Lisieux, France pp CHU Ambroise Paré, 92104 Boulogne, France qq CH de Saint-Nazaire, 44600 France rr CH Sud-Francilien, 91100 Corbeil, France ss CHU de Nantes, 44093 France tt CHRU Jean Minjoz, 25030 Besançon, France uu CHU d'Angers, 49100 France vv CHU de Brest, 29200 France ww CHU de la Conception, 13005 Marseille, France xx CHRU de Lille, 59000 France yy CH Manchester, 08011 Charleville-Mézières, France zz CHU Lapeyronie, 34090 Montpellier, France Acknowledgements We thank A. Faudet, C. Vaury and S. Clauin, of the Centre of Molecular Genetics, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, for their technical assistance. Funding source None. Disclosure of interest The authors declare that they have no competing interest. Appendix A. Supplementary data Supplementary material (Tables S1 andS2) related to this article can be found, in the online version, at http://dx.doi.org/10.1016/j.diabet.2016.10.003.
17,342
[ "172712", "769990", "759496" ]
[ "300123", "106187", "353778", "93591", "353778", "353778", "300123", "353778", "353778", "353778", "353778", "300123", "366262", "323262", "451381", "300639", "417765", "300535", "300535", "300122", "328304", "408028", "324518", "154558", "504933", "526066", "454982", "301764", "17987", "189843", "180118", "457216" ]
01721163
en
[ "info" ]
2024/03/05 22:32:13
2018
https://inria.hal.science/hal-01721163v2/file/final-desweb2018.pdf
Paweł Guzewicz email: [email protected] Ioana Manolescu email: [email protected] Quotient RDF Summaries Based on Type Hierarchies Summarization has been applied to RDF graphs to obtain a compact representation thereof, easier to grasp by human users. We present a new brand of quotient-based RDF graph summaries, whose main novelty is to summarize together RDF nodes belonging to the same type hierarchy. We argue that such summaries bring more useful information to users about the structure and semantics of an RDF graph. I. INTRODUCTION The structure of RDF graphs is often complex and heterogeneous, making them hard to understand for users who are not familiar with them. This problem has been encountered in the past in the data management community, when dealing with other semi-structured graph data formats, such as the Object Exchange Model (OEM, in short) [START_REF] Papakonstantinou | Object exchange across heterogeneous information sources[END_REF]. Structural summaries for RDF graphs. To help discover and exploit such graphs, [START_REF] Goldman | Dataguides: Enabling query formulation and optimization in semistructured databases[END_REF], [START_REF] Nestorov | Representative objects: Concise representations of semistructured, hierarchical data[END_REF] have proposed using Dataguide summaries to represent compactly a (potentially large) data graph by a smaller one, computed from it. In contrast with relational databases where the schema is fixed before it is populated with data (a priori schema), a summary is computed from the data (a posteriori schema). Each node from the summary graph represents, in some sense, a set of nodes from the input graph. Many other graph summarization proposals have been made, for OEM [START_REF] Milo | Index structures for path expressions[END_REF], later for XML trees with ID-IDREF links across tree nodes (thus turning an XML database into a graph) [START_REF] Chen | D(K)-index: An adaptive structural summary for graph-structured data[END_REF], [START_REF] Kaushik | Covering indexes for branching path queries[END_REF], [START_REF] Li | Indexing and querying XML data for regular path expressions[END_REF], and more recently for RDF [START_REF] Udrea | GRIN: A graph based RDF index[END_REF], [START_REF] Gurajada | Using graph summarization for join-ahead pruning in a distributed RDF engine[END_REF], [START_REF] Čebirić | Query-oriented summarization of RDF graphs (demonstration)[END_REF], [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF], [START_REF] Palmonari | ABSTAT: linked data summaries with abstraction and statistics[END_REF]; many more works have appeared in this area, some of which are presented in a recent tutorial [START_REF] Khan | Summarizing static and dynamic big graphs[END_REF]. Related areas are concerned with graph compression, e.g. [START_REF] Sadri | Shrink: Distance preserving graph compression[END_REF], ontology summarization [START_REF] Troullinou | Ontology understanding without tears: The summarization approach[END_REF] (focusing more on the graph semantics than on its data) etc. Quotient-based summaries are a particular family of summaries, computed based on a (summary-specific) notion of equivalence among graph nodes. Given an equivalence relation ≡, for each equivalence class C (that is, maximal set of graph nodes comprising nodes all equivalent to each other), the summary has exactly one node n C in the summary. 
Example of quotient-based summaries include [START_REF] Milo | Index structures for path expressions[END_REF], [START_REF] Chen | D(K)-index: An adaptive structural summary for graph-structured data[END_REF], [START_REF] Kaushik | Covering indexes for branching path queries[END_REF], [START_REF] Li | Indexing and querying XML data for regular path expressions[END_REF], [START_REF] Campinas | Efficiency and precision trade-offs in graph summary algorithms[END_REF], [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF] Čebirić | Query-oriented summarization of RDF graphs (demonstration)[END_REF], [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF]; other summaries (including Dataguides) are not quotient-based. This work is placed within the quotient-based RDF summarization framework introduced in [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF]. That framework adapts the principles of quotient-based summarization to RDF graphs, in particular preserves the semantics (ontology), which may come with an RDF graph, in its summary. This is important as it guarantees that any summary defined within the framework is representative, that is: a query having answers on an RDF graph, has answers on its summary. This allows to use summaries as a first user interface with the data, guiding query formulation. Note that here, query answers take into account both the data explicitly present in the RDF graph, and the data implicitly present in the graph, through reasoning based on the explicit data and the graph's ontology. Two RDF summaries introduced in [START_REF]Query-oriented summarization of RDF graphs[END_REF] have been subsequently [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF] redefined as quotients. They differ in their treatment of the types which may be attached to RDF graph nodes. One is focused on summarizing the structure (non-type triples) first and copies type information to summary nodes afterwards; this may erase the distinctions between resources of very different types, leading to confusing summaries. The other starts by separating nodes according to their sets of types (recall that an RDF node may have one or several types, which may or may not be related to each other). This ignores the relationships which may hold among the different classes present in an RDF graph. Contribution and outline. To simultaneously avoid the drawbacks of the two proposals above, in this paper we introduce a novel summary based on the same framework. It features a refined treatment of the type information present in an RDF graph, so that RDF graph nodes which are of related types are represented together in the summary. We argue that such a summary is more intuitive and more informative to potential users of the RDF graph. The paper is organized as follows. We recall the RDF graph summarization framework introduced in [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF] which frames our work, as well as the two abovementioned concrete summaries. Then, we formally define our novel summary, and briefly discuss a summarization algorithm and its concrete applicability. II. RDF GRAPHS AND SUMMARIES A. RDF and RDF Schema We view an RDF graph G as a set of triples of the form s p o. A triple states that its subject s has the property p, and the value of that property is the object o. 
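To make this triple-based view concrete, the minimal Python sketch below encodes a handful of triples in the spirit of the running example; the exact triples of the sample graph are not all spelled out in the text, so the ones listed here are illustrative stand-ins rather than the authors' data.

```python
# An RDF graph as a plain set of (subject, property, object) triples.
TYPE = "rdf:type"

G = {
    ("Alice", TYPE, "MasterStudent"),        # illustrative triples only
    ("Alice", "attends", "HadoopCourse"),
    ("Alice", "email", "e1"),
    ("HadoopCourse", "desc", "d1"),
    ("MasterStudent", "rdfs:subClassOf", "Student"),
}

def properties_of(graph, subject):
    """All properties for which `subject` appears as a triple subject."""
    return {p for s, p, o in graph if s == subject}

print(properties_of(G, "Alice"))   # {'rdf:type', 'attends', 'email'}
```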
We consider only well-formed triples, as per the RDF specification [START_REF] W3c | Resource description framework[END_REF], using uniform resource identifiers (URIs), typed or untyped literals (constants) and blank nodes (unknown URIs or literals). The RDF standard [START_REF] W3c | Resource description framework[END_REF] includes the property rdf∶type (τ in short), which allows specifying the type(s) or class(es), of a resource. Each resource can have zero, one or several types, which may or may not be related. We call the set of G triples, whose property is τ , the type triples of G, denoted TG. RDF Schema and entailment. G may include a schema (ontology), denoted SG, and expressed through RDF Schema (RDFS) triples using one of the following standard properties: subclass, subproperty, domain and range, which we denote by the symbols ≺ sc , ≺ sp , ↩ d and ↪ r , respectively. Our proposal is beneficial in the presence of ≺ sc schema statements; we do not constrain SG in any way. RDF entailment is the mechanism through which implicit RDF triples are derived from explicit triples and schema information. In this work, we consider four entailment rules, each based on one of the four properties above: (i) c 1 ≺ sc c 2 means any resource of type c 1 is also of type c 2 ; (ii) p 1 ≺ sp p 2 means that as long as a triple s p 1 o belongs to G, the triple s p 2 o also holds in G; (iii) p ↩ d c means that any resource s having the property p in G is of type c, that is, s τ c holds in G; finally (iv) p ↪ r c means that any resource that is a value of the property p in G, is also of type c. The fixpoint obtained by applying entailment rules on the triples of G and the schema rules in SG until no new triple is entailed, is termed saturation (or closure) of G and denoted G ∞ . The saturation of an RDF graph is unique (up to blank node renaming), and does not contain implicit triples (they have all been made explicit by saturation). We view an RDF graph G as: G = SG ⊍ TG ⊍ DG , where the schema SG and the type triples TG have been defined above; DG contains all the remaining triples, whose property is neither τ nor ≺ sc , ≺ sp , ↩ d or ↪ r . We call DG the data triples of G. In the presence of an RDFS ontology, the semantics of an RDF graph is its saturation; in particular, the answers to a query posed on G must take into account all triples in G ∞ [START_REF] W3c | Resource description framework[END_REF]. Figure 1 shows an RDF graph we will use for illustration in the paper. Schema nodes and triples are shown in blue; type triples are shown in dotted lines; boxed nodes denote URIs of classes and instances, while d1, d2, e1 etc. denote literal nodes; "desc" stands for "description". B. Quotient RDF summaries We recall the summarization framework introduced in [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF]. In a graph G, a class node is an URI appearing as subject or object in a ≺ sc triple, as object in a ↩ d or ↪ r triple, or as object in a τ triple. A property node is an URI appearing as subject or object in a ≺ sp triple, or as subject in a ↩ d or ↪ r triple. The framework brings a generic notion of equivalence among RDF nodes: Definition 1: (RDF EQUIVALENCE) Let ≡ be a binary relation between the nodes of an RDF graph. We say ≡ is an RDF equivalence relation iff (i) ≡ is reflexive, symmetric and transitive, (ii) any class node is equivalent w.r.t. ≡ only to itself, and (iii) any property node is equivalent w.r.t. ≡ only to itself. 
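The four entailment rules above lend themselves to a small fixpoint procedure. The sketch below is a didactic Python rendering under the same triple encoding as the previous snippet; the property names (rdfs:subClassOf, etc.) and the naive nested-loop strategy are our simplifications, not a prescription of the paper.

```python
SC, SP, DOM, RNG, TYPE = ("rdfs:subClassOf", "rdfs:subPropertyOf",
                          "rdfs:domain", "rdfs:range", "rdf:type")

def saturate(graph):
    """Fixpoint application of the four RDFS entailment rules described above.

    A deliberately naive sketch (quadratic per iteration), not an optimized reasoner.
    """
    g = set(graph)
    while True:
        schema = {(s, p, o) for (s, p, o) in g if p in (SC, SP, DOM, RNG)}
        new = set()
        for (s, p, o) in g:
            for (x, r, y) in schema:
                if r == SC and p == TYPE and o == x:
                    new.add((s, TYPE, y))      # subclass: s is also of the supertype
                elif r == SP and p == x:
                    new.add((s, y, o))         # subproperty: the triple also holds for p2
                elif r == DOM and p == x:
                    new.add((s, TYPE, y))      # domain: the subject gets type c
                elif r == RNG and p == x:
                    new.add((o, TYPE, y))      # range: the object gets type c
        if new <= g:
            return g                           # nothing new: fixpoint reached
        g |= new
```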
Graph nodes which are equivalent will be summarized (or represented) by the same node in the summary. The reason behind class and property nodes being only equivalent to themselves in every RDF equivalence relation, is to ensure that each such node is preserved in the summary, as they appear in the schema and carry important information for the graph's semantics. A summary is defined as follows: Definition 2: (RDF SUMMARY) Given an RDF graph G and an RDF node equivalence relation ≡, the summary of G by ≡ is an RDF graph denoted G ≡ and defined as follows: • G ≡ contains exactly one node for each equivalence class of G nodes through ≡; each such node has a distinct, "fresh" URI (that does not appear in G). • For each triple s p o ∈ G such that s ≡ , o ≡ are the G ≡ nodes corresponding to the equivalence classes of s and o, the triple s ≡ p o ≡ belongs to G ≡ . The above definition can also be stated "G ≡ is the quotient graph of G by the equivalence relation ≡", based on the classical notion of quotient graph1 . We make two observations: • Regardless of the chosen ≡, all SG triples are also part of G ≡ , as class and property nodes are represented by themselves, and thanks to the way G ≡ edges are defined; indeed, G and G ≡ have the same schema; • No particular treatment is given to type triples: how to take them into account is left to each individual ≡. Different RDF equivalence relations lead to different summaries. At one extreme, if all data nodes are equivalent, the summary has a single data node; on the contrary, if ≡ is "empty" (each node is equivalent only to itself), the summary degenerates into G itself. Well-studied equivalence relations for graph quotient summaries are based on the so-called forward, backward, or forward and backward (FB) bisimulation [START_REF] Milo | Index structures for path expressions[END_REF], [START_REF] Chen | D(K)-index: An adaptive structural summary for graph-structured data[END_REF], [START_REF] Kaushik | Covering indexes for branching path queries[END_REF], [START_REF] Li | Indexing and querying XML data for regular path expressions[END_REF]. It has been noted though, e.g. in [START_REF] Khatchadourian | Constructing bisimulation summaries on a multi-core graph processing framework[END_REF], that RDF graphs exhibit so much structural heterogeneity that bisimulationbased summaries are very large, almost of the size of G, thus not very useful. In contrast, [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF] Campinas | Efficiency and precision trade-offs in graph summary algorithms[END_REF] introduced ≡ relations which lead to compact summaries, many orders of magnitude smaller than the original graphs. C. Types in summarization: first or last? Let us consider how type triples can be used in quotient RDF summaries. Two approaches have been studied in the literature, and in particular in quotient summaries. The approach we will call data-first focuses on summarizing the data (or structure) of G, and then carries (or copies) the possible types of G nodes, to the summary nodes representing them. Conversely, type-first approaches summarize graph nodes first (or only) by their types. Below, we recall two quotient summaries described in [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF], which are the starting point of this work; they are both very compact, and illustrate the data-first and typefirst approaches respectively. 
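Definition 2 translates almost literally into code: given any node equivalence, summarization is just a rewriting of every triple through a representative function. The Python sketch below assumes the representative is supplied as a function mapping each node to its class representative (and mapping class and property nodes to themselves, as Definition 1 requires); the toy equivalence used in the usage comment is ours, purely for illustration.

```python
def quotient_summary(graph, rep):
    """Quotient an RDF graph (a set of (s, p, o) triples) by a node equivalence.

    `rep(n)` must return the same value for every node of an equivalence
    class; each class thus yields exactly one summary node, and every
    triple s p o of the input becomes rep(s) p rep(o) in the summary.
    """
    return {(rep(s), p, rep(o)) for (s, p, o) in graph}

# Toy usage: represent every lowercase (literal-like) node by a single node "L".
# summary = quotient_summary(G, lambda n: "L" if n.islower() else n)
```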
They both rely on the notion of property cliques: Definition 3: (PROPERTY RELATIONS AND CLIQUES) Let p 1 , p 2 be two data properties in DG: 1) p 1 , p 2 ∈ G are source-related iff either: (i) a data node in DG is the subject of both p 1 and p 2 , or (ii) DG holds a data node r and a data property p 3 such that r is the subject of p 1 and p 3 , with p 3 and p 2 being source-related. 2) p 1 , p 2 ∈ G are target-related iff either: (i) a data node in DG is the object of both p 1 and p 2 , or (ii) DG holds a data node r and a data property p 3 such that r is the object of p 1 and p 3 , with p 3 and p 2 being target-related. A maximal set of properties in DG which are pairwise source-related (respectively, target-related) is called a source (respectively, target) property clique. For example, in Figure 1, the properties email and webpage are source-related since Alice is the subject of both; webpage and officeHours are source-related due to Bob; also due to Alice, registeredIn and attends are source-related to the above properties, leading to a source clique SC 1 = {attends, email, webpage, officeHours, registeredIn}. Another source clique is SC 2 = {desc, givenIn}. It is easy to see that the set of non-empty source (or target) property cliques is a partition over the data properties of DG. Further, all data properties of a resource r ∈ G are all in the same source clique, which we denote SC(r); similarly, all the properties of which r is a value are in the same target clique, denoted T C(r). If r is not the value of any property (respectively, has no property), we consider its target (respectively, source) is ∅. For instance, in our example, SC 1 is the source clique of Alice, Bob, Carole and David, while SC 2 is the source clique of the BigDataMaster and of the HadoopCourse. Definition 4: (WEAK EQUIVALENCE) Two data nodes are weakly equivalent, denoted n 1 ≡ W n 2 , iff: (i) they have the same non-empty source or non-empty target clique, or (ii) they both have empty source and empty target cliques, or (iii) they are both weakly equivalent to another node of G. Definition 5: (WEAK SUMMARY) The weak summary of the graph G, denoted G W , is the RDF summary obtained from the weak equivalence ≡ W . Figure 2 shows the weak summary of our sample RDF graph. The URIs W 1 to W 6 are "new" summary nodes, representing literals and/or URIs from G. Thus, W 3 represents Alice, Bob, Carole and David together, due to their common source clique SC 1 . W 3 represents the course and the master program, due to their common source clique SC 2 . Note that the givenIn edge from G leads to a summary edge from W 2 to itself; also, W 1 carries over the types of the nodes it represents, thus it is both of type MasterProgram and MasterCourse. This example shows that data-first summarization may represent together G resources whose types clearly indicate their different meaning; this may be confusing. In contrast, the typed weak [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF] summary recalled below illustrates the type-first approach: Let ≡ T be an RDF equivalence relation which holds on two nodes iff they have the exact same set of types. Let ≡ UW be an RDF equivalence relation which holds on two nodes iff (i) they have no type, and (ii) they are weakly equivalent. Definition 6: (TYPED WEAK SUMMARY) The typed weak summary of an RDF graph G, denoted G TW , is the summary through ≡ UW of the summary through ≡ T of G: G TW = (G ≡T ) ≡UW . This double-quotient summarization acts as follows. First, nodes are grouped by their sets of types (inner quotient through ≡ T ); second, untyped nodes only are grouped according to weak (structural) equivalence. For instance, our sample G has six typed data nodes (Alice to David, BigDataMaster and HadoopCourse), each of which has a set of exactly one type; all these types are different. Thus, ≡ T is empty, and G TW (drawing omitted) has eight typed nodes U T W 1 to U T W 8 , each with a distinct type and the property(ies) of one of these nodes. We now consider G's eight untyped data nodes. We have d1 ≡ W d2 due to their common target clique {desc}, and similarly w1 ≡ W w2 and h2 ≡ W h3 ≡ W h4. Thus, G TW has four untyped nodes, each of which is an object of desc, email, webpage and respectively officeHours triples. The typed weak summary, as well as other type-first summaries, e.g.
[START_REF] Campinas | Efficiency and precision trade-offs in graph summary algorithms[END_REF], also have limitations: • They are defined based on the type triples of G, which may change through saturation, leading to different G TW summaries for conceptually the same graph (as all G leading to the same G ∞ are equivalent). Thus, for a type-first summary to be most meaningful, one should build it on the saturated graph G ∞ . Note the reason for saturation at Figure 8 in [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF]. • They do not exploit the relationships which the ontology may state among the types. For instance, AssistantProfessor nodes like Carole are summarized separately from Professors like David, although they are all instructors. III. SUMMARIZATION AWARE OF TYPE HIERARCHIES A. Novel type-based RDF equivalence Our first goal is to define an RDF equivalence relation which: 1) takes type information into account, thus belongs to the "types-first" approach; 2) leads (through Definition 2) to a summary which represents together, to the extent possible (see below), nodes that have the same most general type. Formally, let C = {c 1 , c 2 , . . .} be the set of class nodes present in G (that is, in SG and/or in TG). We can view these nodes as organized in a directed graph where there is an edge c 1 → c 2 as long as G's saturated schema SG ∞ states that c 1 is a subclass of c 2 . By a slight abuse of notation, we use C to also refer to this graph 2 . In principle, C could have cycles, but this does not appear to correspond to meaningful schema designs. Therefore, we assume without loss of generality that C is a directed acyclic graph (DAG, in short) 3 . In Figure 1, C is the DAG comprising the eight (blue) class nodes and edges between them; this DAG has four roots. First, assume that C is a tree, e.g., with Instructor as a root type and PhDStudent, AssistantProfessor as its subclasses. In such a case, we would like instances of all the abovementioned types to be represented together, because they are all instances of the top type Instructor. This extends easily to the case when C is a forest, e.g., a second type hierarchy in C could feature Student as a root type and MasterStudent as its subclass. In general, though, C may not be a forest, but instead it may be a graph where some classes have multiple superclasses, potentially unrelated. For instance, in Figure 1, PhDStudent has two superclasses, Student and Instructor. Therefore, it is not possible to represent G nodes of type PhDStudent based on their most general type, because they have more than one such type. Representing them twice (once as Instructor, once as Student) would violate the framework (Definition 2), in which any summary is a quotient and thus, each G node must be represented by exactly one summary node. To represent resources as much as possible according to their most general type, we proceed as follows. Definition 7: (TREE COVER) Given a DAG C, we call a tree cover of C a set of trees such that: (i) each node in C appears in exactly one tree; (ii) together, they contain all the nodes of C; and (iii) each C edge appears either in one tree or connects the root of one tree to a node in another. A given C admits many tree covers; however, it can be shown that there exists a tree cover with the least possible number of trees, which we will call min-size cover.
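One way to compute such a cover, in line with the single-traversal idea developed in the next paragraph, is sketched below in Python. The encoding of the subclass DAG as a direct-superclass map, and the tie-breaking choices, are our assumptions rather than the authors' implementation; on the schema of Figure 1 (re-encoded by hand in the usage comment) the sketch yields the five trees listed just after.

```python
def min_size_cover(parents):
    """Group the classes of a subclass DAG into the trees of a min-size cover.

    `parents[c]` is the set of direct superclasses of class c (empty for top
    classes); the DAG is assumed acyclic. A class starts a new tree when it
    has no superclass, or when it has two superclasses neither of which is a
    superclass of the other; any other class is attached to the tree of its
    most specific direct superclass. Illustrative sketch, not the authors' code.
    """
    def ancestors(c):
        out, stack = set(), list(parents.get(c, ()))
        while stack:
            p = stack.pop()
            if p not in out:
                out.add(p)
                stack.extend(parents.get(p, ()))
        return out

    anc = {c: ancestors(c) for c in parents}

    def is_root(c):
        ps = list(parents.get(c, ()))
        if not ps:
            return True
        return any(p1 not in anc[p2] | {p2} and p2 not in anc[p1] | {p1}
                   for i, p1 in enumerate(ps) for p2 in ps[i + 1:])

    def tree_root(c):
        while not is_root(c):
            ps = parents[c]
            c = max(ps, key=lambda p: len(anc[p] & ps))  # most specific parent
        return c

    cover = {}
    for c in parents:
        cover.setdefault(tree_root(c), set()).add(c)
    return cover

# Hand-encoded schema of Figure 1 (direct superclasses only):
schema = {
    "Instructor": set(), "AssistantProfessor": {"Instructor"}, "Professor": {"Instructor"},
    "Student": set(), "MasterStudent": {"Student"},
    "PhDStudent": {"Student", "Instructor"},
    "MasterProgram": set(), "MasterCourse": set(),
}
# min_size_cover(schema) groups the eight classes into five trees, as in the text.
```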
This cover can be computed in a single traversal of the graph by creating a tree root exactly from each C node having two supertypes such that none is a supertype of the other, and attaching to it all its descendants which are not themselves roots of another tree. For instance, the RDF schema from Figure 1 leads to a min-size cover of five trees: • A tree rooted at Instructor and the edges connecting it to its children AssistantProfessor and Professor; • A single-node tree rooted at PhDStudent; • A tree rooted at Student with its child MasterStudent; • A single-node tree for MasterProgram and another for MasterCourse. Figure 3 illustrates min-size covers on a more complex RDF schema, consisting of the types A to Q. Every arrow goes from a type to one of its supertypes (for readability, the figure does not include all the implicit subclass relationships, e.g., that E is also a subclass of H, I, J etc.). The pink areas each denote a tree in the corresponding min-size cover. H and L are tree roots because they have multiple, unrelated supertypes. To complete our proposal, we need to make an extra hypothesis on G: ( †) Whenever a data node n is of two distinct types c 1 , c 2 which are not in the same tree in the min-size tree cover of C, then (i) c 1 and c 2 have some common subclasses, (ii) among these, there exists a class c 1,2 that is a superclass of all the others, and (iii) n is of type c 1,2 . For instance, in our example, hypothesis ( †) states that if a node n is an Instructor and a Student, these two types must have a common subclass (in our case, this is PhDStudent), and n must be of type PhDStudent. The hypothesis would be violated if there was another common subclass of Instructor and Student, say MusicLover4 , that was neither a subclass of PhDStudent nor a superclass of it. ( †) may be checked by a SPARQL query on G. While it may not hold, we have not found such counter-examples in a set of RDF graphs we have examined (see Section IV). In particular, ( †) immediately holds in the frequent case when C is a tree (taxonomy) or, more generally, a forest: in such cases, the min-size cover of C is exactly its set of trees, and any types c 1 , c 2 of a data node n are in the same tree. ( †) holds, we can state: Lemma 1 (Lowest branching type): Let G be an RDF graph satisfying ( †), n be a data node in G, cs n be the set of types of n in G, and cs ∞ n be the classes from cs n together with all their superclasses (according to the saturated schema of G). Assume that cs ∞ n ≠ ∅. Then there exists a type lbt n , called lowest branching type, such that: • cs ∞ n = cs ′ n ⊍ cs ′′ n , where {lbt n } ∈ cs ′ n and cs ′′ n may be empty; • the types in cs ′ n (if any) can be arranged in a tree according to ≺ sc relation between them, and the most general one is lbt n ; • if cs ′′ n is not empty, it is at least of size two, and all its types are superclasses of lbt n . Proof: Let's assume to the contrary that there exists an RDF graph G 1 satisfying ( †), a node n in G 1 , cs n the set of types of n, cs ∞ n ≠ ∅ is the set of types of n with all their supertypes (according to saturated schema of G 1 ) and there is no lowest branching type for n. Let G be the set of all such RDF graphs and let G be the G graph containing a node n that violates the lemma and such that cs ∞ n is the smallest, across all such lemma-violating nodes n in any graph from G. Let k = cs ∞ n . Note that k > 0 by definition. Let's consider the cases: 1) k = 1 In this case, the lemma trivially holds. 
From the above discussion, it follows that Carole ≡ TH David, matching the intuition that they are both instructors and do not belong to other type hierarchies. In contrast, PhD students (such as Bob) are only type-hierarchy equivalent to each other; they are set apart by their dual Student and Instructor status. Master students such as Alice are only type-hierarchy equivalent among themselves, as they only belong to the student type hierarchy. Every other typed node of G is only type-hierarchy equivalent to itself. More summaries based on ≡ TH could be obtained by replacing UW with another RDF equivalence relation. IV. ALGORITHM AND APPLICATIONS A. Constructing the weak type-hierarchy summary An algorithm which builds G WTH is as follows: 1) From SG, build C and its min-size cover. 2) For every typed node n of G, identify its lowest branching type lbt n and (the first time a given lbt n is encountered) create a new URI U RI lbtn : this will be the G WTH node representing all the typed G nodes having the same lbt n . 3) Build the weak summary of the untyped nodes of G, using the algorithm described in [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF]. This creates the untyped nodes in G WTH and all the triples connecting them. 4) Add type edges: for every triple n τ c in G, add (unless already in the summary) the triple U RI lbtn τ c to G WTH . 5) Connect the typed and untyped summary nodes: for every triple n 1 p n 2 in G such that n 1 has types in G and n 2 does not, add (unless already in the summary) the triple U RI lbtn 1 p U W n2 to G WTH , where U W n2 is the node representing n 2 , in the weak summary of the untyped part of G. Apply a similar procedure for the converse case (when n 1 has no types but n 2 does). Step 1) is the fastest as it applies on the schema, typically orders of magnitude smaller than the data. The cost of the steps 2)-4) depend on the distribution of nodes (typed or untyped) and triples (type triples; data triples between typed/untyped nodes) in G. [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF] presents an efficient, almost-linear time (in the size of G) weak summarization algorithm (step 3). The complexity of the other steps is linear in the number of triples in G, leading to an overall almost-linear complexity. B. Applicability To understand if G WTH summarization is helpful for an RDF graph, the following questions should be answered: 1) Does SG feature subclass hierarchies? If it does not, then G WTH reduces to the weak summary G TW . 2) Does SG feature a class with two unrelated superclasses? a) No: then C is a tree or a forest. In this case, G WTH represents every typed node together with all the nodes whose type belong to the same type hierarchy (tree). b) Yes: then, does G satisfy ( †)? i) Yes: one can build G WTH to obtain a refined representation of nodes according to the lowest branching type in their type hierarchy. ii) No: G WTH is undefined, due to the lack of a unique representative for the node(s) violating ( †). Among the RDF datasets frequently used, DBLP 5 , the BSBM benchmark [START_REF] Bizer | The Berlin SPARQL Benchmark[END_REF], and the real-life Slegger ontology 6whose description has been recently published [START_REF] Hovland | Ontology-based data access to slegge[END_REF] exhibited subclass hierarchies. Further, BSBM graphs and the Slegger ontology feature multiple inheritance. BSBM graphs satisfy ( †). 
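The grouping performed in step 2) of the construction above can likewise be sketched in a few lines. In this illustrative fragment, lbt_n is obtained as the root of the min-size-cover tree containing the node's most specific types, which coincides with the lowest branching type of Lemma 1 on the running example; the toy dictionaries (supers, roots, types) stand in for an actual RDF store and are assumptions made only for the example.

# Minimal sketch: represent typed nodes by their lowest branching type lbt_n.
def tree_root(c, supers, roots):
    while c not in roots:
        # the unique direct superclass of a non-root class is its deepest superclass
        c = max(supers[c], key=lambda p: len(supers[p]))
    return c

def lbt(n, types, supers, roots):
    cs = types[n]
    minimal = [c for c in cs if not any(c in supers[d] for d in cs)]
    # under hypothesis (†), all minimal types of n lie in the same cover tree
    return tree_root(minimal[0], supers, roots)

supers = {"Instructor": set(), "Student": set(),
          "AssistantProfessor": {"Instructor"}, "Professor": {"Instructor"},
          "MasterStudent": {"Student"}, "PhDStudent": {"Student", "Instructor"},
          "MasterProgram": set(), "MasterCourse": set()}
roots = {"Instructor", "Student", "PhDStudent", "MasterProgram", "MasterCourse"}
types = {"Alice": {"MasterStudent"}, "Bob": {"PhDStudent"},
         "Carole": {"AssistantProfessor"}, "David": {"Professor"},
         "BigDataMaster": {"MasterProgram"}, "HadoopCourse": {"MasterCourse"}}

groups = {}
for n in types:
    groups.setdefault(lbt(n, types, supers, roots), set()).add(n)
print(groups)   # Carole and David are grouped under Instructor, as in Figure 4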
On Slegger we were unable to check this, as the data is not publicly shared; our understanding of the application though as described implies that ( †) holds. An older study [START_REF] Magkanaraki | Benchmarking RDF schemas for the semantic web[END_REF] of many concrete RDF Schemas notes a high frequence of class hierarchies, of depth going up to 12, as well as a relatively high incidence of multiple inheritance; graphs with such schema benefit from G WTH summarization when our hypothesis ( †) holds. Figure 1 : 1 Figure 1: Sample RDF graph. 1) p 1 , p 2 ∈ G are source-related iff either: (i) a data node in DG is the subject of both p 1 and p 2 , or (ii) DG holds a data node r and a data property p 3 such that r is the subject of p 1 and p 3 , with p 3 and p 2 being source-related. 2) p 1 , p 2 ∈ G are target-related iff either: (i) a data node in DG is the object of both p 1 and p 2 , or (ii) DG holds a data node r and a data property p 3 such that r is the object of p 1 and p 3 , with p 3 and p 2 being target-related. A maximal set of properties in DG which are pairwise source-related (respectively, target-related) is called a source (respectively, target) property clique. SC 2 is the source clique of the BigDataMaster and of the HadoopCourse. Definition 4: (WEAK EQUIVALENCE) Two data nodes are weakly equivalent, denoted n 1 ≡ W n 2 , iff: (i) they have the same non-empty source or non-empty target clique, or (ii) they both have empty source and empty target cliques, or (iii) they are both weakly equivalent to another node of G. Figure 2 : 2 Figure 2: Weak summary of the sample RDF graph in Figure 1. Definition 5: (WEAK SUMMARY) The weak summary of the graph G, denoted G W , is the RDF summary obtained from the weak equivalence ≡ W .Figure2shows the weak summary of our sample RDF graph. The URIs W 1 to W 6 are "new" summary nodes, representing literals and/or URIs from G. Thus, W 3 represents Alice, Bob, Carole and David together, due to their common source clique SC 1 . W 3 represents the course and the master program, due to their common source clique SC 2 . Note that the givenIn edge from G leads to a summary edge from W 2 to itself; also, W 1 carries over the types of the nodes it represents, thus it is both of type MasterProgram and MasterCourse. This example shows that data-first summarization may represent together G resources whose types clearly indicate their different meaning; this may be confusing.In contrast, the typed weak[START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF] summary recalled below illustrates the type-first approach:Let ≡ T be an RDF equivalence relation which holds on two nodes iff they have the exact same set of types.Let ≡ UW be an RDF equivalence relation which holds on two nodes iff (i) they have no type, and (ii) they are weakly equivalent.Definition 6: (TYPED WEAK SUMMARY) The typed weak summary of an RDF graph G, denoted G TW , is the summary through ≡ UW of the summary through ≡ T of G:G TW = (G ≡T ) ≡UW This double-quotient summarization acts as follows. First, nodes are grouped by their sets of types (inner quotient through ≡ T ); second, untyped nodes only are grouped according to weak (structural) equivalence.For instance, our sample G has six typed data nodes (Alice to David, BigDataMaster and HadoopCourse), each of which has a set of exactly one type; all these types are different. 
Thus, ≡ T is empty, and G TW (drawing omitted) has eight typed Figure 3 : 3 Figure 3: Sample RDF schema and min-size cover of the corresponding C. a root type Paper whose subclasses are ConferencePaper, JournalPaper etc. In this case, we aim to represent all authors together because they are instances of Paper.In general, though, C may not be a forest, but instead it may be a graph where some classes have multiple superclasses, potentially unrelated. For instance, in Figure1, PhDStudent has two superclasses, Student and Instructor. Therefore, it is not possible to represent G nodes of type PhDStudent based on their most general type, because they have more than one such type. Representing them twice (once as Instructor, once as Student) would violate the framework (Definition 2), in which any summary is a quotient and thus, each G node must be represented by exactly one summary node.To represent resources as much as possible according to their most general type, we proceed as follows.Definition 7: (TREE COVER) Given a DAG C, we call a tree cover of C a set of trees such that: (i) each node in C appears in exactly one tree; (ii) together, they contain all the nodes of C; and (iii) each C edge appears either in one tree or connects the root of one tree to a node in another.Given C admits many tree covers, however, it can be shown that there exists a tree cover with the least possible number of trees, which we will call min-size cover. This cover can be computed in a single traversal of the graph by creating a tree root exactly from each C node having two supertypes such that none is a supertype of the other, and attaching to it all its descendants which are not themselves roots of another tree. For instance, the RDF schema from Figure1leads to a min-size cover of five trees: Figure 4 : 4 Figure 4: Weak type-hierarchy summary of the RDF graph in Figure 1. The roots of the trees in the min-size cover of C are underlined. B. RDF summary based on type hierarchy equivalence Based on ≡ TH defined above, and the ≡ UW structural equivalence relation (two nodes are ≡ UW if they have no types, and are weakly equivalent), we introduce a novel summary belonging to the "type-first" approach: Definition 9: (WEAK TYPE-HIERARCHY SUMMARY) The type hierarchy summary of G, denoted G WTH , is the summary through ≡ UW of the summary through ≡ TH of G: G WTH = (G ≡TH ) ≡UW Figure 4 illustrates the G WTH summary of the RDF graph in Figure 1. Different from the weak summary (Figure 2), it does not represent together nodes of unrelated types, such as BigDataMaster and HadoopCourse. At the same time, different from the typed weak summary of the same graph, it does not represent separately each individual, and instead it keeps Carole and David together as they only belong to the instructor type hierarchy.More summaries based on ≡ TH could be obtained by replacing UW with another RDF equivalence relation. 2) k ≥ 2 In this case, let t 1 , . . . , t k be the types of node n (their order not important).Let's consider graph G ′ which is the same as G but without node n having type t k . From the way we chose G and G ′ , G ′ satisfies the lemma, thus there exists a lowest branching type lbt n for n in G ′ . Now, let's add t k to the types of n in G ′ . There are 3 possibilities: a) t k is a subclass of lbt n . Then lbt n is also lowest branching type after this addition. b) t k is a superclass of lbt n . 
If it's the only superclass of lbt n then t k is a new lowest branching type, else n still admits the lowest branching type lbt n . c) t k is neither a sub-nor a superclass of lbt n . Then it is in another tree in min-size cover of G, thus by ( †) it follows that t k and some other type between t 1 , . . . , t k-1 have a common subtype which serves as a lowest branching type for n. From the above discussion we conclude that the node n for which k = cs ∞ n is not the lemma counterexample with the smallest k, which contradicts the assumption we made when picking it! Therefore no graph exists in G, thus all Gs satisfy the lemma. ◻ For instance, let n be Bob in Figure 1, then cs n is {PhDStudent}, thus cs ∞ n is {PhDStudent, Student, Instructor}. In this case, lbt n is PhDStudent, cs ′ n is {PhDStudent} and cs ′′ n is {Student, Instructor}. If we take n to be Carole, cs ∞ n is {AssistantProfessor, Instructor}; no type from this set has two distinct superclasses, thus cs ′′ n must be empty, lbt Carole is Instructor, and cs ′ n is {AssistantProfessor, Instructor}. By a similar reasoning, lbt David is Instructor, and lbt Alice is Student. When n has a type without subclasses or superclasses, such as BigData-Master, it leads to cs ′′ n being empty, and cs ′ n is lbt n , the only type of n. Thus, lbt BigDataMaster is MasterProgram and lbt HadoopCourse is MasterCourse. For a more complex example, recall the RDF schema in Figure 3, and let n be a node of type E in an RDF graph having this schema. In this case, cs n is {E, G, H, B, I, J}, lbt n is H, cs ′ n is {E, G, H} while cs ′′ n is {B, I, J}. Based on Lemma 1, we define our novel notion of equivalence, reflecting the hierarchy among the types of G data nodes: Definition 8: (TYPE-HIERARCHY EQUIVALENCE) Typehierarchy equivalence, denoted ≡ TH , is an RDF node equivalence relation defined as follows: two data nodes n 1 and n 2 are type-hierarchy equivalent, noted n 1 ≡ TH n 2 , iff lbt n1 = lbt n2 . https://en.wikipedia.org/wiki/Quotient graph Ontology languages such as RDF Schema or OWL feature a top type, that is a supertype of any other type, such as rdfs:Resource. We do not include such a generic, top type in C. If C has cycles, the types in each cycle can all be seen as equivalent, as each is a specialization of all the other, and could be replaced by a single (new) type in a simplified ontology. The process can be repeated until C becomes a DAG, then the approach below can be applied, following which the simplified types can be restored, replacing the ones we introduced. We omit the details. MusicLover may be a subclass of yet another class (distinct type c 3 in third other min-size tree) and it would still violate the hypothesis http://dblp.uni-trier.de/ http://slegger.gitlab.io/
37,438
[ "178640", "742652" ]
[ "451441", "419361", "451441" ]
01516011
en
[ "math" ]
2024/03/05 22:32:13
2018
https://inria.hal.science/hal-01516011v3/file/SIAM-REV.pdf
Joseph Frédéric Bonnans Axel Kröner email: [email protected] J. Frédéric Bonnans email: [email protected] Axel Kröner Variational analysis Keywords: finance, options, partial differential equations, variational formulation, parabolic variational inequalities AMS subject classifications. 35K20, 35K85, 91G80
Introduction. In this paper we consider variational analysis for the partial differential equations associated with the pricing of European or American options. For an introduction to these models, see Fouque et al. [START_REF] Fouque | Derivatives in financial markets with stochastic volatility[END_REF]. We set up a general framework of variable volatility models, which in particular applies to the following standard models, well established in mathematical finance. The well-posedness of PDE formulations of variable volatility problems was studied in [START_REF] Achdou | Computational methods for option pricing[END_REF][START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF][START_REF] Achdou | A partial differential equation connected to option pricing with stochastic volatility: regularity results and discretization[END_REF][START_REF] Pironneau | Partial differential equations for option pricing, Handbook of numerical analysis[END_REF], and in the recent work [START_REF] Feehan | Schauder a priori estimates and regularity of solutions to boundary-degenerate elliptic linear second-order partial differential equations[END_REF][START_REF] Feehan | Degenerate-elliptic operators in mathematical finance and higher-order regularity for solutions to variational equations[END_REF]. Let the W_i(t) be Brownian motions on a filtered probability space. The variable s denotes a financial asset, and the components of y are factors that influence the volatility:
(i) The Achdou-Tchou model [START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF], see also Achdou, Franchi, and Tchou [START_REF] Achdou | A partial differential equation connected to option pricing with stochastic volatility: regularity results and discretization[END_REF]: (1.1) ds(t) = r s(t) dt + σ(y(t)) s(t) dW_1(t), dy(t) = θ(µ - y(t)) dt + ν dW_2(t), with the interest rate r, the volatility coefficient σ a function of the factor y whose dynamics involves a parameter ν > 0, and positive constants θ and µ.
(ii) The Heston model [START_REF] Heston | A closed-form solution for options with stochastic volatility with applications to bond and currency options[END_REF]: (1.2) ds(t) = s(t) ( r dt + √y(t) dW_1(t) ), dy(t) = θ(µ - y(t)) dt + ν √y(t) dW_2(t).
(iii) The Double Heston model, see Christoffersen, Heston and Jacobs [START_REF] Jacobs | The shape and term structure of the index option smirk: Why multifactor stochastic volatility models work so well[END_REF], and also Gauthier and Possamaï [START_REF] Gauthier | Efficient simulation of the double Heston model[END_REF]: (1.3) ds(t) = s(t) ( r dt + √y_1(t) dW_1(t) + √y_2(t) dW_2(t) ), dy_1(t) = θ_1(µ_1 - y_1(t)) dt + ν_1 √y_1(t) dW_3(t), dy_2(t) = θ_2(µ_2 - y_2(t)) dt + ν_2 √y_2(t) dW_4(t).
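A minimal simulation sketch may help the reader visualize the dynamics (1.2). The following Euler-Maruyama fragment is purely illustrative: the parameter values, the NumPy-based implementation and the full-truncation treatment of negative variance are assumptions made for the example, and are not part of the variational analysis developed in this paper.

import numpy as np

# Illustrative Euler-Maruyama simulation of the Heston dynamics (1.2):
#   ds = s ( r dt + sqrt(y) dW1 ),   dy = theta (mu - y) dt + nu sqrt(y) dW2,
# with correlation kappa between W1 and W2.
rng = np.random.default_rng(0)
r, theta, mu, nu, kappa = 0.02, 1.5, 0.04, 0.3, -0.7   # placeholder values
T, n_steps = 1.0, 1000
dt = T / n_steps
s, y = 100.0, 0.04                                      # initial asset price and factor

for _ in range(n_steps):
    z1, z2 = rng.standard_normal(2)
    dW1 = np.sqrt(dt) * z1
    dW2 = np.sqrt(dt) * (kappa * z1 + np.sqrt(1 - kappa**2) * z2)  # corr(W1, W2) = kappa
    y_pos = max(y, 0.0)   # full-truncation fix: the Euler step may push y below zero
    s += s * (r * dt + np.sqrt(y_pos) * dW1)
    y += theta * (mu - y_pos) * dt + nu * np.sqrt(y_pos) * dW2

print(s, y)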
In the last two models we have similar interpretations of the coefficients; in the double Heston model, denoting by •, • the correlation coefficients, we assume that there are correlations only between W 1 and W 3 , and W 2 and W 4 . Consider now the general multiple factor model (1.4) ds = rs(t)dt + N k=1 f k (y k (t))s β k (t)dW k (t), dy k = θ k (µ k -y k (t))dt + g k (y k (t))dW N +k (t), k = 1, . . . , N. Here the y k are volatility factors, f k (y k ) represents the volatility coefficient due to y k , g k (y k ) is a volatility coefficient in the dynamics of the kth factor with positive constants θ k and µ k . Let us denote the correlation between the ith and jth Brownian motions by κ ij : this is a measurable function of (s, y, t) with value in [0, 1] (here s ∈ (0, ∞) and y k belongs to either (0, ∞) or R), see below. We asssume that we have nonzero correlations only between the Brownian motions W k and W N +k , for k = 1 to N , i.e. (1.5) κ ij = 0 if i = j and |j -i| = N . Note that, in some of the main results, we will assume for the sake of simplicity that the correlations are constant. We apply the developed analysis to a subclass of stochastic volatility models, obtained by assuming that κ is constant and (1.6) |f k (y k )| = |y k | γ k ; |g k (y k )| = ν k |y k | 1-γ k ; β k ∈ (0, 1]; ν k > 0; γ k ∈ (0, ∞). This covers in particular a variant of the Achdou and Tchou model with multiple factors (VAT), when γ k = 1, as well as a generalized multiple factor Heston model (GMH), when γ k = 1/2, i.e., for k = 1 to N : (1.7) VAT: f k (y k ) = y k , g k (y k ) = ν k , GMH: f k (y k ) = √ y k , g k (y k ) = ν k √ y k . For a general class of stochastic volatility models with correlation we refer to Lions and Musiela [START_REF] Lions | Correlations and bounds for stochastic volatility models[END_REF]. The main contribution of this paper is variational analysis for the pricing equation corresponding to the above general class in the sense of the Feynman-Kac theory. This requires in particular to prove continuity and coercivity properties of the corresponding bilinear form in weighted Sobolev spaces H and V , respectively, which have the Gelfand property and allow the application of the Lions and Magenes theory [START_REF] Lions | Non-homogeneous boundary value problems and applications[END_REF] recalled in Appendix A and the regularity theory for parabolic variational inequalities recalled in Appendix B. A special emphasis is given to the continuity analysis of the rate term in the pricing equation. Two approaches are presented, the standard one and an extension of the one based on the commutator of first-order differential operators as in Achdou and Tchou [START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF], extended to the Heston model setting by Achdou and Pironneau [START_REF] Pironneau | Partial differential equations for option pricing, Handbook of numerical analysis[END_REF]. Our main result is that the commutator analysis gives stronger results for the subclass defined by (1.6), generalizing the particular cases of the VAT and GMH classes, see remarks 6.2 and 6.4. In particular we extend some of the results by [START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF]. This paper is organized as follows. 
In section 2 we give the expression of the bilinear form associated with the original PDE, and check the hypotheses of continuity and semi-coercivity of this bilinear form. In section 3 we show how to refine this analysis by taking into account the commutators of the first-order differential operators This manuscript is for review purposes only. associated with the variational formulation. In section 4 we show how to compute the weighting function involved in the bilinear form. In section 5 we develop the results for a general class introduced in the next section. In section 6 we specialize the results to stochastic volatility models. The appendix recalls the main results of the variational theory for parabolic equations, with a discussion on the characterization of the V functional spaces in the case of one dimensional problems. Notation. We assume that the domain Ω of the PDEs to be considered in the sequel of this paper has the following structure. Let (I, J) be a partition of {0, . . . , N }, with 0 ∈ J, and (1.8) Ω := N Π k=0 Ω k ; with Ω k := R when k ∈ I, (0, ∞) when k ∈ J. Let L 0 (Ω) denote the space of measurable functions over Ω. For a given weighting function ρ : Ω → R of class C 1 , with positive values, we define the weighted space (1.9) L 2,ρ (Ω) := {v ∈ L 0 (Ω); Ω v(x) 2 ρ(x)dx < ∞}, which is a Hilbert space endowed with the norm (1.10) v ρ := Ω v(x) 2 ρ(x)dx 1/2 . By D(Ω) we denote the space of C ∞ functions with compact support in Ω. By H 2 loc (Ω) we denote the space of functions over Ω whose product with an element of D(Ω) belongs to the Sobolev space H 2 (Ω). Besides, let Φ be a vector field over Ω (i.e., a mapping Ω → R n ). The first-order differential operator associated with Φ is, for u : Ω → R the function over Ω defined by (1.11) Φ[u](x) := n i=0 Φ i (x) ∂u ∂x i (x), for all x ∈ Ω. 2. General setting. Here we give compute the bilinear form associated with the original PDE, in the setting of the general multiple factor model (1.4). Then we will check the hypotheses of continuity and semi-coercivity of this bilinear form. 2.1. Variational formulation. We compute the bilinear form of the variational setting, taking into account a general weight function. We wil see how to choose the functional spaces for a given ρ, and then how to choose the weight itself. 2.1.1. The elliptic operator. In financial models the underlying is solution of stochastic differential equations of the form dX(t) = b(t, X(t))dt + nσ i=1 σ i (t, X(t))dW i . (2.1) Here X(t) takes values in Ω, defined in (1.8). That is, X 1 corresponds to the s variable, and X k+1 , for k = 1 to N , corresponds to y k . We have that n σ = 2N . So, b and σ i , for i = 1 to n σ , are mappings (0, T ) × Ω → R n , and the W i , for i = 1 to n σ , are standard Brownian processes with correlation κ ij : (0, T ) × Ω → R This manuscript is for review purposes only. between W i and W j for i, j ∈ {1, . . . , n σ }. The n σ × n σ symmetric correlation matrix κ(•, •) is nonnegative with unit diagonal: (2.2) κ(t, x) 0; κ ii = 1, i = 1, . . . , n σ , for a.a. (t, x) ∈ (0, T ) × Ω. Here, for symmetric matrices B and C of same size, by "C B" we mean that C -B is positive semidefinite. 
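As a small numerical illustration of the correlation structure (1.5) and of requirement (2.2), the sketch below builds the 2N x 2N matrix κ with unit diagonal and nonzero off-diagonal entries only between W_k and W_{N+k}, and checks positive semidefiniteness through its eigenvalues; the value of N and the correlation values are placeholders, not taken from the paper.

import numpy as np

N = 2
kappa_k = [-0.5, 0.3]                 # assumed correlations, one per factor
kappa = np.eye(2 * N)
for k in range(N):
    kappa[k, N + k] = kappa[N + k, k] = kappa_k[k]

# (2.2) asks for kappa >= 0 (positive semidefinite) with unit diagonal; with this
# block structure the eigenvalues are 1 +/- kappa_k, so it holds iff |kappa_k| <= 1.
print(np.linalg.eigvalsh(kappa))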
The expression of the second order differential operator A corresponding to the dynamics (2.1) is, skipping the time and space arguments, for u : (0, T ) × Ω → R: (2.3) Au := ru -b • ∇u -1 2 nσ i,j=1 κ ij σ j u xx σ i , where (2.4) σ j u xx σ i := nσ k, =1 σ kj ∂u 2 ∂x k ∂x σ i , r(x, t ) represents an interest rate, and u xx is the matrix of second derivatives in space of u. The associated backward PDE for a European option is of the form (2.5) -u(t, x) + A(t, x)u(t, x) = f (t, x), (t, x) ∈ (0, T ) × Ω; u(x, T ) = u T (x), x ∈ Ω, with u the notation for the time derivative of u, u T (x) payoff at final time (horizon) T and the r.h.s. f (t, x) represents dividends (often equal to zero). In case of an American option we obtain a variational inequality; for details we refer to Appendix D. since v ∈ D(Ω) there will be no contribution from the boundary. We obtain (2.7) -1 2 Ω σ j u xx σ i vκ ij ρ = 3 p=0 a p ij (u, v), with (2.8) a 0 ij (u, v) := 1 2 Ω n k, =1 σ kj σ i ∂u ∂x k ∂v ∂x κ ij ρ = 1 2 Ω σ j [u]σ i [v]κ ij ρ, (2.9) a 1 ij (u, v) := 1 2 Ω n k, =1 σ kj σ i ∂u ∂x k ∂(κ ij ρ) ∂x v = 1 2 Ω σ j [u]σ i [κ ij ρ] v ρ ρ, (2.10) a 2 ij (u, v) := 1 2 Ω n k, =1 σ kj ∂(σ i ) ∂x ∂u ∂x k vκ ij ρ = 1 2 Ω σ j [u](div σ i )vκ ij ρ, This manuscript is for review purposes only. (2.11) a 3 ij (u, v) := 1 2 Ω n k, =1 ∂(σ kj ) ∂x σ i ∂u ∂x k vκ ij ρ = 1 2 Ω n k=1 σ i [σ kj ] ∂u ∂x k vκ ij ρ. Also, for the contributions of the first and zero order terms resp. we get (2.12) a 4 (u, v) := - Ω b[u]vρ; a 5 (u, v) := Ω ruvρ. Set (2.13) a p := nσ i,j=1 a p ij , p = 0, . . . , 3. The bilinear form associated with the above PDE is (2.14) a(u, v) := 5 p=0 a p (u, v). From the previous discussion we deduce that Lemma 2.1. Let u ∈ H 2 oc (Ω) and v ∈ D(Ω). Then we have that (2.15) a(u, v) = Ω A(t, x)u(x)v(x)ρ(x)dx. 2.1.3. The Gelfand triple. We can view a 0 as the principal term of the bilinear form a(u, v). Let σ denote the n × n σ matrix whose σ j are the columns. Then (2.16) a 0 (u, v) = nσ i,j=1 Ω σ j [u]σ i [v]κ ij ρ = Ω ∇u σκσ ∇vρ. Since κ 0, the above integrand is nonnegative when u = v; therefore, a 0 (u, u) ≥ 0. When κ is the identity we have that a 0 (u, u) is equal to the seminorm a 00 (u, u), where (2.17) a 00 (u, u) := Ω |σ ∇u| 2 ρ. In the presence of correlations it is natural to assume that we have a coercivity of the same order. That is, we assume that (2.18) For some γ ∈ (0, 1]: σκσ γσσ , for all (t, x) ∈ (0, T ) × Ω. This manuscript is for review purposes only. We need to choose a pair (V, H) of Hilbert spaces satisfying the Gelfand conditions for the variational setting of Appendix A, namely V densely and continuously embedded in H, a(•, •) continuous and semi-coercive over V . Additionally, the r.h.s. and final condition of (2.5) should belong to L 2 (0, T ; V * ) and H resp. (and for the second parabolic estimate, to L 2 (0, T ; H) and V resp. ). We do as follows: for some measurable function h : Ω → R + to be specified later we define (2.21)    H := {v ∈ L 0 (Ω); hv ∈ L 2,ρ (Ω)}, V := {v ∈ H; σ i [v] ∈ L 2,ρ (Ω), i = 1, . . . , n σ }, V := {closure of D(Ω) in V}, endowed with the natural norms, (2.22) v H := hv ρ ; u 2 V := a 00 (u, u) + u 2 H . We do not try to characterize the space V since this is problem dependent. Obviously, a 0 (u, v) is a bilinear continuous form over V. We next need to choose h so that a(u, v) is a bilinear and semi-coercive continuous form, and u T ∈ H. 2.2. Continuity and semi-coercivity of the bilinear form over V. 
We will see that the analysis of a 0 to a 2 is relatively easy. It is less obvious to analyze the term (2.23) a 34 (u, v) := a 3 (u, v) + a 4 (u, v). Let qij (t, x) ∈ R n be the vector with kth component equal to (2.24) qijk := κ ij σ i [σ kj ]. Set (2.25) q := nσ i,j=1 qij , q := q -b. Then by (2.11)-(2.12), we have that (2.26) a 34 (u, v) = Ω q[u]vρ. We next need to assume that it is possible to choose η k in L 0 ((0, T ) × Ω), for k = 1 to n σ , such that (2.27) q = nσ k=1 η k σ k . Often the n × n σ matrix σ(t, x) has a.e. rank n. Then the above decomposition is possible. However, the choice for η is not necessarily unique. We will see in examples how to do it. Consider the following hypotheses: h σ ≤ c σ h, where h σ := nσ i,j=1 |σ i [κ ij ρ]/ρ + κ ij div σ i | , a.e. , for some c σ > 0, (2.28) h r ≤ c r h, where h r := |r| 1/2 , a.e., for some c r > 0, (2.29) h η ≤ c η h, where h η := |η|, a.e., for some c η > 0. (2.30) This manuscript is for review purposes only. Remark 2.3. Let us set for any differentiable vector field Z : Ω → R n (2.31) G ρ (Z) := div Z + Z[ρ] ρ . Since κ ii = 1, (2.28) implies that (2.32) |G ρ (σ i )| ≤ c σ h, i = 1; . . . , n σ . Remark 2.4. Since (2.33) σ i [κ ij ρ] = σ i [κ ij ]ρ + σ i [ρ]κ ij , and |κ ij | ≤ 1 a.e., a sufficient condition for (2.28) is that there exist a positive constants c σ such that (2.34) h σ ≤ c σ h; h σ := nσ i,j=1 |σ i [κ ij ]| + nσ i=1 (|div σ i | + |σ i [ρ]/ρ|) . We will see in section 4 how to choose the weight ρ so that |σ i [ρ]/ρ| can be easily estimated as a function of σ. Lemma 2.5. Let (2.28)-(2.30) hold. Then the bilinear form a(u, v) is both (i) continuous over V , and (ii) semi-coercive, in the sense of (A.5). Proof. (i) We have that a 1 + a 2 is continuous, since by (2.9)-(2.10), (2.28) and the Cauchy-Schwarz inequality: (2.35) |a 1 (u, v) + a 2 (u, v)| ≤ nσ i,j=1 |a 1 ij (u, v) + a 2 ij (u, v)| ≤ nσ j=1 σ j [u] ρ nσ i=1 (σ i [κ ij ρ]/ρ + κ ij div σ i ) v ρ ≤ c σ n σ v H nσ j=1 σ j [u] ρ . (ii) Also, a 34 is continuous, since by (2.27) and (2.30): (2.36) |a 34 (u, v)| ≤ nσ k=1 σ k [u] ρ η k v ρ ≤ c η v H nσ k=1 σ k [u] ρ . Set c := c σ n σ + c 2 η . By (2.35)-(2.36), we have that (2.37) |a 5 (u, v)| ≤ |r| 1/2 u 2,ρ |r| 1/2 v 2,ρ ≤ c 2 r u H v H , |a 1 (u, v) + a 2 (u, v) + a 34 (u, v)| ≤ ca 00 (u) 1/2 v H . Since a 0 is obviously continuous, the continuity of a(u, v) follows. (iii) Semi-coercivity. Using (2.37) and Young's inequality, we get that (2.38) a(u, u) ≥ a 0 (u, u) -a 1 (u, u) + a 2 (u, u) + a 34 (u, u) -a 5 (u, u) ≥ γa 00 (u) -ca 00 (u) 1/2 u H -c r u 2 H ≥ 1 2 γa 00 (u) -1 2 c 2 γ + c r u 2 H , which means that a is semi-coercive. This manuscript is for review purposes only. The above consideration allow to derive well-posedness results for parabolic equations and parabolic variational inequalities. Theorem 2.6. (i) Let (V, H) be given by (2.21), with h satisfying (2.28)-(2.30), (f, u T ) ∈ L 2 (0, T ; V * )×H. Then equation (2.5) has a unique solution u in L 2 (0, T ; V ) with u ∈ L 2 (0, T ; V * ), and the mapping (f, u T ) → u is nondecreasing. (ii) If in addition the semi-symmetry condition (A.8) holds, then u in L ∞ (0, T ; V ) and u ∈ L 2 (0, T ; H). Proof. This is a direct consequence of Propositions A.1, A.2 and C.1. We next consider the case of parabolic variational inequalities associated with the set (2.39) (ii) Let in addition the semi-symmetry condition (A.8) be satisfied. Then u is the unique solution of the strong formulation (B.2), belongs to L ∞ (0, T ; V ), and u belongs to L 2 (0, T ; H). 
K := {ψ ∈ V : ψ(x) ≥ Ψ(x) a.e. in Proof. This follows from Propositions B.1 and C.2. 3. Variational analysis using the commutator analysis. In the following a commutator for first order differential operators is introduced, and calculus rules are derived. 3.1. Commutators. Let u : Ω → R be of class C 2 . Let Φ and Ψ be two vector fields over Ω, both of class C 1 . Recalling (1.11), we may define the commutator of the first-order differential operators associated with Φ and Ψ as (3.1) [Φ, Ψ][u] := Φ[Ψ[u]] -Ψ[Φ[u]]. Note that (3.2) Φ[Ψ[u]] = n i=1 Φ i ∂(Ψu) ∂x i = n i=1 Φ i n k=1 ∂Ψ k ∂x i ∂u ∂x k + Ψ k ∂ 2 u ∂x k ∂x i . So, the expression of the commutator is (3.3) [Φ, Ψ] [u] = n i=1 Φ i n k=1 ∂Ψ k ∂x i ∂u ∂x k -Ψ i n k=1 ∂Φ k ∂x i ∂u ∂x k = n k=1 n i=1 Φ i ∂Ψ k ∂x i -Ψ i ∂Φ k ∂x i ∂u ∂x k . It is another first-order differential operator associated with a vector field (which happens to be the Lie bracket of Φ and Ψ, see e.g. [START_REF] Aubin | A course in differential geometry[END_REF]). This manuscript is for review purposes only. Adjoint. Remembering that H was defined in (2.21), given two vector fields Φ and Ψ over Ω, we define the spaces V(Φ, Ψ) := {v ∈ H; Φ[v], Ψ[v] ∈ H} , (3.4) V (Φ, Ψ) := {closure of D(Ω) in V(Φ, Ψ)} . (3.5) We define the adjoint Φ of Φ (view as an operator over say C ∞ (Ω, R), the latter being endowed with the scalar product of L 2,ρ (Ω)), by (3.6) Φ [u], v ρ = u, Φ[v] ρ for all u, v ∈ D(Ω), where •, • ρ denotes the scalar product in L 2,ρ (Ω). Thus, there holds the identity (3.7) Ω Φ [u](x)v(x)ρ(x)dx = Ω u(x)Φ[v](x)ρ(x)dx for all u, v ∈ D(Ω). Furthermore, (3.8 ) Ω u n i=1 Φ i ∂v ∂x i ρdx = - n i=1 Ω v ∂ ∂x i (uρΦ i )dx = - n i=1 Ω v ∂ ∂x i (uΦ i ) + u ρ Φ i ∂ρ ∂x i ρdx. Hence, (3.9) Φ [u] = - n i=1 ∂ ∂x i (uΦ i ) -uΦ i ∂ρ ∂x i /ρ = -u div Φ -Φ[u] -uΦ[ρ]/ρ. Remembering the definition of G ρ (Φ) in (2.31), we obtain that (3.10) Φ[u] + Φ [u] + G ρ (Φ)u = 0. Continuity of the bilinear form associated with the commutator. Setting, for v and w in V (Φ, Ψ): (3.11) ∆(u, v) := Ω [Φ, Ψ][u](x)v(x)ρ(x)dx, we have (3.12) ∆(u, v) = Ω (Φ[Ψ[u]]v -Ψ[Φ[u]]v)ρdx = Ω Ψ[u]Φ [v] -Φ[u]Ψ [v])ρdx = Ω (Φ[u]Ψ[v] -Ψ[u]Φ[v]) ρdx + Ω (Φ[u]G ρ (Ψ)v -Ψ[u]G ρ (Φ)v) ρdx. Lemma 3.1. For ∆(•, •) to be a continuous bilinear form on V (Φ, Ψ), it suffices that, for some c ∆ > 0: (3.13) |G ρ (Φ)| + |G ρ (Ψ)| ≤ c ∆ h a.e., and we have then: (3.14) |∆(u, v)| ≤ Ψ[u] ρ Φ[v] ρ + c ∆ v H + Φ[u] ρ Ψ[v] ρ + c ∆ v H . This manuscript is for review purposes only. Proof. Apply the Cauchy Schwarz inequality to (3.12), and use (3.13) combined with the definition of the space H. We apply the previous results with Φ := σ i , Ψ := σ j . Set for v, w in V : (3.15) ∆ ij (u, v) := Ω [σ i , σ j ][u](x)v(x)ρ(x)dx, i, j = 1, . . . , n σ . We recall that V was defined in (2.21). Corollary 3.2. Let (2.28) hold. Then the ∆ ij (u, v), i, j = 1, . . . , n σ , are continuous bilinear forms over V . Proof. Use remark 2.3 and conclude with lemma 3.1. 3.4. Redefining the space H. In section 2.2 we have obtained the continuity and semi-coercivity of a by decomposing q, defined in (2.26), as a linear combination (2.27) of the σ i . We now take advantage of the previous computation of commutators and assume that, more generally, instead of (2.27), we can decompose q in the form (3.16) q = nσ k=1 η k σ k + 1≤i<j≤nσ η ij [σ i , σ j ] a.e. 
We assume that η and η are measurable functions over [0, T ] × Ω, that η is weakly differentiable, and that for some c η > 0: (3.17) h η ≤ c η h, where h η := |η | + N i,j=1 σ i [η ij ] a.e., η ∈ L ∞ (Ω). Lemma 3. Ω σ k [u]η k vρ ≤ σ k [u] ρ σ k [u]η k v ρ ≤ σ k [u] ρ v H . (ii) Setting w := η ij v and taking here (Φ, Ψ) = (σ i , σ j ), we get that (3.19) Ω η ij [σ i , σ j )[u]vρ = ∆(u, w), where ∆(•, •) was defined in (3.11). Combining with lemma 3.1, we obtain (3.20) |∆ ij (u, v)| ≤ σ j [u] ρ σ i [w] ρ + c σ η ij ∞ v H + σ i [u] ρ σ j [w] ρ + c σ η ij ∞ v H . Since (3.21) σ i [η ij v] = η ij σ i [v] + σ i [η ij ]v, This manuscript is for review purposes only. by (3.17): (3.22) σ i [w] ρ ≤ η ij ∞ σ i [v] ρ + σ i [η ij ]v ρ ≤ η ij ∞ σ i [v] ρ + c η v H . Combining these inequalities, point (i) follows. (ii) Use u = v in (3.21) and (3.12). We find after cancellation in (3.12) that (3.23) ∆ ij (u, η ij u) = Ω u(σ i [u]σ j [η ij ] -σ j [u]σ i (η ij ))ρ + Ω (σ i [u]G ρ (σ j ) -σ j [u]G ρ (σ i )) η ij uρ. By (3.17), an upper bound for the absolute value of the first integral is (3.24) σ i [u] ρ + σ j [u] ρ hu ρ ≤ 2 u V u H . With (2.28), we get an upper bound for the absolute value of the second integral in the same way, so, for any ε > 0: (3.25) |∆ ij (u, η ij u)| ≤ 4 u V u H . We finally have that for some c > 0 (3.26) a(u, u) ≥ a 0 (u, u) -c u V u H , ≥ a 0 (u, u) -1 2 u 2 V -1 2 c 2 u 2 H , = 1 2 u 2 V -1 2 (c 2 + 1) u 2 H . The conclusion follows. Remark 3.4. The statements analogous to theorems 2.6 and 2.7 hold, assuming now that h satisfies (2.28), (2.29), and (3.17) (instead of (2.28)-(2.30)). We remind that (I, J) is a partition of {0, . . . , N }, with 0 ∈ J and that Ω was defined in (1.8). , with index from 0 to N . Let G(γ , γ ) be the class of functions ϕ : Ω → R such that for some c > 0: (4.1) |ϕ(x)| ≤ c Π k∈I (e γ k x k + e -γ k x k ) Π k∈J (x γ k k + x -γ k k ) . We define G as the union of G(γ , γ ) for all nonnegative (γ , γ ). We call γ k and γ k the growth order of ϕ, w.r.t. x k , at -∞ and +∞ (resp. at zero and +∞). Observe that the class G is stable by the operations of sum and product, and that if f , g belong to that class, so does h = f g, h having growth orders equal to the sum of the growth orders of f and g. For a ∈ R, we define (4.2) a + := max(0, a); a -:= max(0, -a); N (a) := (a 2 + 1) 1/2 , This manuscript is for review purposes only. as well as (4.3) ρ := ρ I ρ J , where ρ I (x) := Π k∈I e -α k N (x + k )-α k N (x - k ) , (4.4) ρ J (x) := Π k∈J x α k k 1 + x α k +α k k , (4.5) for some nonnegative constants α k , α k , to be specified later. Lemma 4.2. Let ϕ ∈ G(γ , γ ). Then ϕ ∈ L 1,ρ (Ω) whenever ρ is as above, with α satisfying, for some positive ε and ε , for all k = 0 to N : (4.6) α k = ε + γ k , α k = ε + γ k , k ∈ I, α k = (ε + γ k -1) + , α k = 1 + ε + γ k , k ∈ J. In addition we can choose for k = 0 (if element of J): (4.7) α 0 := (ε + γ 0 -1) + ; α 0 := 0 if ϕ(s, y) = 0 when s is far from 0, α 0 := 0, α 0 := 1 + ε + γ 0 , if ϕ(s, y) = 0 when s is close to 0. Proof. It is enough to prove (4.6), the proof of (4.7) is similar. We know that ϕ satisfy (4.1) for some c > 0 and γ. We need to check the finiteness of (4.8) Ω Π k∈I (e γ k y k + e -γ k y k ) Π k∈J (y γ k k + y -γ k k ) ρ(s, y)d(s, y). But the above integral is equal to the product p I p J with p I := Π k∈I R (e γ k x k + e -γ k x k )e -α k N (x + k )-α k N (x - k ) dx k , (4.9) p J := Π k∈J R+ x α k +γ k k + x α k -γ k k 1 + x α k +α k k dx k . 
(4.10) Using (4.6) we deduce that p I is finite since for instance (4.11) R+ (e γ k x k + e -γ k x k )e -α k N (x + k )-α k N (x - k ) dx k ≤ 2 R+ e γ k x k e -(1+γ k )x k dx k = 2 R+ e -x k dx k = 2, and p J is finite since (4.12) p J = Π k∈J R+ x ε +γ k +γ k k + x ε -1 k 1 + x ε +ε +γ k +γ k k dx k < ∞. The conclusion follows. This manuscript is for review purposes only. 4.2. On the growth order of h. Set for all k (4.13) α k := α k + α k . Remember that we take ρ in the form (4.3)-(4.4). Lemma 4.3. We have that: (i) We have that (4.14) ρ x k ρ ∞ ≤ α k , k ∈ I; x ρ ρ x k ∞ ≤ α k , k ∈ J. (ii) Let h satisfying either (2.28)-(2.30) or (2.28)-(2.29), and (3.17). Then the growth order of h does not depend on the choice of the weighting function ρ. Proof. (i) For k ∈ I this is an easy consequence of the fact that N (•) is non expansive. For k ∈ J, we have that (4.15) x ρ ρ x k = x ρ α k x α k -1 (1 + x α k ) -x α k α k x α k -1 (1 + x α k ) 2 = α k -α k x α k 1 + x α k . We easily conclude, discussing the sign of the numerator. (ii) The dependence of h w.r.t. ρ is only through the last term in (2.28), namely, i |σ i [ρ]/ρ. By (i) we have that (4.16) σ k i [ρ] ρ ≤ ρ x k ρ ∞ |σ k i | ≤ α k |σ k i |, k ∈ I, (4.17) σ k i [ρ] ρ ≤ x k ρ x k ρ ∞ σ k i x k ≤ α k σ k i x k , k ∈ J. In both cases, the choice of α has no influence on the growth order of h. European option. In the case of a European option with payoff u T (x), we need to check that u T ∈ H, that is, ρ must satisfy (4.18) Ω |u T (x)| 2 h(x) 2 ρ(x)dx < ∞. In the framework of the semi-symmetry hypothesis (A.8), we need to check that u T ∈ V , which gives the additional condition (4.19) nσ i=1 Ω |σ i [u T ](x)| 2 ρ(x)dx < ∞. In practice the payoff depends only on s and this allows to simplify the analysis. Applications using the commutator analysis. The commutator analysis is applied to the general multiple factor model and estimates for the function h characterizing the space H (defined in (2.21)) are derived. The estimates are compared to the case when the commutator analysis is not applied. The resulting improvement wil be established in the next section. This manuscript is for review purposes only. Commutator and continuity analysis. We analyze the general multiple factor model (1.4), which belongs to the class of models (2.1) with Ω ⊂ R 1+N , n σ = 2N , and for i = 1 to N : (5.1) σ i [v] = f i (y i )s βi v s ; σ N +i [v] = g i (y i )v i , with f i and g i of class C 1 over Ω. We need to compute the commutators of the firstorder differential operators associated with the σ i . The correlations will be denoted (5.3) [Z, Z ][u] = ab x1 u x2 -ba x2 u x1 . We obtain that (5.4) [σ i , σ ][u] = (β -β i )f i (y i )f (y )s βi+β -1 u s , 1 ≤ i < ≤ N, (5.5) [σ i , σ N +i ][u] = -s βi f i (y i )g i (y i )u s , i = 1, . . . , N, and (5.6) [σ i , σ N + ][u] = [σ N +i , σ N + ][u] = 0, i = . Also, (5.7) div σ i + σ i [ρ] ρ = f i (y i )s βi-1 (β i + s ρ s ρ ), div σ N +i + σ N +i [ρ] ρ = g i (y i ) + g i (y i ) ρ i ρ . 5.1.1. Computation of q. Remember the definitions of q, q and q in (2.24) and (2.25), where δ ij denote the Kronecker operator. We obtain that, for 1 ≤ i, j, k ≤ N : (5.8)        qij0 = δ ij β j f 2 i (y i )s 2βi-1 ; qiik = 0; qi,N+j = 0; qN+i,j,0 = δ ij κi f (y i )g i (y i )s βi ; qN+i,j,k = 0; qN+i,N+j,k = δ ijk g i (y i )g i (y i ). 
That means, we have for q = 2N i,j=1 qij and q = q -b that (5.9) q0 = N i=1 β i f 2 i (y i )s 2βi-1 + κi f (y i )g i (y i )s βi ; q 0 = q0 -rs, qk = g k (y k )g k (y k ); q k = qk -θ k (µ k -y k ), k = 1, . . . , N. This manuscript is for review purposes only. Computation of η and η . The coefficients η , η are solution of (3.16). We can write η = η + η, where (5.10) q = nσ i=1 η i σ i + 1≤i,j≤nσ η ij [σ i , σ j ], η ij = 0 if i = j. -b = nσ i=1 η i σ i + 1≤i,j≤nσ η ij [σ i , σ j ], η ij = 0 if i = j. For k = 1 to N , this reduces to (5.11) η N +k g k (y k ) = g k (y k )g k (y k ); η N +k g k (y k ) = -θ k (µ k -y k ). So, we have that (5.12)    η N +k = g k (y k ); η N +k = -θ k (µ k -y k ) g k (y k ) . For the 0th component, (5.10) can be expressed as (5.13)                      N k=1 -η k,N +k f k (y k )g k (y k )s β k -κk f k (y k )g k (y k )s β k + N k=1 η k f k (y k )s β k -β k f 2 k (y k )s 2β k -1 + N k=1 -η k,N +k f k (y k )g k (y k )s β k + η k f k (y k )s β k -rs = 0. We choose to set each term in parenthesis in the first two lines above to zero. It follows that η k,N +k = -κ k ∈ L ∞ (Ω), η k = β k f k (y k )s β k -1 . (5.14) If N > 1 we (arbitrarily) choose then to set the last line to zero with (5.15) η k = η k = 0, k = 2, . . . , N. It remains that η 1 f 1 (y 1 )s β1 -η 1,N +1 f 1 (y 1 )g 1 (y 1 )s β1 = rs. (5.16) Here, we can choose to take either η 1 = 0 or η 1,N +1 = 0. We obtain then two possibilities: (5.17)        (i) η 1 = 0 and η 1,N +1 = -rs 1-β1 f 1 (y 1 )g 1 (y 1 ) , (ii) η 1 = rs 1-β1 f 1 (y 1 ) and η 1,N +1 = 0. This manuscript is for review purposes only. Estimate of the h function. We decide to choose case (i) in (5.17). The function h needs to satisfy (2.28), (2.29), and (3.17) (instead of (2.30)). Instead of (2.28), we will rather check the stronger condition (2.34). We compute h σ := N k=1 |f k (y k )|s β k |(κ k ) s | + | ρ s ρ | + |g k (y k )| |(κ k ) k | + | ρ k ρ | (5.18) + N k=1 β k |f k (y k )s β k -1 | + |g k (y k )| , h r := |r| 1 2 , (5.19) h η := ĥ η + h η , (5.20) where we have ĥ η := N k=1 β k |f k (y k )|s β k -1 + |g k (y k )| + f k (y k )|s β k ∂κ k ∂s + g k (y k ) ∂κ k ∂y k , (5.21) h η := N k=1 θ k (µ k -y k ) g k (y k ) + r f 1 (y 1 ) f 1 (y 1 )g 1 (y 1 ) + rg 1 (y 1 )s 1-β1 ∂ ∂y 1 1 f 1 (y 1 )g 1 (y 1 ) . (5.22) Remark 5.2. Had we chosen (ii) instead of (i) in (5.17), this would only change the expression of h η that would then be (5.23) h η = N k=1 θ k (µ k -y k ) g k (y k ) + rs 1-β1 f 1 (y 1 ) . Estimate of the h function without the commutator analysis. The only change in the estimate of h will be the contribution of h η and h η . We have to satisfy (2.28)-(2.30). In addition, ignoring the commutator analysis, we would solve (5.13) with η = 0, meaning that we choose (5.24) η k := β k f k (y k )s β k -1 + κk f k (y k )g k (y k ) f k (y k ) , k = 1, . . . , N, and take η 1 out of (5.16). Then condition (3.17), with here η = 0, would give (5.25) h ≥ c η h η , where h η := h η + h η , with h η := N k=1 β k |f k (y k )|s β k -1 + |κ k | f k (y k )g k (y k ) f k (y k ) + |g k (y k )| , (5.26) h η := N k=1 θ k (µ k -y k ) g k (y k ) + rs 1-β1 f 1 (y 1 ) . (5.27) We will see in applications that this is in general worse. This manuscript is for review purposes only. 6. Application to stochastic volatility models. The results of Section 5 are specified for a subclass of the multiple factor model, in particular for the VAT and GMH models. 
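As a concrete check of the commutator formulas (5.4)-(5.5), the following symbolic sketch verifies (5.5) for a single Heston-type factor (β = 1, f(y) = √y, g(y) = ν√y); the use of sympy and the helper names are assumptions made for the illustration only.

import sympy as sp

s, y, nu = sp.symbols('s y nu', positive=True)
u = sp.Function('u')(s, y)

def sigma1(v):   # sigma_1[v] = f(y) s^beta v_s with beta = 1, f(y) = sqrt(y)
    return sp.sqrt(y) * s * sp.diff(v, s)

def sigma2(v):   # sigma_{N+1}[v] = g(y) v_y with g(y) = nu sqrt(y)
    return nu * sp.sqrt(y) * sp.diff(v, y)

# Commutator (3.1): [sigma_1, sigma_2][u] = sigma_1[sigma_2[u]] - sigma_2[sigma_1[u]]
commutator = sp.simplify(sigma1(sigma2(u)) - sigma2(sigma1(u)))
# Expected first-order operator from (5.5): -s^beta f'(y) g(y) u_s = -(nu/2) s u_s
expected = -s * sp.diff(sp.sqrt(y), y) * nu * sp.sqrt(y) * sp.diff(u, s)
print(sp.simplify(commutator - expected))   # prints 0: the second-order terms cancel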
We show that the commutator analysis allows to take smaller values for the function h (and consequently to include a larger class of payoff functions). 6.1. A useful subclass. Here we assume that (6.1) |f k (y k )| = |y k | γ k ; |g k (y k )| = ν k |y k | 1-γ k ; β k ∈ (0, 1]; ν k > 0; γ k ∈ (0, ∞). Furthermore, we assume κ to be constant and (6.2) |f k (y k )g k (y k )| = const for all y k , k = 1, . . . , N. Set (6.3) c s := sρ s /ρ ∞ ; c k = ρ k /ρ ∞ if Ω k = R, 0 otherwise. c k = 0 if Ω k = R, y k ρ k /ρ ∞ otherwise. We get, assuming that γ 1 = 0: (6.4) h σ := N k=1 c s |y k | γ k s β k -1 + ν k c k |y k | 1-γ k +ν k c k |y k | -γ k + β k |y k | γ k s β k -1 + (1 -γ k )ν k |y k | -γ k , ĥ η := N k=1 β k |y k | γ k s β k -1 + (1 -γ k )ν k |y k | -γ k , (6.5) h η := N k=1 θ k |µ k -y k | ν k |y k | 1-γ k + r|y 1 | γ1 γ 1 ν 1 . (6.6) Therefore when all y k ∈ R, we can choose h as (6.7) h := 1 + N k=1 |y k | γ k (1 + s β k -1 ) + (1 -γ k )|y k | -γ k + |y k | γ k -1 + k∈I |y k | 1-γ k + k∈J |y k | -γ k . Without the commutator analysis we would get ĥη := N k=1 (β k |y k | γ k s β k -1 + ν k |κ k ||y k | -γ k + (1 -γ k )ν k |y k | -γ k ), (6.8) hη := N k=1 θ k |µ k -y k | ν k |y k | 1-γ k + rs 1-β1 |y 1 | -γ1 . (6.9) This manuscript is for review purposes only. Therefore we can choose (6.10) h := h ; h := h + rs 1-β1 /|y 1 | γ1 + k ν k |κ k ||y k | -γ k . So, we always have that h ≤ h , meaning that it is advantageous to use the commutator analysis, due to the term rs 1-β1 /|y 1 | γ1 above in particular. The last term in the above r.h.s. has as contribution only when γ k = 1 (since otherwise h includes a term of the same order). Application to the VAT model. For the variant of the Achdou and Tchou model with multiple factors (VAT), i.e. when γ k = 1, for k = 1 to N , we can take h equal to (6.11) h T A := 1 + N k=1 |y k |(1 + s β k -1 ), when the commutator analysis is used, and when it is not, take h equal to (6.12) h T A := h T A + rs 1-β1 |y 1 | -1 + N k=1 ν k |κ k ||y k | -1 . Remember that u T (s) = (s -K) + for a call option, and u T (s) = (K -s) + for a put option, both with strike K > 0. Lemma 6.1. For the VAT model, using the commutator analysis, in case of a call (resp. put) option with strike K > 0, we can take ρ = ρ call , (resp. ρ = ρ put ), with (6.13) ρ call (s, y) := (1 + s 3+ε ) -1 Π N k=1 e -εN (y k ) , ρ put (s, y) := s α P 1 + s α P Π N k=1 e -εN (y k ) , where α P := ε + 2 N k=1 (1 -β k ) -1 + . Proof. (i) In the case of a call option, we have that (6.14) 1 ≥ c 0 s β k -1 for c 0 > 0 small enough over the domain of integration, so that we can as well take (6.15) h(s, y) = 1 + N k=1 |y k | ≤ Π N k=1 (1 + |y k |). So, we need that ϕ(s, y) ∈ L 1,ρ (Ω), with (6.16) ϕ(s, y) = h 2 (s, y)u 2 T (s) = (s -K) 2 + Π N k=1 (1 + |y k |) 2 . By lemma 4.2, where here J = {0} and I = {1, . . . , N }, we may take resp. (6.17) γ 0 = 2, γ 0 = 0, γ k > 0, γ k > 0, k = 1, . . . , N, and so we may choose for ε > 0 and ε > 0: (6.18) α 0 = 0, α 0 = 3 + ε , α k = ε , α k = ε , k = 1, . . . , N, This manuscript is for review purposes only. so that setting ε := ε + ε , we can take ρ = ρ call . (ii) For a put option with strike K > 0, 1 ≤ c 0 s β k -1 for big enough c 0 > 0, over the domain of integration, so that we can as well take (6.19) h(s, y) = 1 + N k=1 |y k |s β k -1 ≤ Π N k=1 (1 + |y k |s β k -1 ) 2 ≤ Π N k=1 s 2β k -2 (1 + |y k |) 2 and (6.20) ϕ(s, y) = h 2 (s, y)u 2 T (s) ≤ (K -s) 2 + Π N k=1 s 2β k -2 (1 + |y k |) 2 . 
By lemma 4.2, in the case of a put option and since (K -s) 2 + is bounded, we can take γ k , γ k , α k , α k as before, for k = 1 to N , and (6.21) γ 0 = 0, γ 0 = 2 N k=1 (1 -β k ), α 0 = ε + 2 N k=1 (1 -β k ) -1 + , α 0 = 0 the result follows. Remark 6.2. If we do not use the commutator analysis, then we have a greater "h" function; we can check that our previous choice of ρ does not apply any more (so we should consider a smaller weight function, but we do not need to make it explicit). And indeed, we have then a singularity when say y 1 is close to zero so that the previous choice of ρ makes the p integral undefined. Application to the GMH model. For the generalized multiple factor Heston model (GMH), i.e. when γ k = 1/2, k = 1 to N , we can take h equal to (6.22) h H := 1 + N k=1 |y k | 1 2 (1 + s β k -1 ) + |y k | -1 2 , when the commutator analysis is used, and when it is not, take h equal to (6.23) h H := h H + rs 1-β1 |y 1 | -1 2 . Lemma 6.3. (i) For the GMH model, using the commutator analysis, in case of a call option with strike K, meaning that u T (s) = (s -K) + , we can take ρ = ρ call , with (6.24) ρ call (s, y) := (1 + s ε +3 ) -1 Π N k=1 y ε k (1 + y ε+2 k ) -1 . (ii) For a put option with strike K > 0, we can take ρ = ρ put , with (6.25) ρ put (s, y) := Π N k=1 y ε k (1 + y ε+2 k ) -1 . Proof. (i) For the call option, using (6.14) we see that we can as well take (6.26) h(s, y) ≤ 1 + N k=1 y 1/2 k + y -1/2 k ≤ (s -K) 2 + Π N k=1 (1 + y 1/2 k + y -1/2 k ). So, we need that ϕ(s, y) ∈ L 1,ρ (Ω), with (6.27) ϕ(s, y) = h 2 (s, y)u 2 T (s) = (s -K) 2 + Π N k=1 (1 + y 1/2 k + y -1/2 k ). This manuscript is for review purposes only. By lemma 4.2, where here J = {0, . . . , N }, we may take resp. (6.28) γ 0 = 2, γ 0 = 0, γ k = 1, γ k = 1, k = 1, . . . , N, and so we may choose for ε > 0 and ε > 0: (6.29) α 0 = 0, α 0 = 3 + ε , α k = ε , α k = ε + 2, k = 1, . . . , N, so that setting ε := ε + ε , we can take ρ = ρ call . (ii) For a put option with strike K > 0,1 ≤ c 0 s β k -1 for big enough c 0 > 0, over the domain of integration, so that we can as well take (6.30) h(s, y) = 1 + N k=1 |y k |s β k -1 ≤ Π N k=1 (1 + |y k |s β k -1 ) 2 ≤ Π N k=1 s 2β k -2 (1 + |y k |) 2 and (6.31) ϕ(s, y) = h 2 (s, y)u 2 T (s) ≤ (K -s) 2 + Π N k=1 s 2β k -2 (1 + |y k |) 2 . By lemma 4.2, in the case of a put option and since (K -s) 2 + is bounded, we can take γ k , γ k , α k , α k as before, for k = 1 to N , and (6.32) γ 0 = 0, γ 0 = 0, α 0 = 0, α 0 = 0. the result follows. Remark 6.4. If we do not use the commutator analysis, then, again, we have a greater "h" function; we can check that our previous choice of ρ does not apply any more (so we should consider a smaller weight function, but we do not need to make it explicit). And indeed, by the behaviour of the integral for large s the previous choice of ρ makes the p integral undefined. and since u(T ) = u T we find that (B.4)      T 0 -v(t) + A(t)u(t) -f (t), v -u(t) V ≥ 1 2 u(0) -v(0) 2 H -1 2 u(T ) -v(T ) 2 H for all v ∈ W (0, T ; K), u(T ) = u T . It can be proved that the two formulation (B.2) and (B.4) are equivalent (they have the same set of solutions), and that they have at most one solution. The weak formulation is as follows: find u ∈ L 2 (0, T ; K) ∩ C(0, T ; H) such that (B.5)      T 0 -v(t) + A(t)u(t) -f (t), v -u(t) V ≥ -1 2 u(T ) -v(T ) 2 H for all v ∈ L 2 (0, T ; K), u(T ) = u T . Clearly a solution of the strong formulation (B.2) is solution of the weak one. Proposition B.1 (Brézis [6]). 
The following holds: (i) Let u T ∈ K and f ∈ L 2 (0, T ; V * ). Then the weak formulation (B.5) has a unique solution u and, for some c > 0, given v 0 ∈ K: (B.6) u L ∞ (0,T ;H) + u L 2 (0,T ;V ) ≤ c( u T H + f L 2 (0,T ;V * ) + v 0 V ). (ii) Let in addition the semi-symmetry hypothesis (A.8) hold, and let u T belong to K. Then u ∈ L ∞ (0, T ; V ), u ∈ L 2 (0, T ; H), and u is the unique solution of the original formulation (B.2). Furthermore, for some c > 0: (B.7) u L ∞ (0,T ;V ) + u L 2 (0,T ;H) ≤ c( u T V + f L 2 (0,T ;H) ). Appendix C. Monotonicity. Assume that H is an Hilbert lattice, i.e., is endowed with an order relation compatible with the vector space structure: (C.1) x 1 x 2 implies that γx 1 + x γx 2 + x, for all γ ≥ 0 and x ∈ H, such that the maxima and minima denoted by max(x 1 , x 2 ) and min(x 1 , x 2 ) are well defined, the operator max, min be continuous, with min(x 1 , x 2 ) = -max(-x 1 , -x 2 ). Setting x + := max(x, 0) and x -:= -min(x, 0) we have that x = x + -x -. Assuming that the maximum of two elements of V belong to V we see that we have an induced lattice structure on V . The induced dual order over V * is as follows: for v * 1 and v * 2 in V * , we say that v * 1 ≥ v * 2 if v * 1 -v * 2 , v V ≥ 0 whenever v ≥ 0. Assume that we have the following extension of the integration by parts formula (B.3): for all u, v in W (0, T ) and 0 ≤ t < t ≤ T , (C.2) 2 t t u(s), u + (s) V ds = u + (t ) 2 H -u + (t) 2 H . and that (C.3) A(t)u, u + V = A(t)u + , u + V . Proposition C.1. Let u i be solution of the parabolic equation (A.6) for (f, u T ) = (f i , u i T ), i = 1, 2. If f 1 ≥ f 2 and u 1 T ≥ u 2 T , then u 1 ≥ u 2 . This manuscript is for review purposes only. This type of result may be extended to the case of variational inequalities. If K and K are two subsets of V , we say that K dominates K if for any u ∈ K and u ∈ K , max(u, u ) ∈ K and min(u, u ) ∈ K . Proposition C.2. Let u i be solution of the weak formulation (B.5) of the parabolic variational inequality for (f, u T , K) = (f i , u i T , K i ), i = 1, 2. If f 1 ≥ f 2 , u 1 T ≥ u 2 T , and K 1 dominates K 2 , then u 1 ≥ u 2 . The monotonicity w.r.t. the convex K is due to Haugazeau [START_REF] Haugazeau | Sur des inéquations variationnelles[END_REF] (in an elliptic setting, but the result is easily extended to the parabolic one). See also Brézis [START_REF]Problèmes unilatéraux[END_REF]. Appendix D. Link with American options. An American option is the right to get a payoff Ψ(t, x) at any time t < T and u T at time T . We can motivate as follows the derivation of the associated variational inequalities. If the option can be exercized only at times t k = hk, with h = T /M and k = 0 to M (Bermudean option), then the same PDE as for the European option holds over (t k , t k+1 ), k = 0 to M -1. Denoting by ũk the solution of this PDE, we have that u(t k ) = max(Ψ, ũk ). Assuming that A does not depend on time and that there is a flux f (t, x) of dividents, we compute the approximation u k of u(t k ) as follows. Discretizing the PDE with the implicit Euler scheme we obtain the continuation value ûk solution of (D.1) ûk -u k+1 h + Aû k = f (t k , •), k = 0, . . . , M -1; u M = max(Ψ, 0), so that u k = u k+1 -hAû k + hf (t k , •), we find that (D.2) u k = max(û k , Ψ) = max(u k+1 -hAû k + hf (t k , •), Ψ), which is equivalent to (D.3) min(u k -Ψ, u k -u k+1 h + Aû k -f (t k , •)) = 0. This suggest for the continuous time model and general operators A and r.h.s. 
f the following formulation (D.4) min(u(t, x)-Ψ(x), -u(t, x)+A(t, x)u(t, x)-f (t, x)) = 0, (t, x) ∈ (0, T )×Ω. The above equation has a rigorous mathematical sense in the context of viscosity solution, see Barles [START_REF] Barles | Convergence of numerical schemes for degenerate parabolic equations arising in finance theory[END_REF]. However we rather need the variational formulation which can be derived as follows. Let v(x) satisfy v(x) ≥ Ψ(x) a.e., be smooth enough. Then (D.5) Ω (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx = {u(t,x)=Ψ(x)} (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx + {u(t,x)>Ψ(x)} (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx. The first integrand is nonnegative, being a product of nonnegative terms, and the second integrand is equal to 0 since by (D.3), -u(t, x) + A(t, x)u(t, x) -f (t, x)) = 0 a.e. when u(t, x) > Ψ(x). So we have that, for all v ≥ Ψ smooth enough: (D.6) Ω (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx ≥ 0. This manuscript is for review purposes only. We see that this is of the same nature as a parabolic variational inequality, where K is the set of functions greater or equal to Ψ (in an appropriate Sobolev space). Appendix E. Some one dimensional problems. It is not always easy to characterize the space V. Let us give a detailed analysis in a simple case. E.1. The Black-Scholes setting. For the Black-Scholes model with zero interest rate (the extension to a constant nonzero interest rate is easy) and unit volatility coefficient, we have that Au = -1 2 x 2 u (x), with x ∈ (0, ∞). In the case of a put option: u T (x) = (K -x) + we may take H := L 2 (R + ). For v ∈ D(0, ∞) and u sufficiently smooth we have that - 1 2 ∞ 0 x 2 u (x)dx = a(u, v) with (E.1) a(u, v) := 1 2 ∞ 0 x 2 u (x)v (x)dx + ∞ 0 xu (x)v(x)dx. This bilinear form a is continuous and semi coercive over the set (E.2) V := {u ∈ H; xu (x) ∈ H}. It is easily checked that ū(x) := x -1/3 /(1 + x) belongs to V . So, some elements of V are unbounded near zero. We now claim that D(0, ∞) is a dense subset of V . First, it follows from a standard truncation argument and the dominated convergence theorem that V ∞ := V ∩ L ∞ (0, ∞) is a dense subset of V . Note that elements of V are continuous over (0, ∞). Given ε > 0 and u ∈ V ∞ , define (E.3) u ε (x) :=    0 if x ∈ (0, ε), u(2ε)(x/ε -1) if x ∈ [ε, 2ε], u(2ε) if x > 2ε. Obviously u ε ∈ V ∞ . By the dominated convergence theorem, u ε → u in H. Set for w ∈ V (E.4) Φ ε (w) := 2ε 0 x 2 w (x) 2 dx. Since Φ ε is quadratic and v ε → u in H, we have that: (E.5) 1 2 ∞ 0 x 2 (u ε -u ) 2 dx = 1 2 Φ ε (u ε -u) ≤ Φ ε (u ε ) + Φ ε (u). Since u ∈ V , Φ ε (u) → 0 and (E.6) Φ ε (u ε ) ≤ u 2 ∞ 2ε 0 ε -2 x 2 dx = O( u 2 ∞ ε). So, the l.h.s. of (E.5) has limit 0 when ε ↓ 0. We have proved that the set V 0 of functions in V ∞ equal to zero near zero, is a dense subset of V . Now define for N > 0 (E. Again for the sake of simplicity we will take ρ(x) = 1, which is well-adapted in the case of a payoff with compact support in (0, ∞). For v ∈ D(0, ∞) and u sufficiently smooth we have that We easily deduce that the bilinear form a is continuous and semi coercive over V, when choosing (E.14) H := {v ∈ L 2 (R + ); (x 1/2 + x -1/2 )v ∈ L 2 (R + )}, Note that then the integrals below are well defined and finite for any v ∈ V: (E.15) ∞ 0 (x 1/2 v )(x -1/2 v) = ∞ 0 vv = 1 2 ∞ 0 (v 2 ) . So w := v 2 is the primitive of an integrable function and therefore has a limit at zero. 
Since v is continuous over (0, ∞) it follows that v has a limit at zero. However if this limit is nonzero we get a contradiction with the condition that x -1/2 v ∈ L 2 (R + ). So, every element of V has zero value at zero. We now claim that D(0, ∞) is a dense subset of V. First, V ∞ := V ∩ L ∞ (0, ∞) is a dense subset of V. Note that elements of V are continuous over (0, ∞). Given ε > 0 This manuscript is for review purposes only. Since Φ ε is quadratic and u ε → u in H, we have that: (E.17) 1 2 ∞ 0 x 2 (u ε -u ) 2 dx = 1 2 Φ ε (u ε -u) ≤ Φ ε (u ε ) + Φ ε (u). Since u ∈ V, Φ ε (u) → 0 and (E.18) Φ ε (u ε ) ≤ ε -2 u(2ε) 2 2ε 0 xdx = 2u(2ε) 2 → 0. So, the l.h.s. of (E.17) has limit 0 when ε ↓ 0. We have proved that the set V 0 of functions in V ∞ equal to zero near zero, is a dense subset of V. Define ϕ N as in (E.7) Given u ∈ V 0 , set u N := uϕ N . As before, u N → u in H, is u N = u ϕ N + uϕ N , xu ϕ N → xu in L 2 (R + ), and it remains to prove that xuϕ N → 0 in L 2 (R + ). But ϕ N is equal to 1/x over its support, so that when N ↑ ∞: (E.19) x 1/2 uϕ N 2 L 2 (R+) = eN N x -1 u 2 (x)dx ≤ ∞ N u 2 (x)dx → 0. The claim is proved. Ω}, where Ψ ∈ V . The strong and weak formulations of the parabolic variational inequality are defined in (B.2) and (B.5) resp. The abstract notion of monotonicity is discussed in appendix B. We denote by K the closure of K in V . Theorem 2.7. (i) Let the assumptions of theorem 2.6 hold, with u T ∈ K. Then the weak formulation (B.5) has a unique solution u in L 2 (0, T ; K) ∩ C(0, T ; H), and the mapping (f, u T ) → u is nondecreasing. 4 . 4 The weight ρ. Classes of weighting functions characterized by their growth are introduced. A major result is the independence of the growth order of the function h on the choice of the weighting function ρ in the class under consideration.4.1. Classes of functions with given growth. In financial models we usually have nonnegative variables and the related functions have polynomial growth. Yet, after a logarithmic transformation, we get real variables whose related functions have exponential growth. This motivates the following definitions. Definition 4 . 1 . 41 Let γ and γ belong to R N +1 + by ( 5 Remark 5 . 1 . 551 .2) κk := κ k,N +k , k = 1, . . . , N. We use many times the following rule. For Ω ⊂ R n , where n = 1 + N , u ∈ H 1 (Ω), a, b ∈ L 0 , and vector fields Z[u] := au x1 and Z [u] := bu x2 , we have Z[Z [u]] = a(bu x2 ) x1 = ab x1 u x2 + abu x1x2 , so that 1 - 2 L 2 ( 2 1222 log(x/N ) if x ∈ [N, eN ], 0 if x > eN .Given u ∈ V 0 , set u N := uϕ N . Then u N ∈ H and, by a dominated convergenceargument, u N → u in H. The weak derivative of u N is u N = u ϕ N + uϕ N . By aThis manuscript is for review purposes only.dominated convergence argument, xu ϕ N → xu in L 2 (R + ). It remains to prove that xuϕ N → 0 in L 2 (R + ). But ϕ N is equal to 1/x over its support, so that(E.8) xuϕ N (x)dx → 0 when N ↑ ∞.The claim is proved.E.2. The CIR setting. In the Cox-Ingersoll-Ross model[START_REF] Cox | A theory of the term structure of interest rates[END_REF] the stochastic process satisfies(E.9) ds(t) = θ(µ -s(t))dt + σ √ s dW (t), t ≥ 0We assume the coefficients θ, µ and σ to be constant and positive. The associated PDE is given by (E.10) Au := -θ(µ -x)u -1 2 xσ 2 u = 0 (x, t) ∈ R + × (0, T ), u(x, T ) = u T (x) x ∈ R + . 
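Purely as an illustration of the dynamics (E.9) behind the operator A in (E.10), the following minimal sketch simulates one trajectory of the CIR process with a full-truncation Euler scheme; the class name, the discretization scheme and all parameter values are our own choices for the example and play no role in the analysis.

import java.util.Random;

// Full-truncation Euler scheme for the CIR process (E.9):
//   ds(t) = theta*(mu - s(t)) dt + sigma*sqrt(s(t)) dW(t).
// The square root is applied to max(s, 0) so that the scheme stays
// well defined even if the discretized path becomes slightly negative.
public class CirSimulation {

    static double[] simulate(double theta, double mu, double sigma,
                             double s0, double T, int steps, long seed) {
        double[] s = new double[steps + 1];
        s[0] = s0;
        double dt = T / steps;
        Random rng = new Random(seed);
        for (int k = 0; k < steps; k++) {
            double sPos = Math.max(s[k], 0.0);          // full truncation
            double dW = Math.sqrt(dt) * rng.nextGaussian();
            s[k + 1] = s[k] + theta * (mu - sPos) * dt
                            + sigma * Math.sqrt(sPos) * dW;
        }
        return s;
    }

    public static void main(String[] args) {
        // Illustrative parameter values only (not taken from the paper).
        double[] path = simulate(1.0, 0.05, 0.2, 0.03, 1.0, 1000, 42L);
        System.out.printf("s(0) = %.4f, s(T) = %.4f%n", path[0], path[path.length - 1]);
    }
}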
∞0uu Au(x)v(x)dx = a(u, v) with (E.11) a(u, v) := θ ∞ 0 (µ -x)u (x)v(x)dx + (x)v(x)dx.So one should take V of the form (E.12)V := {u ∈ H; √ xu (x) ∈ L 2 (R + )}.We next determine H by requiring that the bilinear form is continuous; by the Cauchy-(x)v(x)dx ≤ x 1/2 u 2 x -1/2 v 2 ; ∞ 0 xu (x)v(x)dx ≤ x 1/2 u 2 x 1/2 v 2 . and u ∈ V ∞ , define u ε (x) as in (E.3). Then u ε ∈ V ∞ . By the dominated convergence theorem, u ε → u in H. Set for w ∈ V (E.16) Φ ε (w) := 2ε 0 xw (x) 2 dx. This manuscript is for review purposes only. The first author was suported by the Laboratoire de Finance des Marchés de l'Energie, Paris, France. Both authors were supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with Gaspard Monge Program for optimization, operations research and their interactions with data sciences. Appendix A. Regularity results by Lions and Magenes [START_REF] Lions | Non-homogeneous boundary value problems and applications[END_REF]Ch. 1]. Let H be a Hilbert space identified with its dual and scalar product denoted by (•, •). Let V be a Hilbert space, densely and continuously embedded in H, with duality product denoted by and that for any u, v in W (0, T ), and 0 ≤ t < t ≤ T , the following integration by parts formula holds: This manuscript is for review purposes only. Let A(t) ∈ L ∞ (0, T ; L(V, V * )) satisfy the hypotheses of uniform continuity and semicoercivity, i.e., for some α > 0, λ ≥ 0, and c > 0: Proposition A.1 (first parabolic estimate). The parabolic equation (A.6) has a unique solution u ∈ W (0, T ), and for some c > 0 not depending on (f, u T ): We next derive a stronger result with the hypothesis of semi-symmetry below: is measurable with range in H, and for positive numbers α 0 , c A,1 : Proposition A.2 (second parabolic estimate). Let (A.8) hold. Then the solution u ∈ W (0, T ) of (A.6) belongs to L ∞ (0, T ; V ), u belongs to L 2 (0, T ; H), and for some c > 0 not depending on (f, u T ): Appendix B. Parabolic variational inequalities. Let K ⊂ V be a non-empty, closed and convex set, K be the closure of K in H, We consider parabolic variational inequalities as follows: find u ∈ W (0, T H This manuscript is for review purposes only.
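To connect the time discretization (D.1)-(D.3) of the American option problem with the Black-Scholes setting of E.1, here is a minimal numerical sketch (zero interest rate, unit volatility, so that Au = -(1/2) x^2 u''): at each backward step the implicit Euler system is solved on a uniform grid by the Thomas algorithm and the result is projected onto the obstacle, u^k = max(û^k, Ψ), as in (D.2). The spatial truncation at x_max, the Dirichlet boundary values and the grid sizes are ad-hoc choices made for the illustration, not part of the analysis.

// American put with payoff Psi(x) = max(K - x, 0) under the Black-Scholes
// dynamics of Section E.1 (zero interest rate, unit volatility), i.e.
// A u = -(1/2) x^2 u_xx, using the scheme (D.1)-(D.3):
//   (hat u^k - u^{k+1})/h + A hat u^k = 0,   u^k = max(hat u^k, Psi).
public class AmericanPutObstacle {

    // Thomas algorithm for a tridiagonal system a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i.
    static double[] solveTridiag(double[] a, double[] b, double[] c, double[] d) {
        int n = d.length;
        double[] cp = new double[n], dp = new double[n], x = new double[n];
        cp[0] = c[0] / b[0];
        dp[0] = d[0] / b[0];
        for (int i = 1; i < n; i++) {
            double m = b[i] - a[i] * cp[i - 1];
            cp[i] = c[i] / m;
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
        }
        x[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--) x[i] = dp[i] - cp[i] * x[i + 1];
        return x;
    }

    public static void main(String[] args) {
        double K = 1.0, T = 1.0, xMax = 4.0;   // ad-hoc truncation of the domain
        int N = 400, M = 400;                  // space and time steps
        double dx = xMax / N, h = T / M;

        double[] x = new double[N + 1], psi = new double[N + 1], u = new double[N + 1];
        for (int i = 0; i <= N; i++) {
            x[i] = i * dx;
            psi[i] = Math.max(K - x[i], 0.0);
            u[i] = psi[i];                     // u^M = max(Psi, 0) = Psi
        }

        // Interior unknowns are i = 1..N-1; Dirichlet values u(0) = K, u(xMax) = 0.
        int n = N - 1;
        double[] a = new double[n], b = new double[n], c = new double[n], d = new double[n];
        for (int k = M - 1; k >= 0; k--) {
            for (int i = 1; i < N; i++) {
                double coef = 0.5 * h * x[i] * x[i] / (dx * dx);
                a[i - 1] = -coef;
                b[i - 1] = 1.0 + 2.0 * coef;
                c[i - 1] = -coef;
                d[i - 1] = u[i];
            }
            d[0] += (0.5 * h * x[1] * x[1] / (dx * dx)) * K;  // left boundary u(0) = K
            // right boundary value is 0, so no correction is needed on d[n-1]
            double[] uHat = solveTridiag(a, b, c, d);
            u[0] = K;
            u[N] = 0.0;
            for (int i = 1; i < N; i++) u[i] = Math.max(uHat[i - 1], psi[i]);  // projection (D.2)
        }
        int iSpot = (int) Math.round(1.0 / dx);   // report the value at x = 1 (at the money)
        System.out.printf("American put value at x = 1: %.4f%n", u[iSpot]);
    }
}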
50,645
[ "833418", "1759" ]
[ "56041", "418113", "56041" ]
01761754
en
[ "info" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01761754/file/SphericalDrawing_Hal.pdf
Luca Castelli Gaspard Denis email: [email protected] Éric Fusy email: [email protected] Fast spherical drawing of triangulations: an experimental study of graph drawing tools * We consider the problem of computing a spherical crossing-free geodesic drawing of a planar graph: this problem, as well as the closely related spherical parameterization problem, has attracted a lot of attention in the last two decades both in theory and in practice, motivated by a number of applications ranging from texture mapping to mesh remeshing and morphing. Our main concern is to design and implement a linear time algorithm for the computation of spherical drawings provided with theoretical guarantees. While not being aesthetically pleasing, our method is extremely fast and can be used as initial placer for spherical iterative methods and spring embedders. We provide experimental comparison with initial placers based on planar Tutte parameterization. Finally we explore the use of spherical drawings as initial layouts for (Euclidean) spring embedders: experimental evidence shows that this greatly helps to untangle the layout and to reach better local minima. Introduction In this work we consider the problem of computing in a fast and robust way a spherical layout (crossing-free geodesic spherical drawing) of a genus 0 simple triangulation. While several solutions have been developed in the computer graphics and geometry processing communities [START_REF] Aigerman | Spherical orbifold tutte embeddings[END_REF][START_REF] Aigerman | Orbifold tutte embeddings[END_REF][START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF][START_REF] Friedel | Unconstrained spherical parameterization[END_REF][START_REF] Gotsman | Fundamentals of spherical parameterization for 3d meshes[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF][START_REF] Sheffer | Robust spherical parameterization of triangular meshes[END_REF][START_REF] Zayer | Curvilinear spherical parameterization[END_REF], very few works attempted to test the practical interest of standard tools [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Chambers | Drawing graphs in the plane with a prescribed outer face and polynomial area[END_REF][START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Kobourov | Non-euclidean spring embedders[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] from graph drawing developed for the non-planar (or non-Euclidean) case. On one hand, force-directed methods and iterative solvers are successful to obtain very nice layouts achieving several desirable aesthetic criteria, such as uniform edge lengths, low angle distortion or even the preservation of symmetries. Their main drawbacks rely on the lack of rigorous theoretical guarantees and on their expensive runtime costs, since their implementation requires linear solvers (for large sparse matrices) or sometimes non-linear optimization methods, making these approaches slower and less robust than combinatorial graph drawing tools. 
On the other hand, some well known tools such as linear-time grid embeddings [START_REF] De Fraysseix | How to draw a planar graph on a grid[END_REF][START_REF] Schnyder | Embedding planar graphs on the grid[END_REF] are provided with worst-case theoretical guarantees allowing us to compute in a fast and robust way a crossing-free layout with bounded resolution: just observe that their practical performances allow processing several millions of vertices per second on a standard (single-core) CPU. Unfortunately, the resulting layouts are rather unpleasing and fail to achieve some basic aesthetic criteria that help readability (they often have long edges and large clusters of tiny triangles). Motivation. It is commonly assumed that starting from a good initial layout (referred to as initial guess in [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF]) is crucial for both iterative methods and spring embedders. A nice initial configuration, that is closer to the final result, should help to obtain nicer layouts (this was explored in [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF] for the planar case). This could be even more relevant for the spherical case, where an initial layout having many edge-crossings can be difficult to unfold in order to obtain a valid spherical drawing. Moreover, the absence of natural constraints on the sphere prevents in some cases from eliminating all crossings before the layouts collapse to a degenerate configuration. One of the motivations of this work is to get benefit of a prior knowledge of the graph structure: if its combinatorics is known in advance, then one can make use of fast graph drawing tools and compute almost instantaneously a crossing-free layout to be used as starting point for running more expensive force-directed tools. Related works. A first approach for computing a spherical drawing consists in projecting a (convex) polyhedral representation of the input graph on the unit sphere: one of the first works [START_REF] Shapiro | Polyhedron realization for shape transformation[END_REF] provided a constructive version of Steinitz theorem (unfortunately its time complexity was quadratic). Another very simple approach consists in planarizing the graph and to apply well known tools from mesh parameterizations (see Section 2.1 for more details): the main drawback is that, after spherical projection, the layout does not always remain crossing-free. Along another line of research, several works proposed generalizations of the barycentric Tutte parameterization to the sphere. Unlike the planar case, where boundary constraints guarantees the existence of crossing-free layouts, in the spherical case both the theoretical analysis and the practical implementations are much more challenging. 
Several works in the geometry processing community [START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF][START_REF] Friedel | Unconstrained spherical parameterization[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF][START_REF] Zayer | Curvilinear spherical parameterization[END_REF] expressed the layout problem as an energy minimization problem (with non-linear constraints) and proposed a variety of iterative or optimization methods to solve the spherical Tutte equations: while achieving nice results on the tested 3D meshes, these methods lack rigorous theoretical guarantees on the quality of the layout in the worst case (for a discussion on the existence of non degenerate solutions of the spherical Tutte equations we refer to [START_REF] Gotsman | Fundamentals of spherical parameterization for 3d meshes[END_REF]). A very recent work [START_REF] Aigerman | Spherical orbifold tutte embeddings[END_REF] proposed an adaptation of the approach based on the Euclidean orbifold Tutte parameterization [START_REF] Aigerman | Orbifold tutte embeddings[END_REF] to the spherical case: the experimental results are very promising and come with some theoretical guarantees (a couple of weak assumptions are still necessary to guarantee the validity of the drawing). However the layout computation becomes much more expensive since it involves solving non-linear problems, as reported in [START_REF] Aigerman | Spherical orbifold tutte embeddings[END_REF]. A few papers in the graph drawing domain also considered the spherical drawing problem. Fowler and Kobourov proposed a framework to adapt force-directed methods [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF] to spherical geometry, and a few recent works [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Chambers | Drawing graphs in the plane with a prescribed outer face and polynomial area[END_REF][START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] extend some combinatorial tools to produce planar layouts of non-planar graphs: some of these tools can be combined to deal with the spherical case, as we will show in this work (as far as we know, there are not existing implementations of these algorithms). Our contribution • Our first main contribution is to design and implement a fast algorithm for the computation of spherical drawings. We make use of several ingredients [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] involving the well-known canonical orderings and can be viewed as an adaptation of the shift paradigm proposed by De Fraysseix, Pach and Pollack [START_REF] De Fraysseix | How to draw a planar graph on a grid[END_REF]. As illustrated by our experiments, our procedure is extremely fast, with theoretical guarantees on both the runtime complexity and the layout resolution. 
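Since edges of a spherical drawing are minor arcs of great circles, the following small utility, used only as an illustration of the definitions above, samples the geodesic between two unit vertex positions and returns its angular length; the class and method names are ours, and the degenerate antipodal case is deliberately ignored.

// Utility for geodesic arcs on the unit sphere S^2: the geodesic between two
// non-antipodal unit vectors p and q is the minor arc of the great circle
// obtained by intersecting S^2 with the plane through the origin, p and q.
public class GeodesicArc {

    static double dot(double[] p, double[] q) { return p[0]*q[0] + p[1]*q[1] + p[2]*q[2]; }

    // Geodesic (angular) length of the arc between unit vectors p and q.
    static double geodesicLength(double[] p, double[] q) {
        double c = Math.max(-1.0, Math.min(1.0, dot(p, q)));  // clamp for numerical robustness
        return Math.acos(c);
    }

    // Point on the minor arc from p to q at parameter t in [0,1] (spherical linear interpolation).
    static double[] slerp(double[] p, double[] q, double t) {
        double omega = geodesicLength(p, q);
        if (omega < 1e-12) return p.clone();                  // coincident endpoints
        double s = Math.sin(omega);
        double wp = Math.sin((1 - t) * omega) / s, wq = Math.sin(t * omega) / s;
        return new double[] { wp*p[0] + wq*q[0], wp*p[1] + wq*q[1], wp*p[2] + wq*q[2] };
    }

    public static void main(String[] args) {
        double[] p = {1, 0, 0}, q = {0, 1, 0};
        System.out.println("arc length = " + geodesicLength(p, q));   // pi/2
        double[] mid = slerp(p, q, 0.5);
        System.out.printf("midpoint = (%.3f, %.3f, %.3f)%n", mid[0], mid[1], mid[2]);
    }
}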
• While not being aesthetically pleasing (as in the planar case), our layouts can be use as initial vertex placement for iterative parameterization methods [START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] or spherical spring embedders [START_REF] Kobourov | Non-euclidean spring embedders[END_REF]. Following the approach suggested by Fowler and Kobourov [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF], we compare our combinatorial algorithm with two standard initial placers used in previous existing works [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF][START_REF] Zayer | Curvilinear spherical parameterization[END_REF] relying on Tutte planar parameterizations: our experimental evaluations involve runtime performances and statistics concerning edge lengths. • As an application, we show in Section 5 how spherical drawings can be used as initial layouts for (Euclidean) spring embedders: as illustrated by our tests, starting from a spherical drawing greatly helps to entangle the layout and to escape from bad local minima. All our results are provided with efficient implementations and experimental evaluations on a wide collection of real-world and synthetic datasets. Preliminaries Planar graphs and spherical drawings. In this work we deal with planar maps (graphs endowed with a combinatorial planar embedding), and we consider in particular planar triangulations which are simple genus 0 maps where all faces are triangles (they correspond to the combinatorics underlying genus 0 3D triangle meshes). Given a graph G = (V, E) we denote by n = |V | (resp. by |F (G)|) the number of its vertices (resp. faces) and by N (v i ) the set of neighbors of vertex v i ; x(v i ) will denote the Euclidean coordinates of vertex v i . The notion of planar drawings can be naturally generalized to the spherical case: the main difference is that edges are mapped to geodesic arcs on the unit sphere S 2 , which are minor arcs of great circles (obtained as intersection of S 2 with an hyperplane passing through the origin). A geodesic drawing of a map should preserve the cyclic order of neighbors around each vertex (such an embedding is unique for triangulations, up to reflexions of the sphere). As in the planar case, we would aim to obtain crossing-free geodesic drawings, where geodesic arcs do not intersect (except at their extremities). In the rest of this work we will make use of the term spherical drawings when referring to drawings satisfying the requirements above. Sometimes, the weaker notion of spherical parameterization (an homeomorphism between an input mesh and S 2 ) is considered for dealing with applications in the geometry processing domain (such as mesh morphing): while the bijectivity between the mesh and S 2 is guaranteed, there are no guarantees that the triangle faces are mapped to spherical triangles with no overlaps (obviously a spherical drawing leads to a spherical parameterization). Initial Layouts Part of this work will be devoted to compare our drawing algorithm (Section 3) to two spherical parameterization methods involving Tutte planar parameterization: both methods have been used as initial placers for more sophisticated iterative spherical layout algorithms. Inverse Stereo Projection layout (ISP). 
For the first initial placer, we follow the approach suggested in [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] (see Fig. 1). The faces of the input graph G are partitioned into two components homeomorphic to a disk: this is achieved by computing a vertex separator defining a simple cycle of small size (having O( √ n) vertices) whose removal produces a balanced partition (G S , G N ) of the faces of G. The two graphs G S and G N are then drawn in the plane using Tutte's barycentric method: boundary vertices lying on the separator are mapped on the unit disk. Combining a Moebius inversion with the inverse of a stereographic projection we obtain a spherical parameterization of the input graph: while preserving some of the aesthetic appeal of Tutte's planar drawings, this map is bijective but cannot produce in general a crossing-free spherical drawing (straight-line segments in the plane are not mapped to geodesics by inverse stereographic projection). In our experiments we adopt a growing-region heuristic to compute a simple separating cycle: while not having theoretical guarantees, our approach is simple to implement and very fast, achieving balanced partitions in practice (separators are of size roughly Θ( √ n) and the balance ratio = min(|F (G S )|,|F (G N )|) |F (G)| is always between 0.39 and 0.49 for the tested data) 1 . Polar-to-Cartesian layout (PC). The approach adopted in [START_REF] Zayer | Curvilinear spherical parameterization[END_REF] consists in planarizing the graph by cutting the edges along a simple path from a south pole v S to a north pole v N . A planar rectangular layout can be computed by applying standard Tutte parameterization with respect to the azimuthal angle θ ∈ (0, 2π) and to the polar angle φ ∈ [0, π]: the spherical layout, obtained by the polar-to-cartesian projection, is bijective but not guaranteed to be crossing-free. Spherical drawings and parameterizations The spherical layouts described above can used as initial guess for more sophisticated iterative schemes and force-directed methods for computing spherical drawings. For the sake of completeness we provide an overview of the algorithms that will be tested in Section 4. Iterative relaxation: projected Gauss-Seidel. The first method can be viewed as an adaptation of the iterative scheme solving Tutte equations (see Fig. 1). This scheme consists in moving points on the sphere in tangential direction in order to minimize the spring energy E = 1 2 n i=1 j∈N (i) w ij x(v i ) -x(v j ) 2 (1) with the only constraint x(v i ) = 1 for i = 1 . . . n (in this work we consider uniform weights w ij , as in Tutte's work). As opposite to the planar case, there are no boundary constraints on the sphere, which makes the resulting layouts collapse in many cases to degenerate solutions. As observed in [START_REF] Gotsman | Fundamentals of spherical parameterization for 3d meshes[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] this method does not always converge to a valid spherical drawing, and its practical performance strongly depends on the geometry of the starting initial layout x 0 . 
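A minimal sketch of the projected Gauss-Seidel relaxation just described (and given as pseudo-code in Fig. 1): each vertex is relaxed towards the uniform barycenter of its neighbours with parameter λ and re-projected onto S^2. The data structures (adjacency lists, position array) and the stopping test are simplifications made for the example, and, as discussed above, nothing prevents the iteration from collapsing to a degenerate configuration.

// Projected Gauss-Seidel relaxation for the spherical Tutte energy (1) with
// uniform weights: x_i <- normalize((1-lambda) x_i + lambda * mean of neighbours).
// pos[i] is the current unit-length position of vertex i, adj[i] its neighbour list.
public class ProjectedGaussSeidel {

    static void normalize(double[] v) {
        double n = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        v[0] /= n; v[1] /= n; v[2] /= n;
    }

    // One sweep over all vertices; returns the largest vertex displacement.
    static double sweep(double[][] pos, int[][] adj, double lambda) {
        double maxMove = 0.0;
        for (int i = 0; i < pos.length; i++) {
            double[] s = new double[3];
            for (int j : adj[i]) {                       // uniform weights w_ij = 1/deg(i)
                s[0] += pos[j][0]; s[1] += pos[j][1]; s[2] += pos[j][2];
            }
            int deg = adj[i].length;
            double[] p = new double[3];
            for (int k = 0; k < 3; k++)
                p[k] = (1 - lambda) * pos[i][k] + lambda * s[k] / deg;
            normalize(p);                                // projection back onto the sphere
            double dx = p[0]-pos[i][0], dy = p[1]-pos[i][1], dz = p[2]-pos[i][2];
            maxMove = Math.max(maxMove, Math.sqrt(dx*dx + dy*dy + dz*dz));
            pos[i] = p;
        }
        return maxMove;
    }

    static void relax(double[][] pos, int[][] adj, double lambda, double eps, int maxIter) {
        for (int r = 0; r < maxIter; r++) {
            if (sweep(pos, adj, lambda) <= eps) break;   // stop when vertices barely move
        }
    }
}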
While not having theoretical guarantees, this method is quite fast allowing to quickly decrease the residual error: it thus can be used in a first phase and combined with more stable iterative schemes leading in practice to better convergence results [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] (still lacking of rigorous theoretical guarantees). Alexa's method In order to avoid the collapse of the layout, without introducing artificial constraints, Alexa [START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF] modified the iterative relaxation above by penalizing long edges (that tend to move all vertices in a same hemisphere). More precisely, the vertex v i is moved according to a displacement i = c 1 deg(v i ) j (x(v i ) -x(v j )) x(v i ) -x(v j ) and then reprojected on the sphere. The parameter c regulates the step length, and can be chosen to be proportional to the inverse of the longest edge incident to a vertex, improving the convergence speed. (Spherical) Spring Embedders. While spring embedders are originally designed to produce 2D or 3D layouts, one can adapt them to non euclidean geometries. We have implemented the standard spring-electrical model introduced in [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF] (referred to as FR), and the spherical version following the framework described by Kobourov and Wampler [START_REF] Kobourov | Non-euclidean spring embedders[END_REF] (called Spherical FR). As in [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF] we compute attractive forces (between adjacent vertices) and repulsive forces (for any pair of vertices) acting on vertex u, defined by: F a (u) = (u,v)∈E x(u) -x(v) K (x(u) -x(v)), F r (u) = v∈V,v =u -CK 2 (x(v) -x(u)) x(u) -x(v) 2 where the values C (the strength of the forces) and K (the optimal distance) are scale parameters. In the spherical case, we shift the repulsive forces by a constant term, making the force acting on pairs of antipodal vertices zero. 3 Fast spherical embedding with theoretical guarantees: SFPP layout We now provide an overview of our algorithm for computing a spherical drawing of a planar triangulation G in linear time, called SFPP layout (the main steps are illustrated in Fig 2). We make use of an adaptation of the shift method used in the incremental algorithm of de Fraysseix, Pach and Pollack [START_REF] De Fraysseix | How to draw a planar graph on a grid[END_REF] (referred to as FPP layout): our solution relies on the combination of several ideas developed in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF]. For the sake of completeness, a more detailed presentation in given the Appendix. Mesh segmentation. Assuming that there are two non-adjacent faces f N and f S , one can find 3 disjoint and chord-free paths P 0 , P 1 and P 2 from f S to f N (planar triangulations are 3-connected). Denote by u N 0 , u N 1 and u N 2 the three vertices of f N on P 0 , P 1 and P 2 (define similarly the three neighbors u S 0 , u S 1 , u S 2 of the face f S ). We first compute a partition of the faces of G into 3 regions, cutting G along the paths above and removing f S and f N . 
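Before detailing the segmentation, here is a minimal sketch of one iteration of Alexa's relaxation recalled above: each vertex receives a displacement built from its incident edge vectors weighted by their lengths (so long edges contribute more) and is then re-projected onto the sphere. The sign of the update (moving the vertex against Δ_i, i.e. towards its neighbours) and the choice of the step constant c as the inverse of the longest incident edge are our reading of the description and should be treated as assumptions.

// One Jacobi-style iteration of Alexa's relaxation on the sphere: the displacement
// of vertex i is Delta_i = (c / deg(i)) * sum_j (x_i - x_j) * ||x_i - x_j||, and the
// vertex is moved against Delta_i (towards its neighbours) and re-projected onto S^2.
public class AlexaRelaxation {

    static double norm(double[] v) { return Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]); }

    static void iterate(double[][] pos, int[][] adj) {
        double[][] next = new double[pos.length][3];
        for (int i = 0; i < pos.length; i++) {
            double[] delta = new double[3];
            double longest = 1e-12;
            for (int j : adj[i]) {
                double[] e = { pos[i][0]-pos[j][0], pos[i][1]-pos[j][1], pos[i][2]-pos[j][2] };
                double len = norm(e);
                longest = Math.max(longest, len);
                for (int k = 0; k < 3; k++) delta[k] += e[k] * len;   // long edges weigh more
            }
            double c = 1.0 / longest;                  // assumption: step ~ 1 / longest incident edge
            for (int k = 0; k < 3; k++)
                next[i][k] = pos[i][k] - (c / adj[i].length) * delta[k];
            double n = norm(next[i]);                  // re-projection onto the sphere
            for (int k = 0; k < 3; k++) next[i][k] /= n;
        }
        for (int i = 0; i < pos.length; i++) pos[i] = next[i];
    }
}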
We thus have three quasi-triangulations G C 0 , G C 1 and G C 2 that are planar maps whose inner faces are triangles, and where the edges on the outer boundary are partitioned into four sides. The first pair of opposite sides only consist of an edge (drawn as vertical segment in Fig. 2), while the remaining pair of opposite sides contains vertices lying on P i and P i+1 respectively (indices being modulo 3): according to these definitions, G C i and G C i+1 share the vertices lying on P i+1 (drawn as a path of on horizontal segments in Fig. 2). Grid drawing of rectangular frames. We apply the algorithm described in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF] to obtain three rectangular layouts of G C 0 , G C 1 and G C 2 : this algorithm first separates each G C i into two sub-graphs by removing a so-called river : an outer-planar graph consisting of a face-connected set of triangles which corresponds to a simple path in the dual graph, starting at f S and going toward f N . The two-subgraphs are then processed making use of the canonical labeling defined in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF]: the resulting layouts are stretched and then merged with the set of edges in the river, in order to fit into a rectangular frame. Just observe that in our case a pair of opposite sides only consists of two edges, which leads to an algorithm considerably simpler to implement in practice. Finally, we apply the two-phases adaptation of the shift algorithm described in [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF] to obtain a planar grid drawing of each map G C i , such that the positions of vertices on the path P i in G C i do match the positions of corresponding vertices on P i in G C i+1 . The grid size of drawing of G C i is O(n) × O(n) (using the fact that the two opposite sides (u N i , . . . , u S i ) and (u N i+1 , . . . , u S i+1 ) of G C i are at distance 1). Spherical layout. To conclude, we glue together the drawings of G C 0 , G C 1 and G C 2 computed above in order to obtain a drawing of G on a triangular prism. By a translation within the 3D ambient space we can make the origin coincides with the center of mass of the prism (upon seeing it as a solid polyhedron). Then a central projection from the origin maps each vertex on M to a point on the sphere: each edge (u, v) is mapped to a geodesic arc, obtained by intersecting the sphere with the plane passing trough the origin and the segment relying u and v on the prism (crossings are forbidden since the map is bijective). Theorem 1. Let G be a planar triangulation of size n, having two non-adjacent faces f S and f N . Then one can compute in O(n) time a spherical drawing of G, where edges are drawn as (non-crossing) geodesic arcs of length at least Ω( 1 n ). Some heuristics. We use as last initial placer our combinatorial algorithm of Section 3. For the computation of the three disjoint paths P 0 , P 1 and P 2 , we adopt again an heuristic based on a growing-region approach: while not having theoretical guarantees on the quality of the partition and the length of the paths, our results suggest that well balanced partitions are achieved for most tested graphs. A crucial point to obtain a nice layout resides in the choice of the canonical labeling (its computation is performed with an incremental approach based on vertex removal). 
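Going back to the spherical layout step described above, the projection reduces to very little code once 3D coordinates on the prism are available; the sketch below only illustrates the recentering followed by the central projection, and it approximates the center of mass of the solid prism by the vertex centroid, which is a simplification.

// Central projection of a prism layout onto the unit sphere: translate so that
// the origin lies inside the prism, then map every vertex p to p / ||p||.
// Straight segments on the prism are mapped to geodesic arcs by this projection.
public class PrismToSphere {

    static double[][] project(double[][] prismPos) {
        int n = prismPos.length;
        double[] center = new double[3];
        for (double[] p : prismPos)                      // simplification: vertex centroid
            for (int k = 0; k < 3; k++) center[k] += p[k] / n;

        double[][] onSphere = new double[n][3];
        for (int i = 0; i < n; i++) {
            double[] q = { prismPos[i][0] - center[0],
                           prismPos[i][1] - center[1],
                           prismPos[i][2] - center[2] };
            double norm = Math.sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2]);
            onSphere[i] = new double[] { q[0]/norm, q[1]/norm, q[2]/norm };
        }
        return onSphere;
    }
}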
A bad canonical labeling could lead to unpleasant configurations, where a large number of vertices on the boundaries of the bottom and top sub-regions of each graph G i are drawn along the same direction: as side effects, a few triangles use a lot of area, and the set of interior chordal edges in the river can be highly stretched, especially those close to the south and north poles. To partially address this problem, we design a few heuristics during the computation of the canonical labeling, in order to obtain more balanced layouts. Firstly, we delay the conquest of the vertices which are close to the south and north poles: this way these extremal vertices are assigned low labels (in the canonical labeling), leading to smaller and thicker triangles close to the poles. Moreover the selection of the vertices is done so as to keep the height of the triangle caps more balanced in the final layout. Finally, we adjust the horizontal stretch of the edges, to get more equally spaced vertices on the paths P 0 , P 1 and P 2 . Experimental results and comparison Experimental settings and datasets. In order to obtain a fair comparison of runtime performances, we have written pure Java implementations of all algorithms and drawing methods presented in this work. 2 Our tests involves several graphs including the 1skeleton of 3D models (made available by the AIM@SHAPE repository) as well as random planar triangulations obtained with an uniform random sampler [START_REF] Poulalhon | Optimal coding and sampling of triangulations[END_REF]. In our tests we take as an input the combinatorial structure of a planar map encoded in OFF format: nevertheless we do not make any assumption on the geometric realization of the input triangulation in 2D or 3D space. Moreover, observe that the fact of knowing the combinatorial embedding of the input graph G (the set of its faces) is a rather weak assumption, since such an embedding is essentially unique for planar triangulations and it can be easily retrieved from the graph connectivity in linear time [START_REF] Nagamochi | A simple recognition of maximal planar graphs[END_REF]. We run our experiments on a HP EliteBook, equipped with an Intel Core i7 2.60GHz (with Ubuntu 16.04, Java 1.8 64-bit, using a single core, and 4GB of RAM for the JVM). Quantitative evaluation of aesthetic criteria In order to obtain a quantitative evaluation of the layout quality we compute the spring energy E defined by Eq. 1 and two metrics measuring the edge lengths and the triangle areas. As suggested in [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF] we compute the average percent deviation of edge lengths, according to el := 1 - 1 |E| e∈E |l g (e) -l avg | max(l avg , l max -l avg ) where l g (e) denotes the geodesic distance of the edge e, and l avg (resp. l max ) is the average geodesic edge length (resp. maximal geodesic edge length) in the layout. In a similar manner Table 1: This table reports the runtime performance of all steps involved in the computation of the SFPP layout obtained with the algorithm of Section 3. The overall cost (red chart) includes the preprocessing phase (computing the three rivers and the canonical labeling) and the layout computation (running the two-phases shift algorithm, constructing and projecting the prism). The last two columns report the timing cost for solving the linear systems for the ISP and PC layouts (see blue/green charts), using the MTJ conjugate gradient solver. All results are expressed in seconds. 
we compute the average percent deviation of triangle areas, denoted by a. The metrics el and a take values in [0 . . . 1], and higher values indicate more uniform edge lengths and triangle areas 3 . Timing performances: comparison The runtime performances reported in Table 1 clearly show that our SFPP algorithm has an asymptotic linear-time behavior and in practice is much faster than other methods based on planar parameterization. For instance the ISP layout adopted in [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] requires to solve large linear systems: among the tested Java libraries (MTJ, Colt, PColt, Jama), we found that the linear solvers of the MTJ have the best runtime performances for the solution of large sparse linear systems (in our tests we run the conjugate gradient solver, setting a numeric tolerance of 10 -6 ). Observe that a slightly better performance can be achieved with more sophisticated schemes or tools (e.g. Matlab solvers) as done in [START_REF] Aigerman | Orbifold tutte embeddings[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF]. Nevertheless the timing cost still remains much larger than ours: as reported in [START_REF] Aigerman | Orbifold tutte embeddings[END_REF] the orbifold parameterization of the dragon graph requires 19 seconds (for solving the linear systems, on a 3.5GHz Intel i7 CPU). Initial layout Projected Gauss-Seidel Alexa method ρ = 0.42 E = Evaluation of the layout quality: interpretation and comparisons All our tests confirm that starting with random vertex locations is almost always a bad choice, since iterative methods lead in most cases to a collapse before reaching a valid spherical drawing (spherical spring embedders do not have this problem, but cannot always eliminate edge crossings, see Fig. 4). Our experiments (see Fig. 3 and4) also confirm two well known facts: Alexa's method is always more robust compared to the projected Gauss-Seidel Alexa method Figure 4: Spherical layouts of a random triangulation with 1K faces. While the projected Gauss-Seidel relaxation always collapse, Alexa method is more robust, but also fails when starting from a random initial layout. When using the ISP, PC or our SFPP layouts Alexa method converges toward a crossing-free layout: starting from the SFPP layout allows getting the same aesthetic criteria as the ISP or the PC layouts (with even less iterations). Spring embedders [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF] (Spherical FR) prevent from reaching a degenerate configuration, but have some difficulties to unfold the layout. The charts on the right show the plot of the energy, edge lengths and areas statistics computed when running 800 iterations of Alexa method (we compute these statistics every 10 iterations). relaxation, and the ISP provides a better starting point compared to the PC layout (one can more often converge towards a non-crossing configuration before collapsing, since vertices are distributed in a more balanced way on the sphere). Layout of mesh-like graphs. When computing the spherical layout of mesh-like structures, the ISP layout seems to be a good choice as initial guess (Fig. 3 show the layout of the dog mesh). The drawing is rather pleasing, capturing the structure of the input graph and being not too far from the final spherical Tutte layout: we mention that the results obtained in our experiments strongly depend on the quality of the separator cycle. 
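Coming back to the statistics of Section 4.1, the edge-length statistic el is straightforward to evaluate on a spherical layout; in the minimal sketch below the geodesic (angular) distance between unit vertex positions is used for l_g(e), and the edge list and position array are assumed to be given (the area statistic a is computed along the same lines).

// Average percent deviation of edge lengths (statistic "el"):
//   el = 1 - (1/|E|) * sum_e |l_g(e) - l_avg| / max(l_avg, l_max - l_avg),
// where l_g(e) is the geodesic length of edge e on the unit sphere.
public class EdgeLengthDeviation {

    static double geodesic(double[] p, double[] q) {
        double c = p[0]*q[0] + p[1]*q[1] + p[2]*q[2];
        return Math.acos(Math.max(-1.0, Math.min(1.0, c)));
    }

    // edges[k] = {i, j} gives the endpoints of the k-th edge; pos[i] has unit length.
    static double el(double[][] pos, int[][] edges) {
        int m = edges.length;
        double[] len = new double[m];
        double sum = 0.0, max = 0.0;
        for (int k = 0; k < m; k++) {
            len[k] = geodesic(pos[edges[k][0]], pos[edges[k][1]]);
            sum += len[k];
            max = Math.max(max, len[k]);
        }
        double avg = sum / m;
        double denom = Math.max(avg, max - avg);
        double dev = 0.0;
        for (double l : len) dev += Math.abs(l - avg) / denom;
        return 1.0 - dev / m;
    }
}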
Our SFPP layout clearly fails to achieve similar aesthetic criteria: nevertheless, even not being pleasing in the first few iterations, it is possible to reach very often a valid final configuration (crossing-free) without collapsing, and whose quality is very close, in terms of energy and edge lengths and area statistics, to the ones obtained starting from the ISP layout, as confirmed by the charts in Fig. 3. As we observed for most of the tested graphs, when starting from the SFPP layout the number of iterations required to reach a spherical drawing with good aesthetics is larger than starting from an ISP layout. But the convergence speed can be slightly better in a few cases: Fig. 3 shows a valid spherical layout computed after 1058 iterations of the Gauss-Seidel relaxation (1190 iterations are required when starting from the ISP layout). The charts in Fig. 3 show that our SFPP has higher values of the edge lengths and area statistics in the first iterations: this reflects the fact that our layout has a polynomial resolution and thus triangles have a bounded aspect ratio and side lengths. In the case of the ISP parameterization there could be a large number of tiny triangles clustered in some small regions (the size of coordinates could be exponentially small as n grows). Layout of random triangulations. When drawing random triangulations the behavior is rather different: the performances obtained starting from our SFPP layout are often better than the ones achieved using the ISP layout. As illustrated by the pictures in Fig. 4) and 6, Alexa's method is able to reach a non-crossing configuration requiring less iterations when starting from our SFPP layout: this is observed in most our experiments, and clearly confirmed by the plots of the energy and statistics el and a that converge faster to the values of the final layout (see charts in Fig. 4). Spherical preprocessing for Euclidean spring embedders In this section we investigate the use of spherical drawings as initial placers for spring embedders in 3D space. The fact of recovering the original topological shape of the graph, at least in the case of graphs that have a clear underlying geometric structure, is an important and well known ability of spring embedders. This occurs for the case of regular graphs used in Geometry Processing (the pictures in Fig. 5 show a few force-directed layouts of the cow graph), and also for many mesh-like complex networks involved in physical and real-world applications (such as the networks made available by the Sparse Matrix Collection [START_REF] Davis | The university of florida sparse matrix collection[END_REF]). In the case of uniformly random embedded graphs (called maps) of a large size n on a fixed surface S, the spring embedding algorithms (applied in the 3D ambient space) yield graph layouts that greatly capture the topological and structural features of the map (the genus of the surface is visible, the "central charge" of the model is reflected by the presence of spikes, etc.), a great variety of such representations can be seen at the very nice simulation gallery of Jérémie Bettinelli ( http://www.normalesup.org/∼bettinel/simul.html). While common software and libraries (e.g. 
GraphViz [START_REF] Ellson | Graphviz -open source graph drawing tools[END_REF], Gephi [START_REF] Bastian | Gephi: An open source software for exploring and manipulating networks[END_REF], GraphStream) for graph visualization provide implementations of many force-directed models, as far as we know they never try to exploit the strong combinatorial structure of surface-like graphs. Discussion of experimental results. Our main goal is to show empirically that starting from a nice initial shape that captures the topological structure of the input graph greatly improves the convergence speed and layout quality. In our first experiments (see Figures 5 and6) we run our 3D implementation of the spring electrical model FR [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF], where we make use of exact force computation and we adopt the cooling system proposed in [START_REF] Walshaw | A multilevel algorithm for force-directed graph-drawing[END_REF] (with repulsive strength C = 0.1). We also perform some tests with the Gephi implementation of the Yifan Hu layout algorithm [START_REF] Hu | Efficient, high-quality force-directed graph drawing[END_REF], which is a more sophisticated spring-embedder with fast approximate calculation of repulsive forces (see the layouts of Fig. 7). In order to quantify the layout quality, we evaluate the number of self-intersections of the resulting 3D shape during the iterative computation process4 . To be more precise, we plot (over the first 100 iterations) the number of triangle faces that have a collision with a non adjacent triangle in 3D space. The charts of Fig. 5 and 6 clearly confirm the visual intuition suggested by pictures: when starting from a good initial shape the force-directed layouts seem to evolve according to an inflating process, which leads to better and faster untangle the graph layout. This phenomenon is observed in all our tests (on several mesh-like graphs and synthetic data): experimental evidence shows that an initial spherical drawing is a good starting point helping the spring embedder to reach nicer layout aesthetics and also to improve the runtime performances. Finally observe that from the computational point of view the computation of a spherical drawing has a negligible cost: iterative schemes (e.g. Alexa method) require O(n) time per iteration, which must be compared to the complexity cost of force-directed methods, requiring between O(n 2 ) or O(n log n) time per iteration (depending on the repulsive force calculation scheme). This is also confirmed in practice, according to the timing costs reported in Fig 5, 6 and 7. Concluding remarks One main feature of our SFPP method is that it always computes a crossing-free layout: unfortunately edge crossings can appear during the beautification process, when running iterative algorithms (projected Gauss-Seidel iteration, Alexa method or more sophisticated schemes). It could be interested to adapt to the spherical case existing methods [START_REF] Simonetto | Impred: An improved force-directed algorithm that prevents nodes from crossing edges[END_REF] (which are designed for the Euclidean case) whose goal is to dissuade edge-crossings: their application could lead to produce a sequence of layouts that converge to the final spherical drawing while always preserving the map. The promising results of Section 5 would suggest that starting from an initial nice layout could lead to faster algorithms and better results for the case of mesh-like structures. 
It could be interesting to investigate whether this phenomenon arises for other classes of graphs, such as quadrangulated or 3-connected planar graphs, or non planar (e.g. toroidal) graphs, for which fast drawing methods also exist [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Gonçalves | Toroidal maps: Schnyder woods, orthogonal surfaces and straight-line representations[END_REF]. f f f f Figure 8: These pictures illustrate the computation of a spherical drawing using our SFPP algorithm. A Appendix A.1 Proof of Theorem 1 For the sake of completeness we provide a detailed description of all the steps of the linear-time algorithm computing a SFPP layout of a triangulation G of the sphere, as sketched in Section 3: we combine and adapt many ingredients developed in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF]. Cutting G along three disjoint paths. Let G be a triangulation on the sphere. We assume that there exist two non adjacent faces f and f (with no common incident vertices). If not, one can force the existence of two such faces by adding a new triangle t within a face (and adding edges so as to triangulate the area between t and the face-contour). The first step is to compute 3 vertex-disjoint chord-free paths that start at each of the 3 vertices of f S and end at each of the 3 vertices of f N . Schnyder woods [START_REF] Schnyder | Embedding planar graphs on the grid[END_REF][START_REF] Brehm | 3-orientations and Schnyder 3-tree-decompositions[END_REF] provide a nice way to achieve this. Taking f as the outer face, where v 0 , v 1 , v 2 are the outer vertices in clockwise (CW) order and inserting a vertex v of degree 3 inside f , we compute a Schnyder wood of the obtained triangulation, and let P 0 , P 1 , P 2 be the directed paths in respective colors 0, 1, 2 starting from v: by well-known properties of Schnyder woods, these paths are chord-free and are disjoint except at v, and they end at the 3 vertices v 0 , v 1 , v 2 of f . Deleting v and its 3 incident edges, (and thus deleting the starting edge in each of P 0 , P 1 , P 2 ) we obtain a triple of disjoint chord-free paths from f to f . Let u 0 , u 1 , u 2 be the vertices on f incident to P 0 , P 1 , P 2 . As in [START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] we call 4ST triangulation a graph embedded in the plane with a polygonal outer contour and with triangular inner faces, with 4 distinguished vertices w 0 , w 1 , w 2 , w 3 (in cw order) incident to the outer face, and such that each of the 4 outer paths delimited by the marked outer vertices is chord-free. The external paths between w 0 and w 1 and between w 2 and w 3 are called vertical, and the two other ones are called horizontal. A 4ST is called narrow if the two vertical paths have only one edge. For i ∈ {0, 1, 2} let G i be the narrow 4ST whose outer contour (indices are taken modulo 3) is made of the path P i , the edge {u i , u i+1 }, the path P i+1 , and the edge {v i , v i+1 }. 
Note that G can be seen as a prism, with f and f as the two triangular faces and with G 0 , G 1 , G 2 occupying the 3 lateral quadrangular faces of the prism. Computing compatible drawings of the 3 components G 0 , G 1 , G 2 . A straight-frame drawing of a 4ST H is a straight-line drawing of H where the outer face contour is an axis-aligned rectangle, with the 4 sides of the rectangle corresponding to the 4 paths along the contour. The interspacevector of each of the 4 paths is the vector giving the lengths (in the drawing) of the successive edges along the path, where the path is traversed from left to right for the two horizontal ones and is traversed from bottom to top for the two vertical ones. In order to obtain a drawing of G on the prism (which then yields a geodesic crossing-free drawing on the sphere, using a central projection), we would like to obtain compatible straight-frame drawings of G 0 , G 1 , G 2 , i.e., such that for i ∈ {0, 1, 2} the interspace-vectors of P i in the drawing of G i and in the drawing of G i-1 are the same. Using an adaptation -given in [START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF]-of the algorithm by Duncan et al. [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF], one gets the following result, where a vector of positive integers is called even if all its components are even, and the total of a vector is the sum of its components: Lemma 1 (from [START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF]). Let H be a narrow 4ST with m vertices. Then one can compute in linear time a straight-frame drawing of H such that the interspace-vectors U = (u 1 , . . . , u p ) and V = (v 1 , . . . , v q ) of the two horizontal external paths are even, and the grid-size of the drawing is bounded by 4m × (4m + 1). Moreover, for any pair U = (u 1 , . . . , u p ) and V = (v 1 , . . . , v q ) of even vectors such that U ≥ U , V ≥ V , and U and V have the same total s, one can recompute in linear time a straight-frame drawing of H such that the interspace vectors of the two horizontal external paths are respectively U and V , and the grid-size is 4s × (4s + 1). For i ∈ {0, 1, 2} let k i be the number of vertices of G i . By the first part of Lemma 1, G i admits a straight-frame drawing -where P i and P i+1 are the two horizontal external paths-such that the interspace-vector U i along P i and the interspace-vector V i+1 along P i+1 are even, and the grid size is bounded by 4k i × (4k i + 1), with k i the number of vertices in G i . We let W i be the vector max(U i , V i ), and let s i be the total of W i , and set s := max(s 0 , s 1 , s 2 ). It is easy to check that s ≤ 8n. We then let W i be obtained from W i by adding s -s i to the last component. Then we set U i and V i to W i . Note that we have U i ≥ U i and V i ≥ V i for i ∈ {0, 1, 2}, and moreover all the vectors U 0 , V 0 , U 1 , V 1 , U 2 , V 2 now have the same total, which is s. We can thus use the second part of Lemma 1 and obtained straight-frame drawings of G i (for i ∈ {0, 1, 2}) on the grid 4s × (4s + 1) where the interspace-vector for the bottom (resp. upper) horizontal external path is U i (resp. is V i+1 ). 
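The compatibility step just described (taking W_i = max(U_i, V_i) componentwise and padding the last entry so that all vectors share the same total s) amounts to simple bookkeeping; the sketch below is a literal transcription of it, with array-based vectors and no assumption on how U_i and V_i are produced by the drawing algorithm.

import java.util.Arrays;

// U[i] is the interspace-vector along P_i in the drawing of G_i (bottom path);
// V[i] is the interspace-vector along P_i in the drawing of G_{i-1} (top path).
// Build the common vectors used to redraw the three components compatibly:
//   W_i = max(U_i, V_i) componentwise, then add (s - total(W_i)) to the last entry,
// where s = max_i total(W_i), so that all vectors share the same total s.
public class InterspaceVectors {

    static int total(int[] w) { return Arrays.stream(w).sum(); }

    static int[][] makeCompatible(int[][] U, int[][] V) {
        int[][] W = new int[3][];
        int s = 0;
        for (int i = 0; i < 3; i++) {
            W[i] = new int[U[i].length];               // U[i] and V[i] index the same path P_i
            for (int k = 0; k < W[i].length; k++)
                W[i][k] = Math.max(U[i][k], V[i][k]);
            s = Math.max(s, total(W[i]));
        }
        for (int i = 0; i < 3; i++)
            W[i][W[i].length - 1] += s - total(W[i]);  // pad last component up to total s
        return W;
    }
}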
Since U i = V i for i ∈ {0, 1, 2}, the drawings of G 0 , G 1 , G 2 are compatible and can thus be assembled to get a drawing of G on the prism (see Figure 8 for an example), which then yields a drawing on the unit sphere using a central projection (with the origin at the center of mass of the prism seen as a solid polyhedron). Note that the prism has its 3 lateral faces of area O(n × n), hence is at distance O(n) from the origin. Since every edge of G drawn on the prism clearly has length at least 1 (as in any straight-line drawing with integer coordinates), we conclude that after the central projection every edge of G has length Ω(1/n), as claimed in Theorem 1. Remark. To improve the distribution of the points on the sphere, one aims at 3 paths P 0 , P 1 , P 2 such that the graphs G 0 , G 1 , G 2 are of similar sizes. A simple heuristic (using the approach based on Schnyder woods mentioned above) is to do the computation for every face f S non-adjacent to f N , and keep the most balanced partition, i.e. the one minimizing the objective parameter Σ 0≤i<j≤2 ||G i | − |G j ||.
Figure 1: (left) Two spherical parameterizations of the gourd graph obtained via Tutte's planar parameterization. (right) The pseudo-code of the Projected Gauss-Seidel method: starting from x^0, sweep over the vertices computing s = (1 − λ) x^r(v i ) + λ Σ j w ij x^r(v j ), and iterate until ||x^r − x^{r−1}|| ≤ ε.
Figure 2: Computation of a spherical drawing based on a prism layout of the gourd graph (326 vertices). Three vertex-disjoint chord-free paths lead to the partition of the faces of G into three regions, which are each separated by one river (green faces). Our variant of the FPP algorithm produces three rectangular layouts, where boundary vertex locations match on identified (horizontal) sides. One can thus glue the planar layouts to obtain a 3D prism: its central projection on the sphere produces a spherical drawing. Edge colors (blue, red and black) are assigned during the incremental computation of a canonical labeling [11], according to the Schnyder wood local rule.
Figure 3: These pictures illustrate the use of different initial placers as starting layouts for two iterative schemes on the dog graph (1480 vertices). For each initial layout, we first run 50 iterations of the projected Gauss-Seidel and Alexa methods, and then we run the two methods until a valid (crossing-free) spherical drawing is reached. The charts below show the energy, area and edge length statistics obtained running 1600 iterations of the projected Gauss-Seidel and Alexa methods.
Figure 5: These pictures illustrate the use of spherical drawings as initial placers for force-directed methods: we compute the layouts of the cow graph (2904 vertices, 5804 faces) using our 3D implementation of the FR spring embedder [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF]. In the charts on the right we plot the number of colliding 3D triangles, over 100 iterations of the algorithm.
Figure 6: These pictures illustrate the use of spherical drawings as initial placers for the 3D version of the FR spring embedder [16], for a random planar triangulation with 5K faces.
Figure 7: The spherical drawings of the graphs in Fig. 5 and 6 are used as initial placers for the Yifan Hu algorithm [19]: we test the implementation provided by Gephi (after rescaling the layout by a factor 1000, we set an optimal distance of 10.0 and a parameter ϑ = 2.0).
The computation of small cycle separators for planar triangulations is a very challenging task. This work does not focus on this problem: we refer to recent results [START_REF] Fox-Epstein | Short and simple cycle separators in planar graphs[END_REF] providing the first practical implementations with theoretical guarantees. Observe that one common metric considered in the geometry processing community is the (angle) distortion: in our case this metric cannot be taken into account, since our input is a combinatorial structure (without any geometric embedding). We compute the intersections between all pairs of non-adjacent triangles with a brute-force algorithm: the runtimes reported in the figures do not count the cost of computing the triangle collisions.
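A possible implementation of the brute-force collision test mentioned above: two non-adjacent triangles are reported as colliding when an edge of one crosses the interior of the other (degenerate coplanar contacts are ignored), using the classical Möller-Trumbore segment/triangle computation. This is our own sketch of such a counter (it counts colliding pairs; counting the faces involved in at least one collision, as plotted in the charts, is an immediate variant), not the code used in the experiments.

// Brute-force count of colliding pairs of non-adjacent triangles in a 3D layout.
// faces[f] = {a, b, c} are vertex indices, pos[v] is the 3D position of vertex v.
public class TriangleCollisions {

    static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static double[] cross(double[] a, double[] b) {
        return new double[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    // Möller-Trumbore test: does the segment [p,q] cross triangle (a,b,c)?
    static boolean segmentHitsTriangle(double[] p, double[] q, double[] a, double[] b, double[] c) {
        double[] dir = sub(q, p), e1 = sub(b, a), e2 = sub(c, a);
        double[] h = cross(dir, e2);
        double det = dot(e1, h);
        if (Math.abs(det) < 1e-12) return false;          // parallel or degenerate: ignored
        double f = 1.0 / det;
        double[] s = sub(p, a);
        double u = f * dot(s, h);
        if (u < 0 || u > 1) return false;
        double[] qv = cross(s, e1);
        double v = f * dot(dir, qv);
        if (v < 0 || u + v > 1) return false;
        double t = f * dot(e2, qv);
        return t >= 0 && t <= 1;                          // intersection lies on the segment
    }

    static boolean collide(int[] f1, int[] f2, double[][] pos) {
        double[][] t1 = {pos[f1[0]], pos[f1[1]], pos[f1[2]]};
        double[][] t2 = {pos[f2[0]], pos[f2[1]], pos[f2[2]]};
        for (int k = 0; k < 3; k++) {
            if (segmentHitsTriangle(t1[k], t1[(k+1)%3], t2[0], t2[1], t2[2])) return true;
            if (segmentHitsTriangle(t2[k], t2[(k+1)%3], t1[0], t1[1], t1[2])) return true;
        }
        return false;
    }

    static boolean adjacent(int[] f1, int[] f2) {          // triangles sharing at least one vertex
        for (int a : f1) for (int b : f2) if (a == b) return true;
        return false;
    }

    static int countCollisions(int[][] faces, double[][] pos) {
        int count = 0;
        for (int i = 0; i < faces.length; i++)
            for (int j = i + 1; j < faces.length; j++)
                if (!adjacent(faces[i], faces[j]) && collide(faces[i], faces[j], pos)) count++;
        return count;
    }
}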
48,017
[ "926531", "1030435", "926532" ]
[ "2071", "2071", "2071" ]
01761771
en
[ "info" ]
2024/03/05 22:32:13
2017
https://hal.science/tel-01761771/file/hdr-rakoto-sep2017.pdf
Didier Henrion Daniel Liberzon Alexandre Dolgui Claude Jard Stephane Lafortune Jean-Jacques J Loiseau Jose-Luis Santi Alice Pascale Ma Camille Emilienne Mère PhD Santi Esteva J Aguilar-Martin J L De La Rosa J Colomer J C Hennet E Garcia J Melendez V Puig Jose-Luis L Villa email: [email protected] M Morari K E Arzen M Duque email: [email protected] A Gauthier email: [email protected] N Rakoto email: [email protected] Eduardo Mojica email: [email protected] P Caines Nicanor Quijano P Riedinger N Rakoto -Supervision M Quijano C Ocampo-Martinez H Gueguen Jean-Sebastien Besse Naly Rakoto-Ravalontsalama Eduardo Mojica-Nava Germán Obando Modelling, Control and Supervision for a Class of Hybrid Systems Keywords: method of moment, optimal control, switched systems Distributed optimisation, resource allocation, multi-agent systems dynamique. Introduction The HDR (Habilitation à Diriger des Recherches) is a French Degree that you get some years after the PhD. It allows the candidate to apply for some University Professor positions and/or to apply for a Research Director position at CNRS. Instead of explaining it in details, the selection phases process after an HDR is summarized with a Petri Net model in Figure 1. After obtaining the HDR Thesis Degree, the candidate is allowed to apply for a Research Director position at CNRS, after a national selection. On the other hand, in order to apply for some University Professor positions, the candidate should first apply for a National Qualifcation (CNU). Once this qualification obtained, the candidate can then apply to some University Professor positions, with a selection specific to each university. This HDR Thesis is an extended abstract of my research work from my PhD Thesis defense in 1993 until now. This report is organized as follows. • Chapter 1 is a Curriculum Vitae I had and am having the following administrative responsabilities at Mines Nantes: • 1997-2000: Last Year's Option AII (Automatique et Informatique Industrielle) • Research Activities My main topics of research are the following: 1. Analysis and control of hybrid and switched systems 2. Supervisory control of discrete-event systems These will be detailed in Chapter 2 and Chapter 3, respectively. The following other topics of research will not be presented. However, the corresponding papers can be found in the Complete List of Publications. • Resource Allocation • Holonic Systems • Inventory Control -Invited Session, Knowledge Based Systems, IEEE ISIC 1999, Cambridge, MA, USA, Sep. 1999 (jointly organized and chaired with Karl-Erik Årzèn). Funded and Submitted Projects -Workshop on G2 Expert System, LAAS-CNRS, Toulouse, France, Oct. 1995 (jointly organized and chaired with Joseph Aguilar-Martin). Complete List of Publications A summary of the papers, classified per year, from 1994 to 2017, is given in the following Chapter 2 Analysis and Control of Hybrid and Switched Systems Modeling and Control of MLD systems Piecewise affine (PWA) systems have been receiving increasing interest, as a particular class of hybrid system, see e.g. [2], [13], [11], [16], [14], [12] and references therein. PWA systems arise as an approximation of smooth nonlinear systems [15] and they are also equivalent to some classes of hybrid systems e.g. Linear complementarity systems [9]. On the other hand Mixed Logical and Dynamical (MLD) systems have been introduced by Bemporad and Morari as a suitable representation for hybrid dynamical systems [3]. 
MLD models are obtained originally from a PWA system, where propositional logic relations are transformed into mixed-integer inequalities involving integer and continuous variables. Mixed-integer optimization techniques are then applied to the MLD system in order to stabilize it on desired reference trajectories under some constraints. Equivalences between PWA systems and MLD models have been established in [9]. More precisely, every well-posed PWA system can be rewritten as an MLD system, assuming that the set of feasible states and inputs is bounded, and a completely well-posed MLD system can be rewritten as a PWA system [9]. Conversion methods from MLD systems to equivalent PWA models have been proposed in [4], [5], [6] and [?]. Vice versa, translation methods from PWA to MLD systems have been studied in [3] (the original one), and then in [8], [?]. A tool that deals with both MLD and PWA systems is HYSDEL [17]. The motivations for studying new methods of conversion from PWA systems into their equivalent MLD models are the following. Firstly, the original motivation for obtaining MLD models is to rewrite a PWA system into a model that allows the designer to use existing optimization algorithms such as mixed-integer quadratic programming (MIQP) or mixed-integer linear programming (MILP). Secondly, there is no unique formulation of PWA systems; we can always address particular cases that introduce some differences in the conversions. Finally, it has been shown that the stability analysis of PWA systems with two polyhedral regions is in general NP-complete or undecidable [7]. The conversion to MLD systems may be another way to tackle this problem.

Piecewise Affine (PWA) Systems

A particular class of hybrid dynamical systems is the system described as follows:

ẋ(t) = A_i x(t) + a_i + B_i u(t)
y(t) = C_i x(t) + c_i + D_i u(t)    (2.1)

where i ∈ I, the set of indexes, x(t) ∈ X_i, a sub-space of the real space R^n, and R_+ is the set of positive real numbers including the zero element. In addition to this equation it is necessary to define how the system switches among its several modes. This equation is affine in the state x, and systems described in this form are called piecewise affine (PWA) systems [15], [9]. The discrete-time version of this equation will be used in this work and can be described as follows:

x(k+1) = A_i x(k) + b_i + B_i u(k)
y(k) = C_i x(k) + d_i + D_i u(k)    (2.2)

where i ∈ I is a set of indexes, X_i is a sub-space of the real space R^n, and the time index ranges over the set of non-negative integers (or a set homeomorphic to Z_+).

Mixed Logical Dynamical (MLD) Systems

The idea in the MLD framework is to represent logical propositions by equivalent mixed-integer expressions. The MLD form is obtained in three steps [3], [4]. The first step is to associate a binary variable δ ∈ {0, 1} with a proposition S that may be true or false; δ is equal to 1 if and only if proposition S is true. A compound proposition of elementary propositions S_1, ..., S_q combined using boolean operators such as AND, OR, NOT may be expressed with integer inequalities over the corresponding binary variables δ_i, i = 1, ..., q. The second step is to replace the products of linear functions and logic variables by a new auxiliary variable z = δ a^T x, where a^T is a constant vector. The variable z is obtained by evaluating mixed linear inequalities.
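Before stating the third step, the following minimal sketch illustrates the first two steps numerically. It assumes a hypothetical scalar state x bounded in [-10, 10] and the proposition x ≤ 0, and checks that big-M inequalities of the type written out below in (2.8) and (2.16) reproduce both the indicator variable δ and the product z = δ·x; the bounds M, m and the tolerance ε are illustrative choices, not values prescribed by the text.

```python
# Numeric check that the big-M encoding reproduces (delta = 1) <=> (x <= 0)
# and z = delta * x for a scalar state x in [-10, 10] (hypothetical example).
M, m, eps = 10.0, -10.0, 1e-6

def delta_feasible(x, delta):
    # Inequalities of the form (2.8), specialised to k1 = 1, k2 = 0, k3 = 0.
    return (x - M * (1 - delta) <= 0) and (-x + eps + (m - eps) * delta <= 0)

def z_interval(x, delta):
    # Inequalities of the form (2.16), specialised to A_i = 1, B_i = 0, f_i = 0.
    lo = max(m * delta, x - M * (1 - delta))
    hi = min(M * delta, x - m * (1 - delta))
    return lo, hi

for x in [-3.0, 0.5, 7.0]:
    admissible = [d for d in (0, 1) if delta_feasible(x, d)]
    assert admissible == [1 if x <= 0 else 0]          # delta encodes the proposition
    lo, hi = z_interval(x, admissible[0])
    assert abs(lo - hi) < 1e-9 and abs(lo - admissible[0] * x) < 1e-9  # z = delta * x
print("big-M encoding reproduces delta and z on the test points")
```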
The third step is to describe the dynamical system, binary variables and auxiliary variables in a linear time invariant system. An hybrid system described in MLD form is represented by Equations (2.3-2.5). x(k + 1) = Ax(k) + B 1 u(k) + B 2 δ(k) + B 3 z(k) (2. 3) y(k) = Cx(k) + D 1 u(k) + D 2 δ(k) + D 3 z(k) (2.4) E 2 δ(k) + E 3 z(k) ≤ E 1 u(k) + E 4 x(k) + E 5 (2.5) where x = [x T c x T l ] ∈ R nc × {0, 1} n l are the continuous and binary states, respectively, u = [u T C u T l ] ∈ R mc × {0, 1} m l are the inputs, y = [y T c y T l ] ∈ R pc × {0, 1} p l the outputs, and δ ∈ {0, 1} r l , z ∈ R rc , represent the binary and continuous auxiliary variables, respectively. The constraints over state, input, output, z and δ variables are included in (2.5). Converting PWA into MLD Systems In this subsection two algorithms for converting PWA systems into MLD systems are given. The first case consists of several sub-affine systems with switching regions are explained in detail. The second case deals with several sub-affine systems, each of them belongs to a region which is described by linear inequalities is a variation of the first case. Each case is applied to an example in order to show the validity of the algorithm. A. Case I The PWA system is represented by the following equations:    x(k + 1) = A i x(k) + B i u(k) + f i y(k) = C i x(k) + D i u(k) + g i S ij = {x, u|k T 1ij x + k T 2ij u + k 3ij ≤ 0} (2.6) where i ∈ I = {1, . . . , n}. The case with jumps can be included in this representation considering each jump as a discrete affine behavior valid during only one sample time. The switching region S ij is a convex polytope which volume, or hypervolume, can be infinite, and the sub-scripts denotes the switching from mode i to mode j. For this purpose we introduce a binary variable δ i for each index of the set I and a binary variable δ i,j for each switching region S ij . In order to gain insight in the following equations, we consider the hybrid the partition and the corresponding automaton is depicted in Figure 2.1. Introductory material on hybrid automata can be found in [1] and [10]. The δ ij variables are not dynamical and, when the elements k in S ij are vectors, the binary variable can be evaluated by the next mixed integer inequality (δ ij = 1) ⇔ (k T 1ij x + k T 2ij u + k 3ij ≤ 0) (2.7) which is equivalent to: { k 1ij x + k 2ij u + k 3ij -M (1 -δ ij ) ≤ 0 -k 1ij x -k 2ij u -k 3ij + ϵ + (m -ϵ)δ ij ≤ 0 (2.8) When the elements k in S ij are matrices, it is necessary to introduce some auxiliary binary variables for each row describing a sub-constraint in S ij in the next form: δ k = 1(⇔ k 1,k x + k 2,k u + k 3,k ≤ 0) δ ij = ∧ k δ k (2.9) which is equivalent to:        k 1ij,k x + k 2ij,k u + k 3ij,k -M (1 -δ ij,k ) ≤ 0 -k 1ij,k x -k 2ij,k u -k 3ij,k + ϵ + (m -ϵ)δ ij,k ≤ 0 δ ij -δ ij,k ≤ 0 ∑ k (δ ij,k -1) -δ ij ≤ -1 (2.10) The binary vector x δ = [δ 1 δ 2 . . . δ n ] T is such that its dynamics is given by: x δi (k + 1) = (x δi (k) ∧ ∧ j̸ =i ¬δij) ∨ ∨ j̸ =i (x δj (k) ∧ δji) (2.11) where k is an index of time, and ∧, ∨, and ¬, are standard for the logical operations AND, OR, NOT, respectively. This equation can be explained as follows: The mode of the system in the next time is i if the current mode is mode i and any switching region is enabled in this time, or, the current mode of the system is j different to i and a switching region that enables the system to go into mode i is enabled. Considering that the PWA system is well posed, i.e. 
for a given initial state [x T i T ] T 0 and a given input u 0,τ there exists only one possible trajectory [x T i T ] T 0,x . That is equivalent to the following conditions: ∑ i∈I x δi = 1, ∏ i∈I x δi = 0 (2.12) The dynamical equations for x δ vector are equivalent to the next integer inequalities:            x δj (k) + δ ji -x δi (k + 1) ≤ 1, ∀i, j ∈ I, i ̸ = j x δi (k) - ∑ j̸ =i δ ij -x δi (k + 1) ≤ 0, ∀i, j ∈ I, i ̸ = j -x δi (k) - ∑ j̸ =i δ ji -x δi (k + 1) ≤ 0, ∀i, j ∈ I, i ̸ = j (2.13) The first inequality states that the next mode of the system should be mode i if the current mode is j different to i and a switching region for going from mode j to mode i is enabled. The second inequality means that the next mode of the system should be mode i if the current mode is i and any switching region for going from mode i into mode j different to i is enabled. And the third equation states that the system cannot be in mode i in the next time if the current mode of the system is not mode i and any switching region for going from mode i, (j different to i), into mode i is enabled. NR HDR 24 This form for finding x δ (k + 1) causes a problem in the final model because it cannot be represented by a linear equation in function of x, u, δ and Z. For this reason, x δ (k + 1) is aggregated to the δ general vector of binary variables, and finally assigned directly to x δ (k + 1). The dynamics and outputs of the system can be represented by the next equations: { x(k + 1) = Ax(k) + Bu(k) + ∑ i∈I (A i x(k) + B i u(k) + f i ) × x δi (k) y(k) = Cx(k) + Du(k) + ∑ i∈I (C i x(k) + D i u(k) + g i ) × x δi (k) (2.14) If we introduce some auxiliary variables: { Z 1i (k) = (A i x(k) + B i u(k) + f i ) × x δi (k) Z 2i (k) = (C i x(k) + D i u(k) + g i ) × x δi (k) (2.15) which are equivalent to:        Z 1i ≤ M x δi (k) -Z 1i ≤ -mx δi (k) Z 1i ≤ A i x(k) + B i u(k) + f i -m(1 -x δi (k)) -Z 1i ≤ -A i x(k) -B i u(k) -f i + M (1 -x δi (k)) (2.16)        Z 2i ≤ M x δi (k) -Z 2i ≤ -mx δi (k) Z 2i ≤ C i x(k) + D i u(k) + g i -m(1 -x δi (k)) -Z 2i ≤ -C i x(k) -D i u(k) -g i + M (1 -x δi (k)) (2.17) where M and m are vectors representing the maximum and minimum values, respectively, of the variables Z, these values can be arbitrary large. Using the previous equivalences, the PWA system ( 2.2) can be rewritten in an equivalent MLD model as follows:                        x(k + 1) = A rr x(k) + A br x δ (k) + B 1r u(k) + B 2r δ + B 3r ∑ i∈I Z 1i (k) x δ (k + 1) = A rb x(k) + A bb x δ (k) + B 1b u(k) + B 2b δ + B 3b ∑ i∈I Z 1i (k) y r (k) = C rr x(k) + C br x δ (k) + D 1r u(k) + D 2r δ + D 3r ∑ i∈I Z 2i (k) y δ (k) = C rb x(k) + C bb x δ (k) + D 1b u(k) + D 2b δ + D 3b ∑ i∈I Z 2i (k) (2.18) s.t. E 2   x δ (k + 1) δ ij δ k   + E 3 Z(k) ≤ E 4   x(k) δ ij δ k   + E 1 u(k) + E 5 (2.19) Using this algorithm, most part of the matrices are zero, because x and y are defined by Z, and x δ is defined by δ. This situation can be avoided by defining the next matrices at the beginning of the procedure:        A = 1 n (A 1 + . . . + A n ), A i = A i -A, ∀i ∈ I B = 1 n (B 1 + . . . + B n ), B i = B i -B, ∀i ∈ I C = 1 n (C 1 + . . . + C n ), C i = C i -C, ∀i ∈ I D = 1 n (D 1 + . . . + D n ), D i = D i -D, ∀i ∈ I (2.20) Finally, the equality matrices in (2.18) and (2.19) can be chosen as follows:        A rr = A, A br = 0 nc×n , B 1r = B, B 2r = 0 nc×(n+m+tk) , B 3r = [I nc×nc 0 nc×pc I nc×nc 0 nc×pc . . . 
I nc×nc 0 nc×pc ] nc×n×(nc+pc) A rb = 0 n×nc , A bb = 0 x×n , B 1b = 0 n×mc , B 2b = [I n×n 0 n×m 0 n×tk ] n×n×(n+m+tk) , B 3b = 0 nc×n×(nc+pc) (2.21) NR HDR 25        C rr = C, C br = 0 pc×n , D 1r = D, D 2r = 0 pc×(n+m+tk) , D 3r = [0 pc×nc I pc×pc 0 pc×nc I pc×pc . . . 0 pc×nc I pc×pc ] pc×n×(nc+pc) C rb = 0 n×nc , C bb = 0 x×n , D 1b = 0 n×mc , D 2b = [I n×n 0 n×m 0 n×tk ] n×n×(n+m) , D 3b = 0 n×n×(nc+pc) (2.22) where n C is the number of continuous state variables, m C the number of continuous input variables, p C the number of continuous output variables, n the number of affine sub-systems, m the number of switching regions and tk the number of auxiliary binary variables. The algorithm for converting a PWA system in the form of (2.1) into its equivalent MLD system can be summarized as follows: B. Algorithm 1 C. Example 1 Consider the system whose behavior is defined by the following PWA model:    x(k + 1) = A i x(k), i ∈ {1, 2} S 1,2 = {(x 1 , x 2 )|(x 1 ≤ 1.3x 2 ) ∧ (0.7x 2 ≤ x 1 ) ∧ (x 2 > 0) S 2,1 = {(x 1 , x 2 )|(x 1 ≤ 0.7x 2 ) ∧ (1.3x 2 ≤ x 1 ) ∧ (x 2 < 0) where A 1 = [ 0.9802 0.0987 -0.1974 0.9802 ] , A 2 = [ 0.9876 -0.0989 0.0495 0.9876 ] The behavior of the system is presented in Figure 2.2. The initial points are (x 10 , x 20 ) = (1, 0.8). We can see that the system switches between the two behaviors, from A 1 to A 2 in the switching region S 1,2 , and from A 2 to A 1 in the switching region S 2,1 , alternatively. The switched system is stable. D. Case 2 Consider now the system whose behavior is defined by the following PWA model: { x(k + 1) = A i x(k) + b i + B i u(k), i ∈ I, x(k) ∈ X i y(k) = C i x(k) + d i + D i u(k), i ∈ I, x(k) ∈ X i (2.23) with conditions X i ∩ X j̸ =i = ∅, ∀i, j ∈ I, ∪ i∈I X i = X, where X is the admissible space for the PWA system, and variables and can be represented using the appropriate δ variables instead of x δ (k) variables in the definition of Z in (2.16) and (2.17). However, note that the conditions X i ∩ X j̸ =i = ∅ ∀i, j ∈ I and ∪ i∈I X i = X require a careful definition in the sub-spaces X i in order to avoid a violation to these conditions in its bounds. On the other hand, the MLD representation uses non-strict inequalities in its representation and the ε value in (2.8) and (2.9) should be chosen appropriately. 
Another way to overcome this situation and to insure an appropriated representation is the use of the following conditions in the bounds of the sub-spaces X i : X i = {x, u|k 1i x + k 2i u + k 3i ≤ 0} δ ij = δ i ⊗ δ j which is equivalent to: { δ i + δ j -1 ≤ 0 1 -δ i -δ j ≤ 0 or more generally        ∑ i∈I δ i -1 ≤ 0 1 - ∑ i∈I δ i ≤ 0 (2.24) We now modify Equations (2.8), (2.10), (2.16), (2.17), (2.21), and (2.22) as follows: { k 1i x + k 2i u + k 3i -M (1 -δ i ) ≤ 0 -k 1i x -k 2i u -k 3i + ϵ + (m -ϵ)δ i ) ≤ 0 (2.25) NR HDR 27        k 1i,k x + k 2i,k u + k 3i,k -M (1 -δ i,k ) ≤ 0 -k 1i,k x -k 2i,k u -k 3i,k + ϵ + (m -ϵ)δ i,k ≤ 0 δ i -δ i,k ≤ 0 ∑ k (δ i,k -1) -δ i ≤ -1 (2.26) The auxiliary variables Z 1i become:        Z 1i ≤ M δ i (k) -Z 1i ≤ -mδ i (k) Z 1i ≤ A i x(k) + B i u(k) + f i -m(1 -δ i (k)) -Z 1i ≤ -A i x(k) -B i u(k) -f i + M (1 -δ i (k)) (2.27) where the matrices A i and B i are those previously defined in Equation (2.20).The auxiliary variable Z 2i is now modified according to the following equations:        Z 2i ≤ M δ i (k) -Z 2i ≤ -mδ i (k) Z 2i ≤ C i x(k) + D i u(k) + g i -m(1 -δ i (k)) -Z 2i ≤ -C i x(k) -D i u(k) -g i + M (1 -δ i (k)) (2.28) where the matrices C i and D i are those that have been defined in Equation (2.20). Finally the matrices from Equation (2.18) can be chosen as follows:        A rr = A, A br = 0 nc×n , B 1rr = B, B 2rb = 0 nc×(n+tk) , B 3rr = [I nc×nc 0 nc×pc I nc×nc 0 nc×pc . . . I nc×nc 0 nc×pc ] nc×n×(nc+pc) C rr = C, C br = 0 pc×n , D 1rr = D, D 1rb = [ ], D 2rb = 0 pc×(n+tk) , D 3rr = [0 pc×nc I pc×pc 0 pc×nc I pc×pc . . . 0 pc×nc I pc×pc ] pc×n×(nc+pc) (2.29) We give now an algorithm that converts a PWA system in the form of (2.23) into its equivalent MLD system. E. Algorithm 2 1. Compute matrices A, B, C, D and A i , B i , C i and D i using (2.20). Initialize E 1 , E 2 , E 3 , E 4 , E 5 matrices. 3. For i = 1 to n include the inequalities using (2.25) or (2.26) that represent the behavior on the n affine regions of the PWA system. 4. For all affine regions include the inequalities in (2.24). 5. For i = 1 to n generate the nc-dimensional Z 1i vector and p c -dimensional Z 2i vector of auxiliary variables Z. 6. For each Z 1i vector introduce the inequalities defined in (2.27). M and m are n c -dimensional vectors of maximum and minimum values of x, respectively. 7. For each Z 2i vector introduce the inequalities defined in (2.28). M and m are p c -dimensional vectors of maximum and minimum values of x, respectively (This completes the inequality matrices). 8. Compute the matrices defined in (2.29) where the binary state variables are removed. 9. End. F. Example 2 Consider the system whose behavior is defined by the following PWA model:    x(k + 1) = A i x(k), i ∈ {1, 2} i = 1 if x 1 x 2 ≥ 0 i = 2 if x 1 x 2 < 0 NR HDR 28 where The behavior of the system is presented in Figure 2.4. The PWA system with linear constraints has 4 sub-affine systems. Algorithm 2 produces an MLD system with 12 binary variables (4 variables for the affine sub-system, and 8 auxiliary variables), 16 auxiliary variables Z and 94 constraints. The behavior of the equivalent MLD system is shown in Figure 2.5. We can notice that the behavior of the MLD system is exactly the same as the original PWA model. A Stability of Switched Systems A polynomial approach to deal with the stability analysis of switched non-linear systems under arbitrary switching using dissipation inequalities is presented. 
It is shown that a representation of the original switched problem into a continuous polynomial system allows us to use dissipation inequalities for the stability analysis of polynomial systems. With this method and from a theoretical point of view, we provide an alternative way to search for a common Lyapunov function for switched non-linear systems. We deal with the stability analysis of switched non-linear systems, i.e., continuous systems with switching signals under arbitrary switching. Most of the efforts in switched systems research have been typically focused on the analysis of dynamical behavior with respect to switching signals. Several methods have been proposed for stability analysis (see [START_REF] Liberzon | Switching in Systems and Control[END_REF], [19], and references therein), but most of them have been focused on switched linear systems. Stability analysis under arbitrary switching is a fundamental problem in the analysis and design of switched systems. For this problem, it is necessary that all the subsystems be asymptotically stable. However, in general, the above stability condition is not sufficient to guarantee stability for the switched system under arbitrary switching. It is well known that if there exists a common Lyapunov function for all the subsystems, then the stability of the switched system is guaranteed under arbitrary switching. Previous attempts for general constructions of a common Lyapunov function for switched non-linear systems have been presented in [20], [21] using converse Lyapunov theorems. Also in [22], a construction of a common Lyapunov function is presented for a particular case when the individual systems are handled sequentially rather than simultaneously for a family of pairwise commuting systems. These methodologies are presented in a very general framework, and even though they are mathematically sound, they are too restrictive from a computational point of view, mainly because it is usually hard to check for the set of necessary conditions for a common function over all the subsystems (it could not exist). Also, these constructions are usually iterative, which involves running backwards in time for all possible switching signals, being prohibitive when the number of modes increases. The main contribution of this topic of stability of switched systems is twofold. First, we present a reformulation of the switched system as an ordinary differential equation on a constraint manifold. This representation opens several possibilities of analysis and design of switched systems in a consistent way, and also with numerical efficiency [C.39], [C.38], which is possible thanks to some tools developed in the last decade for polynomial differential-algebraic equations analysis [8,10]. The second contribution is an efficient numerical method to search for a common Lyapunov function for switched systems using results of stability analysis of polynomial systems based on dissipativity theory [23], [C.39]. We propose a methodology to construct common Lyapunov functions that provides a less conservative test for proving stability under arbitrary switching. It has been mentioned in [26] that the sum of squares decomposition, presented only for switched polynomial systems, can sometimes be made for a system with a non-polynomial vector fields. However, those cases are restricted to subsystems that preserve the same dimension after a recasting process. 
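As a concrete baseline for the common-Lyapunov-function idea invoked above, the sketch below searches for a common quadratic Lyapunov function V(x) = x'Px for the two linear modes of Example 1, by solving the discrete-time LMIs P ≻ 0, A_i'PA_i - P ≺ 0 with CVXPY. This is only the classical quadratic test for linear modes, not the dissipativity/SOS construction described in this section, and nothing is claimed here about whether such a P exists for that pair (Example 1 switches only on S_12 and S_21, whereas a common P addresses arbitrary switching); the script simply reports the solver's verdict.

```python
# Common quadratic Lyapunov function test for the two modes of Example 1
# (baseline LMI test; the chapter's SOS/dissipativity approach generalises it).
import numpy as np
import cvxpy as cp

A1 = np.array([[0.9802, 0.0987], [-0.1974, 0.9802]])
A2 = np.array([[0.9876, -0.0989], [0.0495, 0.9876]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2)]
for A in (A1, A2):
    constraints.append(A.T @ P @ A - P << -eps * np.eye(2))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)
if prob.status in ("optimal", "optimal_inaccurate"):
    print("common quadratic Lyapunov function found:\n", P.value)
else:
    print("no common quadratic P found (the test is only sufficient)")
```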
Optimal Control of Switched Systems Switched Linear Systems A polynomial approach to solve the optimal control problem of switched systems is presented. It is shown that the representation of the original switched problem into a continuous polynomial systems allow us to use the method of moments. With this method and from a theoretical point of view, we provide necessary and sufficient conditions for the existence of minimizer by using particular features of the minimizer of its relaxed, convex formulation. Even in the absence of classical minimizers of the switched system, the solution of its relaxed formulation provide minimizers. We consider the optimal control problem of switched systems, i.e., continuous systems with switch-NR HDR 30 ing signals. Recent efforts in switched systems research have been typically focused on the analysis of dynamic behaviors, such as stability, controllability and observability, etc. (e.g., [19], [START_REF] Liberzon | Switching in Systems and Control[END_REF]). Although there are several studies facing the problem of optimal control of switched systems (both from theoretical and from computational point of view [37], [36], [27], [39], there are still some problems not tackled, especially in issues where the switching mechanism is a design variable. There, we see how these difficulties arise, and how tools from non-smooth calculus and optimal control can be combined to solve optimal control problems. Previously, the approach based on convex analysis have been treated in [36], and further developed in [27], considering an optimal control problem for a switched system, these approaches do not take into account assumptions about the number of switches nor about the mode sequence, because they are given by the solution of the problem. The authors use a switched system that is embedded into a larger family of systems and the optimal control problem is formulated for this family. When the necessary conditions indicate a bang-bangtype of solution, they obtain a solution to the original problem. However, in the cases when a bang-bang type solution does not exist, the solution to the embedded optimal control problem can be approximated by the trajectory of the switched system generated by an appropriate switching control. On the other hand, in [36] and [34] the authors determine the appropriated control law by finding the singular trajectory along some time with non null measure. Switched Nonlinear Systems The nonlinear, non-convex form of the control variable, prevents us from using the Hamilton equations of the maximum principle and nonlinear mathematical programming techniques on them. Both approaches would entail severe difficulties, either in the integration of the Hamilton equations or in the search method of any numerical optimization algorithm. Consequently, we propose to convexify the control variable by using the method of moments in the polynomial expression in order to deal with this kind of problems. In this paper we present a method for solving optimal control for an autonomous switched systems problem based on the method of moments developed in for optimal control, and in [28], [29], [30] and [32] for global optimization. We propose an alternative approach for computing effectively the solution of nonlinear, optimal control problems. This method works properly when the control variable (i.e., the switching signal) can be expressed as polynomials. 
The essential of this paper is the transformation of a nonlinear, non-convex optimal control problem (i.e., the switched system) into an equivalent optimal control problem with linear and convex structure, which allows us to obtain an equivalent convex formulation more appropriate to be solved by high performance numerical computing. To this end, first of all, it is necessary to transform the original switched system into a continuous non-switched system for which the theory of moments is able to work. Namely, we relate with a given controllable switched system, a controllable continuous non-switched polynomial system. Optimal control problems for switched nonlinear systems are investigated. We propose an alternative approach for solving the optimal control problem for a nonlinear switched system based on the theory of moments. The essence of this method is the transformation of a nonlinear, non-convex optimal control problem, that is, the switched system, into an equivalent optimal control problem with linear and convex structure, which allows us to obtain an equivalent convex formulation more appropriate to be solved by high-performance numerical computing. Consequently, we propose to convexify the control variables by means of the method of moments obtaining semidefinite programs. The paper dealing with this approach is given in the Appendix 2, paper [J.5]. Chapter 3 Supervisory Control of Discrete-Event Systems Multi-Agent Based Supervisory Control Supervisory control initiated by Ramadge and Wonham [START_REF] Ramadge | Supervisory control of a class of discrete-event processes[END_REF] provides a systematic approach for the control of discrete event system (DES) plant. The discrete event system plant be is modeled by a finite state automaton [START_REF] Hopcroft | Introduction to Automata Theory, Languages, and Computation[END_REF], [43]: Definition 1 (Finite-state automaton). A finite-state automaton is defined as a 5-tuple G = (Q, Σ, δ, q 0 , Q m , C) where • Q is the finite set of states, • Σ is the finite set of events, • δ : Q × Σ → Q is the partial transition function, • q 0 ⊆ Q is the initial state, • Q m ⊆ Q is the set of marked states (final states), Let Σ * be the set of all finite strings of elements in Σ including the empty string ε. The transition function δ can be generalized to δ : Σ * × Q → Q in the following recursive manner: δ(ε, q) = q δ(ωσ, q) = δ(σ, δ(ω, q)) for ω ∈ Σ * The notation δ(σ, q)! for any σ ∈ Σ * and q ∈ Q denotes that δ(σ, q) is defined. Let L(G) ⊆ Σ * be the language generated by G, that is, L(G) = {σ ∈ Σ * |δ(σ, q 0 )!} Let K ⊆ Σ * be a language. The set of all prefixes of strings in K is denoted by pr(K) with pr(K) = {σ ∈ Σ * |∃ t ∈ Σ * ; σt ∈ K}. A language K is said to be prefix closed if K = pr(K). The event set Σ is decomposed into two subsets Σ c and Σ uc of controllable and uncontrollable events, respectively, where Σ c ∩ Σ uc = ∅. A controller, called a supervisor, controls the plant by dynamically disabling some of the controllable events. A sequence σ 1 σ 2 . . . σ n ∈ Σ * is called a trace or a word in term of language. We call a valid trace a path from the initial state to a marked state (δ(ω, q 0 ) = q m where ω ∈ Σ * and q m ∈ Q m ). 
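The definitions above translate directly into a few lines of code. The sketch below uses a hypothetical toy plant (not the EMN cell discussed in the next section) and implements a finite-state automaton with a partial transition function, the extension of δ to strings, membership in the generated language L(G), and the notion of a valid trace ending in a marked state.

```python
# Minimal finite-state automaton following Definition 1 (toy plant, hypothetical events).
class Automaton:
    def __init__(self, states, events, delta, q0, marked):
        self.states, self.events = set(states), set(events)
        self.delta = dict(delta)            # partial map (state, event) -> state
        self.q0, self.marked = q0, set(marked)

    def step(self, word, q=None):
        """Extended transition function delta(word, q); None if undefined."""
        q = self.q0 if q is None else q
        for sigma in word:
            q = self.delta.get((q, sigma))
            if q is None:
                return None
        return q

    def generates(self, word):
        """word is in L(G) iff delta(word, q0) is defined."""
        return self.step(word) is not None

    def marks(self, word):
        """word is a valid trace iff it leads to a marked state."""
        return self.step(word) in self.marked

# Toy plant: event 'a' then 'b' returns to the marked initial state 0.
G = Automaton({0, 1}, {'a', 'b'}, {(0, 'a'): 1, (1, 'b'): 0}, 0, {0})
print(G.generates(['a', 'b', 'a']))   # True: the string belongs to L(G)
print(G.marks(['a', 'b']))            # True: it reaches the marked state 0
```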
NR HDR 32 In this section we will focus on the Multi-Agent Based Supervisory Control, introduced by Hubbard and Caines [START_REF] Hubbard | Initial investigations on hierachical supervisory control of multi-agent systems[END_REF]; and the modified approach proposed by Takai and Ushio [START_REF] Takai | Supervisory control of a class of concurrent discrete-event systems[END_REF]. The two approaches have been applied to the supervisory control of the EMN Experimental Manufacturing Cell. This cell is composed of two robotized workstations connected to a central conveyor belt. Then, three new semi-automated workstations have been added in order to increase the flexibility aspects of the cell. Indeed, each semi-automated workstation can perform either manual of robotized tasks. These two aspects correspond to the two different approaches of multi-agent product of subsystems, for supervisory control purpose. The results can be found in [C.25]. Switched Discrete-Event Systems The notion of switched discrete-event systems corresponds to a class of DES where each automaton is the composition of two basic automata, but with different composition operators. A switching occurs when there is a change of the composition operator, but keeping the same two basic automata. A mode behavior, or mode for short, is defined to be by the DES behavior for a given composition operator. Composition operators are supposed to change more than once so that each mode is visited more than once. This new class of DES includes the DES in the context of fault diagnosis where different modes such as e.g., normal, degenerated, emergency modes can be found. The studied situations are the ones where the DES switch between different normal modes, and not necessary the degenerated and the emergency ones. The most common composition operators used in supervisory control theory are the product and the parallel composition [43], [START_REF] Wonham | Notes on Discrete Event Systems[END_REF] However many different types of composition operators have been defined, e.g., the prioritized synchronous composition [49], the biased synchronous composition [START_REF] Lafortune | The infimal closed controllable superlanguage and its application to supervisory control[END_REF], see [START_REF] Wenck | On composition oriented perspective on controllability of large DES[END_REF] for a review of most of the composition operators. Multi-Agent composition operator [START_REF] Romanovski | On the supervisory control of multi-agent product systems[END_REF], [START_REF] Romanovski | Multi-agent product system: Controllability and non-blocking properties[END_REF] is another kind of operator, which differs from the synchronous product in the aspects of simultaneity and synchronization. The new class of DES that we define in this paper includes the class of DES in the context of fault diagnosis, with different operating modes. Furthermore this new class addresses especially the DES for which the system can switch from a given normal mode, to another normal mode. More precisely this new class of DES is an automaton which is the composition of two basic automata, but with different composition operators. A switching corresponds to the change of composition operator, but the two basic automata remains the same. A mode behavior (or mode for short) is defined to be the DES situation for a given composition operator. Composition operators are supposed to change more than once so that each mode is visited more than once. 
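Since the product and parallel composition operators play a central role in what follows, the sketch below gives a minimal implementation of the parallel (synchronous) composition of two automata represented as transition dictionaries; shared events move both components, private events interleave, and the fully synchronous product is recovered by declaring every event shared. The two toy components and their event names are hypothetical.

```python
# Parallel composition of two automata given as (delta, q0, events),
# with delta: (state, event) -> state (partial).
from collections import deque

def parallel(d1, q01, e1, d2, q02, e2):
    # Product composition corresponds to parallel(d1, q01, E, d2, q02, E) with E = e1 | e2.
    shared = e1 & e2
    delta, frontier, seen = {}, deque([(q01, q02)]), {(q01, q02)}
    while frontier:
        p, q = frontier.popleft()
        for e in sorted(e1 | e2):
            if e in shared:
                t1, t2 = d1.get((p, e)), d2.get((q, e))
                if t1 is None or t2 is None:
                    continue            # a shared event needs both components
                tgt = (t1, t2)
            elif e in e1:
                if (p, e) not in d1:
                    continue
                tgt = (d1[(p, e)], q)   # private event of component 1
            else:
                if (q, e) not in d2:
                    continue
                tgt = (p, d2[(q, e)])   # private event of component 2
            delta[((p, q), e)] = tgt
            if tgt not in seen:
                seen.add(tgt)
                frontier.append(tgt)
    return delta, (q01, q02)

# Two toy components sharing only the event 's'.
d1, e1 = {(0, 'a'): 1, (1, 's'): 0}, {'a', 's'}
d2, e2 = {(0, 'b'): 1, (1, 's'): 0}, {'b', 's'}
delta, q0 = parallel(d1, 0, e1, d2, 0, e2)
print(len(delta), "transitions from initial state", q0)
```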
We give here below some examples of switched DES: • Manufacturing systems where the operating modes are changing (e.g. from normal mode to degenerated mode) • Discrete event systems after an emergency signal (from normal to safety mode) • Complex systems changing from normal mode to recovery mode (or from safety mode to normal mode). We can distinguish, like for the switched continuous-time systems, the notion of autonomous switching where no external action is performed and the notion of controlled switching, where the switching is forced. The results for this section can be found in [START_REF] Rakoto-Ravalontsalama | Supervisory control of switched discrete-event systems[END_REF]. Switchable Languages of DES The notion of switchable languages has been defined by Kumar, Takai, Fabian and Ushio in [Kumaret-al. 2005]. It deals with switching supervisory control, where switching means switching between two specifi-cations. In this paper, we first extend the notion of switchable languages to n languages, (n ≥ 3). Then we consider a discrete-event system modeled with weighted automata. The switching supervisory control strategy is based on the cost associated to each event, and it allows us to synthesize an optimal supervisory controller. Finally the proposed methodology is applied to a simple example. We now give the main results of this paper. First, we define a triplet of switchable languages. Second we derive a necessary and sufficient condition for the transitivity of switchable languages (n = 3). Third we generalize this definition to a n-uplet of switchable languages, with n > 3. And fourth we derive a necessary and sufficient condition for the transitivity of switchable languages for n > 3. Triplet of Switchable Languages We extend the notion of pair of switchable languages, defined in [START_REF] Kumar | Maximally Permissive Mutually and Globally Nonblocking Supervision with Application to Switching Control[END_REF], to a triplet of switchable languages. Definition 2 (Triplet of switchable languages). A triplet of languages (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} are said to be a triplet of switchable languages if they are pairwise switchable languages, that is, SW (K 1 , K 2 , K 3 ) := SW (K i , K j ), i ̸ = j, i, j = {1, 2, 3}. Another expression of the triplet of switchable languages is given by the following lemma. Lemma 1 (Triplet of switchable languages). A triplet of languages (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} are said to be a triplet of switchable languages if the following holds: The following theorem gives a necessary and sufficient condition for the transitivity of switchable languages. SW (K 1 , K 2 , K 3 ) = {(H 1 , H 2 , H 3 ) | H i ⊆ K i ∩ pr(H j ), i ̸ = j, Theorem 1 (Transitivity of switchable languages, n = 3) . Given 3 specifications The proof can be found in [42]. (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} such that SW (K 1 , K 2 ) and SW (K 2 , K 3 ). (K 1 , K 3 ) N-uplet of Switchable Languages We now extend the notion of switchable languages, to a n-uplet of switchable languages, with (n > 3). Definition 3 (N-uplet of switchable languages, n > 3). A n-uplet of languages (K 1 , ..., K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}, n > 2 , is said to be a n-uplet of switchable languages if the languages are pairwise switchable that is, SW (K 1 , ..., K n ) := SW (K i , K j ), i ̸ = j, i, j = {1, ..., n}, n > 2. 
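The pairwise condition behind these definitions (H_k ∩ pr(H_l) = H_k and H_l ∩ pr(H_k) = H_l, as in Theorem 1) is straightforward to test on finite languages. The sketch below does so for hypothetical finite languages over {a, b}; controllability of the H_i, which the definitions also require, is not checked here.

```python
# Pairwise switchability test on finite languages (hypothetical example).
def pr(lang):
    """Prefix closure of a finite set of words."""
    return {w[:i] for w in lang for i in range(len(w) + 1)}

def pairwise_switchable(Hk, Hl):
    # H_k subset of pr(H_l) and H_l subset of pr(H_k)
    return Hk <= pr(Hl) and Hl <= pr(Hk)

H1 = {"a", "ab"}
H2 = {"ab"}
H3 = {"b"}
print(pairwise_switchable(H1, H2))   # True: each language lies in the other's prefix closure
print(pairwise_switchable(H1, H3))   # False: "a" is not a prefix of any word of H3
```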
As for the triplet of switchable languages, an alternative expression of the n-uplet of switchable languages is given by the following lemma. Lemma 2 (N-uplet of switchable languages, n > 3). A n-uplet of languages (K 1 , . . . , K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, . .., n}, n > 3 are said to be a n-uplet of switchable languages if the following holds: SW (K 1 , ..., K n ) = {(H 1 , ..., H n ) | H i ⊆ K i ∩ pr(H j ), i ̸ = j , and H i controllable}. Transitivity of Switchable Languages (n > 3) We are now able to derive the following theorem that gives a necessary and sufficient condition for the transitivity of n switchable languages. Theorem 2 (Transitivity of n switchable languages, n > 3) . Given n specifications (K 1 , ..., K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}. Moreover, assume that each language K i is at least switchable with another language K j , i ̸ = j. A pair of languages (K k , K l ) is switchable i.e. SW (K k , K l ), if and only if 1. H k ∩ pr(H l ) = H k , and 2. H l ∩ pr(H k ) = H l . The proof is similar to the proof of Theorem 6 and can be found in [42]. It is to be noted that the assumption that each of the n languages be at least switchable with another language is important, in order to derive the above result. Perspective 1: Control of Smart Grids According to the US Department of Energy's Electricity Advisory Committee, "A Smart Grid brings the power of networked, interactive technologies into an electricity system, giving utilities and consumers unprecedented control over energy use, improving power grid operations and ultimately reducing costs to consumers." The transformation from traditional electric network, with centralized energy production to complex and interconnected network will lead to a smart grid. The five main triggers of Smart grid, according to a major industrial point of view, are 1) Smart energy generation, 2) Flexible distribution, 3) Active energy efficiency, 4) Electric vehicles, and 5) Demand response. From a control point of view, a smart grid is a system of interconnected micro-grids. A microgrid is a power distribution network where generators and users interact. Generators technologies include renewable energy such as wind turbines or photovoltaic cells. The objective of this project is to simulate and control a simplified model of a micro-grid that is a part of a Smart Grid. After a literature review, a simplified model for control will be chosen. Different realistic scenarios will be tested in simulation with MATLAB. Finally different NR Perspective 2: Simulation with Stochastic Petri Nets The Air France CDG Airport Hub in Paris-Roissy is dealing daily with 40,000 transfer luggages and 30,000 local luggages (leaving from or arriving at CDG Airport). For this purpose Air France is exploiting the Sorting Infrastructure of Paris Aeroport, and has to propose a Logistical Scheme Allocation for each luggage in order to optimize the sorting and to minimize the number of failed luggages. By failed luggages, we mean a luggage that does not arrive in time for the assigned flight. The KPI Objective for 2017 is to have less than 20 failed luggages out of 1000 passengers. [2] arise as a suitable representation for Hybrid Dynamical Systems (HDS), in particular for solving control-oriented problems. MLD models can be used for solving a model predictive control (MPC) problem of a particular class of HDS and it is proved that MLD models are equivalent to PieceWise Affine Models in [6]. 
In the paper by Heemels and coworkers, the equivalencies among PieceWise Affine (PWA) Systems, Mixed Logical and Dynamical (MLD) systems, Linear Complementarity (LC) systems, Extended Linear Complementarity (ELC) systems and Max-Min-Plus-Scaling (MMPS) systems are proved, these relations are transcribed here in Fig. 1. I. INTRODUCTION Mixed and Logical Dynamical (MLD) models introduced by Bemporad and Morari in This equivalences are based on some propositions (see [6] for details) A more formal proof can be found in [3], where an efficient technique for obtaining a PWA representation of a MLD model is proposed. The technique in [3] describes a methodology for obtaining, in an efficient form, a partition of the state-input space. The algorithm in [3] uses some tools from polytopes theory in order to avoid the enumeration of the all possible combinations of the integer variables contained in the MLD model. However, the technique does not describe the form to obtain a suitable choice of the PWA model, even though this part is introduced in the implementation provided by the author in [4]. The objective of this paper is to propose an algorithm of the suitable choice of the PWA description and use the PWA description for obtaining some analysis and control of Hybrid Dynamical Systems. II. MLD SYSTEMS AND PWA SYSTEMS A. Mixed and Logical Dynamical (MLD) Systems The idea in the MLD framework is to represent logical propositions as equivalent integer expressions. MLD form is obtained by three basic steps [5]. The first step is to associate a binary variable δ ∈{0,1} with a proposition S, that may be true or false. δ is 1 if and only if proposition S is true. A composed proposition of elementary propositions S 1 ,…,S q combined using the boolean operators like AND(^), OR (∨), NOT(~) may be expressed like integer inequalities over corresponding binary variables δ i , i=1,…,q. The second step is to replace the products of linear functions and logic variables by a new auxiliary variable z = δa T x where a T is a constant vector. The z value is obtained by mixed linear inequalities evaluation. The third step is to describe the dynamical system, binary variables and auxiliary variables in a linear time invariant (LTI) system. A hybrid system MLD described in general form is represented by (1). 1 2 3 1 2 3 2 3 1 4 5 ( 1 ) ( ) ( ) ( ) ( ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) x k Ax k B u k B k B z k y k Cx k D u k D k D z k E k E z k E u k E x k E δ δ δ   + = + + +     = + + +     + ≤ + +    ) (1) where l are the continuous and [ ] { 0 ,1 } c T T n n C l x x x = ∈ × R binary states, u u are the inputs, the outputs, and , , represent the binary and continuous auxiliary variables, respectively. The constraints over state, input, output, z and δ variables are included in the third term in (1). [ ] { 0 ,1 } c T T m m C l u = ∈ × R { 0 ,1 } c l p × , , i i i i i i i x a Bu x u X i x c Du = + + × ∈ ∈ = + + ) ( ) , ( ) i i i i i i i b B u k x u d D u k + + × + l )  [ ] T T p C l y y y = ∈ R x A y C        ( 1 ) ( ( ) ( ) x k Ax k y k C x k  + =     = +   {0,1} l r δ ∈ , t + ∈ Ι , X i k ∈ ∈I c r z ∈ R , + ∈Z B. PieceWise Affine Systems A particular class of hybrid dynamical systems is the system described as follows, ( ) 2 where I is a set of indexes, X i is a sub-space of the real space R n , and R + is the set of positive real numbers including the zero element. In addition to this equation it is necessary to define the form as the system switches among its several modes. 
This equation is affine in the state space x and the systems described in this form are called PieceWise Affine Systems (PWA). In the literature of hybrid dynamical systems the systems described by the autonomous version of this representation are called Switched Systems. If the system vanishes when x brings near to zero, i.e. a i and b i are zero, then the representation is called PieceWise Linear (PWL) system. The discrete-time version of this equation will be used in this work and can be described as follows, where I is a set of indexes, X i is a sub-space of the real space R n . III. MLD SYSTEMS INTO PWA SYSTEMS The MLD framework is a powerful structure for representing hybrid systems in an integrated form. Although E 1 , E 2 , E 3 , E 4 and E 5 matrices are, in general, large matrices, they can be obtained automatically. An example is the HYSDEL compiler [10]. However, some analysis of the system with the MLD representation are computationally more expensive with respect to some tools developed for PWA representations. Exploiting the MLD and PWA equivalencies, it is possible to obtain analysis and control of a system using this equivalent representations. Nevertheless, as it is underlined in [3], this procedure is more complex with respect to the PWA into MLD conversion, and there exist more assumptions. To our knowledge, the only previous approach has been proposed by Bemporad [3]. We propose then a new approach of translating MLD into PWA systems. The MLD structure can be rewritten as follows, 1 1 2 3 1 1 2 3 2 3 2 1 1 5 ( ) ( 1 ) ( ) ( ) ( ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) c c l l c c l l c c l l u k x k Ax k B B B k B z k u k u k y k Cx k D D D k D z k u k u k E k E z k E x k E E E u k δ δ δ          + = + + +                         = + + +                        + ≤ + +               (4) Here, the binary inputs are distinguished from the continuous inputs, because they induce switching modes in the system, in general. Supposing that the system is well posed, z(k) has only one possible value for a given x(k) and u(k), and can be rewritten as: 1 2 3 ( ) ( ) ( ) | [ , ] T T T c z k k x k k u k k m x u b = + + ≤ (5) Replacing this value in the original equations the system can be represented as, 3 1 1 3 2 3 3 3 1 1 3 2 3 3 4 31 1 32 5 2 33 ( 1 ) ( ) ( ) ( ) ( ) ( 1 ) ( ) ( ) ( ) ( ) ( ) ( ) c c x k A B k x k B B k u k B k y k C D k x k D D k u k D k E E k x E E k u E E E k δ   + = + + + +     + = + + + +     -+ +-+ ≤ - -    (6) If an enumeration technique is used for generating all the feasible binary states of the [u l T δ T ] T vector, the first problem is to find a value of [x T u T ] T feasible for the problem, that can be obtained solving the linear programming problem, 1 3 4 5 2 1 min [ ] . . T T T T c c l l X u z x s t E u E z E x E E E u δ  =      - + - ≤ - +    (7) The solution is a feasible value [x *T u *T ] T . The next problem is to find k 1 , k 2 and k 3 . The inequalities can be rewritten as, 3 4 1 1 2 5 41 1 2 5 c c l l c c E z E x E u E u E E E k x E k u E k δ ≤ + + - + = + + 3 (8) where 5 E includes every constant in the problem, i.e. u l and δ. On the other hand, the E 3 matrix reflects the interaction among the z variables, and we can write: 1 2 F z k x k u k 3 × ≤ + + (9) The matrix F represents the interaction among the z variables, if the system is well posed F -1 should exist. 
With this last equation, for finding 3 k the next linear programming problem is solved, 3 * * 3 3 5 2 1 max . . l l k s t E k E E E u δ       ≤ - +    (10) The solution to this problem is 3 k , in this case we assume that all components in 5 E are the maximum and minimum values of z and the only solution for the problem is 3 k . With 3 k we can obtain the other matrices. For obtaining 1 k it is necessary to solve n x , i.e. the length of the state vector, linear programming problems, * * 3 5 2 1 max . . i i l k s t E k E E E u E δ       ≤ - + +    4 l i (11) where E 4i represents the column i of the E 4 matrix and 1 i i k k = -3 k is the column i of the matrix 1 k . For obtaining 2 k it is necessary to solve n u , i.e. the length of the continuous input vector, linear programming problems, * * 3 5 2 1 max . . i i l k s t E k E E E u E δ       ≤ - + +    1 l c i ( 12 ) where E 1ci represents the column i of the E 1c matrix and 2 i i k k k = -3 is the column i of the matrix 2 k . The matrix F should be found solving n z , i.e. the length of the z vector, linear programming problems, * * 3 5 2 1 max . . i i l k s t E k E E E u E δ       ≤ - + +    3 l i ( 13 ) where E 3i represents the column i of the E 3 matrix and 3 i i F k k = -is the column i of the matrix . F Finally, k 1 , k 2 , and k 3 , can be computed as, 1 1 1 2 2 1 3 3 k F k k F k k F k - - -   =     =     =    1 (14) With these equations, the algorithm for translating the MLD model into PWA model is given as follows, Algorithm 1 1. Find a feasible point for the binary vector, composed by the binary inputs and binary auxiliary variables. 2. Compute 3 k using Eq. ( 10). 3. Compute 1 k , 2 k and F using Eq. ( 11), ( 12) and (13). 4. Compute k 1 , k 2 , and k 3 using Eq. ( 14). 5. Using Eq. ( 6), compute A i , B i , f i , C i , D i and g i and the valid region for this representation. 6. If there exists another feasible point go to step 1. 7. End. Some gains in the algorithm performance can be obtained if the vector z is evaluated after step one, using a linear program for finding the maximum and the minimum in z, if the z min and z max solutions are the same, it is not necessary to calculate steps 3, and 4, and z = z min = z max can be assigned directly. IV. EXAMPLES A. The Three-Tank Benchmark Problem The three-tank benchmark problem has been proposed as an interesting hybrid dynamical system. This Benchmark was proposed in [7] and [8]. See [13] and references there in for some control results using MLD framework in this system. The algorithm described in the last section is used for obtaining a PWA representation of this system. This system has three tanks each of them interconnected with another as depicted in Fig. 2. Fig. 2. 
Three Tank System The model is written using binary variables (δ i ) and relational expressions, 01 1 1 1 1 2 2 0 2 2 3 3 03 3 3 ( ) 1 1 ( 1 ( ) v v v v v v Z h h h h h h Z h h h h Z h h δ δ δ δ δ δ     = - = ↔ >         = ↔ > = -         = ↔ > = -       2 ) 1 2 1 0 1 0 3 2 0 2 0 3 13 1 3 13 23 2 3 23 ( ) ( ) ( ) ( ) Z Z Z V Z Z Z V Z h h V Z h h V  = -     = -      = -     = -    1 1 1 3 1 1 1 1 1 1 31 1 ( ) ( ) ( ) ( ) ( 1 ) ( ) *( s L q k h k Z k Z k h k h k T C RC R C RC + = + - - - 1 ) 2 2 3 2 2 2 2 2 3 2 2 ( ) ( ) ( ) ( 1 ) ( ) *( q k Z k Z k h k h k Ts C R C RC + = - - - 2 3 1 3 2 3 1 2 3 3 3 1 3 3 2 3 2 1 3 2 ( ) ( ) ( ) ( ) ( ) ( 1 ) ( ) *( N h k Z k Z k Z k Z k h k h k Ts R C R C R C RC R C + = + - + + + + 3 ) ) The simulation of the system using the MLD framework and a Mixed Integer Quadratic Programming MIQP algorithm running in an Intel Celeron 2GHz processor and 256MB of RAM was 592.2s, using the PWA representation the same simulation was 1.33s. The time for obtaining the PWA model using the technique described in this work is 72.90s and the algorithm found 128 regions. Using the algorithm in [4] the computation time of the PWA form was 93.88s and the total regions found was 100 and the simulation took 5.89s. These results are summarized in Table I. Where Computation Time is the time taken by the computer for computing the PWA model based in the MLD model, and Simulation Time is the time taken by the computer for computing a trajectory given a model, an initial state and an input sequence. The simulation results with MLD model and the error between PWA simulation results and MLD simulation results, for the same input are shown in Fig. 3 In this case, at t=30s, the simulation with the PWA system in the Figure 3.b produces a switching to an invalid operation mode. B. Car with Robotized Manual Gear Shift The example of a Hybrid Model of a Car with Robotized Manual Gear Shift was reported in [9] and is used in [3] as example. The car dynamics is driven by the following equation, e b mx F F x β = -- ( 15 ) where m is the car mass, x and x is the car speed and acceleration, respectively, F e is the traction force, F b is the brake force and β is the friction coefficient. The Transmission Kinematics are given by, ( ) ( ) g s g e s R i x k R i F M k ω = = where ω is the engine speed, M the engine torque and i is the gear position. The engine torque M is restricted to belongs between the minimum engine torque C and the maximum engine torque C . ( ) e ω - ( ) e ω + The model has two continuous states, position and velocity of car, two continuous inputs, engine torque and breaking force, and six binary inputs, the gear positions. The MLD model was obtained using the HYSDEL tool. The translation of the MLD model took 155.73 s and the PWA model found 30 sub-models, using the algorithm proposed in this work, and the PWA model using the algorithm proposed in [3] took 115.52 s and contains 18 sub-models. The simulation time with MLD model and a MIQP algorithm for 250 iterations took 296.25s, using the PWA model obtained with the algorithm proposed here took 0.17s, and using the PWA model obtained using the algorithm in [4] the simulation took 0.35s. These results are summarized in Table II C. The Drinking Water Treatment Plant The example of a Drinking Water Treatment Plant has been reported in [11] and [12]. This plant was modeled using identification techniques for hybrid dynamical systems, and its behavior includes autonomous jumps. 
The plant modeled is based in the current operation of drinking water plant Francisco Wiesner situated at the periphery of Bogotá D.C. city (Colombia), which treats on average 12m 3 /s. The volume of water produced by this plant is near to 60% of consumption by the Colombian capital. In this plant, there exist two water sources: Chingaza and San Rafael reservoirs which can provide till 22m 3 /s of water. The process mixes inlet water with a chemical solution in order to generate aggregated particles that can be caught in a filter. The dynamic of the filter is governed by the differential pressure across the filter and the outlet water turbidity. An automaton associated to the filter executes a back-washing operation when the filter performance is degraded. Because of process non-linearity, the behavior of the system is different with two water sources, that is the case for the particular plant modeled. The model for each water source includes a dynamic for the aggregation particle process which dynamical variable is called Streaming Current (SC) and is modeled using two state variables, a dynamic for the differential pressure called Head Loss (HL) with only one state variable, a dynamic for the outlet turbidity (T o ) with two state variables. The identified model consists of four affine models, two for each water source in normal operation, one model in maintenance operation, one model representing the jump produced at the end of the maintenance operation. i i i i i i x k Ax k B u k f y k C x k D u k g i  + = + +     = + +   ∈ to normal operation} where water source is an input variables, maintenance operation is executed if outlet turbidity (T o ) is greater than a predefined threshold, or, Head Loss (HL) is greater than a predefined threshold, or, operation time is greater than a predefined threshold. The MLD model has 7 continuous states (including two variables for two timers in the automaton), 4 continuous inputs (dosage, water flow, inlet turbidity and pH), 3 binary inputs (water source, back-washing operation and normal operation), 8 auxiliary binary variables, and 51 auxiliary variables. The complete model can be obtained by mail from the corresponding authors. The The simulation results for the same input are shown in Fig. 5, (a) MLD Model (b) Error between MLD and PWA [4] (c) Error between MLD and PWA-This Work Fig. 5. Simulation results for a water plant model. In this case, at t=168min, the simulation with the PWA system in the Figure 5.b is not valid because there exist no mode in the PWA representation that belongs to the stateinput vector reached in this point. Some other results can be found in [14]. V. CONCLUSIONS This work presents new algorithm for obtaining a suitable choice of the PWA description from a MLD representation. The results are applied to the three-tank benchmark problem, to a car with robotized gear shift and to a drinking water plant, the three examples have been reported in the literature as examples of hybrid dynamical systems modeled with MLD formalism. The simulation results show that the PWA models obtained have the same behavior with respect to the MLD models. However in some cases the obtained PWA model does not have a valid solution for some state-input sub-spaces. 
As a consequence of the enumeration procedure, our PWA models have more submodels/regions than the algorithm in [3], however we show that the procedure does not spent much more computation time because of the simplicity in its formulation, and it ensures the covering of all regions included in the original MLD model. Ongoing work concerns the analysis of MLD Systems with some results from PWA systems. INTRODUCTION Switched nonlinear control systems are characterized by a set of several continuous nonlinear state dynamics with a logic-based controller, which determines simultaneously a sequence of switching times and a sequence of modes. As performance and efficiency are key issues in modern technological system such as automobiles, robots, chemical processes, power systems among others, the design of optimal logic-based controllers, covering all those functionalities while satisfying physical and operational constraints, plays a fundamental role. In the last years, several researchers have considered the optimal control of switched systems. An early work on the problem is presented in [1], where a class of hybrid-state continuous-time dynamic system is investigated. Later, a generalization of the optimal control problem and algorithms of hybrid systems is presented [2]. The particular case of the optimal control problem of switched systems is presented in [3] and [4]. However, most of the efforts have been typically focused on linear subsystems [5]. In general, the optimal control problem of switched system is often computationally hard as it encloses both elements of optimal control as well as combinatorial optimization [6]. In particular, necessary optimality conditions for hybrid systems have been derived using general versions of the Maximum Principle [7,8] and more recently in [9]. In the case of switching systems [4] and [6], the switched system has been embedded into a larger family of systems, and the optimization problem is formulated. For general hybrid systems, with nonlinear dynamics in each location and with autonomous and controlled switching, necessary optimality conditions have recently been presented in [10]; and using these conditions, algorithms based on the hybrid Maximum Principle have been derived. Focusing on real-time applications, an optimal control problem for switched dynamical systems is considered, where the objective is to minimize a cost functional defined on the state, and where the control variable consists of the switching times [11]. It is widely perceived that the best numerical methods available for hybrid optimal control problems involve mixed integer programming (MIP) [12,13]. Even though great progress has been made in recent years in improving these methods, the MIP is an NP-hard problem, so scalability is problematic. One solution for this problem is to use the traditional nonlinear programming techniques such as sequential quadratic programming, which reduces dramatically the computational complexity over existing approaches [6]. The main contribution of this paper is an alternative approach to solve effectively the optimal control problem for an autonomous nonlinear switched system based on the probability measures introduced in [14], and later used in [15] and [16] to establish existence conditions for an infinitedimensional linear program over a space of measure. 
Then, we apply the theory of moments, a method previously introduced for global optimization with polynomials in [17,18], and later extended to nonlinear 0 1 programs using an explicit equivalent positive semidefinite program in [19]. We also use some results recently introduced for optimal control problems with the control variable expressed as polynomials [20][21][22]. The first approach relating switched systems and polynomial representations can be found in [23]. The moment approach for global polynomial optimization based on semidefinite programming (SDP) is consistent, as it simplifies and/or has better convergence properties when solving convex problems. This approach works properly when the control variable (i.e., the switching signal) can be expressed as a polynomial. Essentially, this method transforms a nonlinear, nonconvex optimal control problem (i.e., the switched system) into an equivalent optimal control problem with linear and convex structure, which allows us to obtain an equivalent convex formulation more appropriate to be solved by high-performance numerical computing. In other words, we transform a given controllable switched nonlinear system into a controllable continuous system with a linear and convex structure in the control variable. This paper is organized as follows. In Section 2, we present some definitions and preliminaries. A semidefinite relaxation using the moment approach is developed in Section 3. An algorithm is developed on the basis of the semidefinite approach in Section 4 with a numerical example to illustrate our approach, and finally in Section 5, some conclusions are drawn. THE SWITCHED OPTIMAL CONTROL PROBLEM Switched systems The switched system adopted in this work has a general mathematical model described by P x.t/ D f .t/ .x.t //, ( 1 ) where x.t/ is the state, f i W R n 7 ! R n is the i th vector field, x.t 0 / D x 0 are fixed initial values, and W OEt 0 , t f 7 ! Q 2 ¹0, 1, 2, ..., qº is a piecewise constant function of time, with t 0 and t f as the initial and final times, respectively. Every mode of operation corresponds to a specific subsystem P x.t/ D f i .x.t //, for some i 2 Q, and the switching signal determines which subsystem is followed at each point of time into the interval OEt 0 , t f . The control input is a measurable function. In addition, we consider a non-Zeno behavior, that is, we exclude an infinite switching accumulation points in time. Finally, we assume that the state does not have jump discontinuities. Moreover, for the interval OEt 0 , t f , the control functions must be chosen so that the initial and final conditions are satisfied. Definition 1 A control for the switched system in (1) is a duplet consisting of (i) a finite sequence of modes, and (ii) a finite sequence of switching times such that, t 0 < t 1 < < t q D t f . Switched optimal control problem Let us define the optimization functional in Bolza form to be minimized as J D '.x.t f // C Z t f t 0 L .t/ .t , x.t//dt , ( 2 ) where '.x.t f // is a real-valued function, and the running switched costs L .t/ W R C R n 7 ! R are continuously differentiable for each 2 Q. A switched optimal control problem (SOCP) can be stated in a general form as follows. Definition 2 Given the switched system in (1) and a Bolza cost functional J as in (2), the SOCP is given by min .t/2Q J.t 0 , t f , x.t 0 /, x.t f /, x.t/, .t// (3) subject to the state x. / satisfying Equation (1). 
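As a concrete illustration of the model in (1), the following Python sketch integrates a switched system under a given piecewise-constant switching signal with forward Euler; the two vector fields and the signal are placeholders, not taken from any particular application.

import numpy as np

# Sketch of the switched dynamics x'(t) = f_{sigma(t)}(x(t)) in (1).
f = [lambda x: np.array([-x[0] + x[1], -x[1]]),       # mode 0
     lambda x: np.array([-2.0 * x[0], x[0] - x[1]])]  # mode 1

def sigma(t):
    # piecewise-constant switching signal on [t0, tf]
    return 0 if t < 1.0 else 1

def simulate(x0, t0=0.0, tf=2.0, h=1e-3):
    t, x = t0, np.asarray(x0, dtype=float)
    while t < tf:
        x = x + h * f[sigma(t)](x)   # follow the active subsystem
        t += h
    return x

print(simulate([1.0, 0.5]))          # state at tf under this mode sequence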
The SOCP can have the usual variations of fixed or free initial or terminal state, free terminal time, and so forth. A Polynomial representation The starting point is to rewrite (1) as a continuous non-switched control system as it has been shown in [24]. The polynomial expression in the control variable able to mimic the behavior of the switched system is developed using a variable v, which works as a control variable. A polynomial expression in the new control variable v.t/ can be obtained through Lagrange polynomial interpolation and a constraint polynomial as follows. First, let the Lagrange polynomial interpolation quotients be defined as [25], l k .v/ D q Y i D0 i ¤k .v i/ .k i/ . ( 4 ) The control variable is restricted by the set D ¹v 2 R jg.v/ D 0 º, where g.v/ is defined by g.v/ D q Y kD0 .v k/. ( 5 ) General conditions for the subsystems functions should be satisfied. Assumption 3 The nonlinear switched system satisfies growth, Lipschitz continuity, and coercivity qualifications concerning the mappings f i W R n 7 ! R n L i W R n 7 ! R to ensure existence of solutions of (1). The solution of this system may be interpreted as an explicit ODE on the manifold . A related continuous polynomial system of the switched system (1) is constructed in the following proposition [24]. Proposition 4 Consider a switched system of the form given in (1). There exists a unique continuous state system with polynomial dependence in the control variable v, F.x, v/ of degree q in v, with v 2 as follows: P x D F.x, v/ D q X kD0 f k .x/l k .v/. ( 6 ) Then, this polynomial system is an equivalent polynomial representation of the switched system (1). Similarly, we define a polynomial equivalent representation for the running cost L .t/ by using the Lagrange's quotients as follows. Proposition 5 Consider a switched running cost of the form given in (2). There exists a unique polynomial running cost equation L.x, v/ of degree q in v, with v 2 as follows: L.x, v/ D q X kD0 L k .x/l k .v/ (7) with l k .v/ defined in (4). Then, this polynomial system is an equivalent polynomial representation of the switched running cost in (2). The equivalent optimal control problem (EOCP), which is based on the equivalent polynomial representation is described next. The functional using Equation ( 7) is defined by J D '.x.t f // C Z t f t 0 L.x, v/dt , ( 8 ) subject to the system defined in (6), with x 2 R n , v 2 , and x.t 0 / D x 0 , where l k .v/, , and L are defined earlier. Note that this control problem is a continuous polynomial system with the input constrained by a polynomial g.v/. This polynomial constraint is nonconvex with a disjoint feasible set, and traditional optimization solvers perform poorly on such equations, as the necessary constraint qualification is violated. This makes this problem intractable directly by traditional nonlinear optimization solvers. Next, we propose a convexification of the EOCP using the special structure of the control variable v, which improves the optimization process. SEMIDEFINITE RELAXATION USING A MOMENTS APPROACH Relaxation of the optimal control problem We describe the relaxation of the polynomial optimal control problem, for which, regardless of convexity assumptions, existence of optimal solutions can be achieved. Classical relaxation results establish, under some technical assumptions, that the infimum of any functional does not change when we replace the integrand by its convexification. 
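Before turning to the relaxation, a quick numerical check of the embedding in (4)-(6) may be helpful. The Python sketch below builds the Lagrange quotients l_k(v), the constraint polynomial g(v), and the embedded field F(x, v), and verifies that F(x, k) reproduces f_k(x) at the integer modes; the subsystems f_k are placeholders.

import numpy as np

# Lagrange quotients l_k(v) of (4), constraint polynomial g(v) of (5), and the
# polynomial embedding F(x, v) of (6).  The subsystems f_k are placeholders.
def l(k, v, q):
    return np.prod([(v - i) / (k - i) for i in range(q + 1) if i != k])

def g(v, q):
    return np.prod([v - k for k in range(q + 1)])

f = [lambda x: -x, lambda x: x ** 2, lambda x: 1.0 + x]   # q = 2 subsystems
q = len(f) - 1

def F(x, v):
    return sum(f[k](x) * l(k, v, q) for k in range(q + 1))

x = 0.7
for k in range(q + 1):
    assert np.isclose(F(x, k), f[k](x))   # F agrees with f_k at integer v = k
    assert np.isclose(g(k, q), 0.0)       # integer modes satisfy g(v) = 0
print(F(x, 1.0), f[1](x))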
In the previous section, a continuous representation of the switched system has been presented. This representation has a polynomial form in the control variable, which implies that this system is nonlinear and nonconvex with a disjoint feasible set. Thus, traditional optimization solvers have a disadvantaged performance, either by means of the direct methods (i.e., nonlinear programming) or indirect methods (i.e., Maximum Principle). We propose then, an alternative approach to deal with this problem. The main idea of this approach is to convexify the control variable in polynomial form by means of the method of moments. This method has been recently developed for optimization problems in polynomial form (see [17,18], among others). Therefore, a linear and convex relaxation of the polynomial problem ( 8) is presented next. The relaxed version of the problem is formulated in terms of probability measures associated with sequences of admissible controls [15]. Let be the set of admissible controls v.t/. The set of probability measures associated to the admissible controls in is ƒ D ® D ¹ t º t 2OEt 0 ,t f W supp. t / , a.e., t 2 OEt 0 , t f ¯, where is a probability measure supported in . The functional J.x, v/ defined on ƒ is now given by J.x, v/ D '.x.t f // C Z t f t 0 Z L.x.t /, v/d t .v/dt , where x.t/ is the solution of P x.t/ D Z F.x, v/d t .v/, x.t 0 / D x 0 . We have obtained a reformulation of the problem that is an infinite dimensional linear program and thus not tractable as it stands. However, the polynomial dependence in the control variable allows us to obtain a semidefinite program or linear matrix inequality relaxation, with finitely many constraints and variables. By means of moments variables, an equivalent convex formulation more appropriate to be solved by numerical computing can be rendered. The method of moments takes a proper formulation in probability measures of a nonconvex optimization problem ( [18,23], and references therein). Thus, when the problem can be stated in terms of polynomial expressions in the control variable, we can transform the measures into algebraic moments to obtain a new convex program defined in a new set of variables that represent the moments of every measure [17,18,22]. We define the space of moments as D ² m D ¹m k º W m k D Z v k d .v/, 2 P . / ³ , where P . / is the convex set of all probability measures supported in . In addition, a sequence m D ¹m k º has a representing measure supported in only if these moments are restricted to be entries on positive semidefinite moments and localizing matrices [17,19]. For this particular case, when the control variable is of dimension one, the moment matrix is a Hankel matrix with m 0 D 1, that is, for a moment matrix of degree d , we have M d .m/ D 2 6 6 6 4 m 0 m 1 m d m 1 m 2 m d C1 . . . . . . . . . m d m d C1 m 2d 3 7 7 7 5 . The localizing matrix is defined on the basis of corresponding moment matrix, whose positivity is directly related to the existence of a representing measure with support in as follows. Consider the set defined by the polynomial ˇ.v/ D ˇ0 C ˇ1v C ˇd v Á . It can be represented in moment variables as ˇ.m/ D ˇ0 C ˇ1m 1 C ˇÁm Á , or in compact form as ˇ.m/ D P Á D0 ˇ m . Suppose that the entries of the corresponding moment matrix are m , with 2 OE0, 1, : : : , 2d . Thus, every entry of the localizing matrix is defined as l D P d D0 ˇ m C . 
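A small Python sketch of how these matrices are assembled from a univariate moment sequence m = (m_0, m_1, ...), with m_0 = 1, following the entry rule just given; the moment values used here are placeholders (the moments of the Dirac measure at v = 1).

import numpy as np

# Hankel moment matrix M_d(m) and localizing matrix built from the entry rule
# l_ij = sum_a beta_a * m_{a+i+j}, for a univariate moment sequence m.
def moment_matrix(m, d):
    return np.array([[m[i + j] for j in range(d + 1)] for i in range(d + 1)])

def localizing_matrix(m, beta, d):
    return np.array([[sum(b * m[a + i + j] for a, b in enumerate(beta))
                      for j in range(d + 1)] for i in range(d + 1)])

m = [1.0] * 6                                     # Dirac measure at v = 1
print(moment_matrix(m, 1))                        # [[1, 1], [1, 1]]
print(localizing_matrix(m, [0.0, 1.0, 2.0], 1))   # beta(v) = v + 2 v^2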
Note that the localizing matrix has the same dimension of the moment matrix, that is, if d D 1 and the polynomial ˇD v C 2v 2 , then the moment and localizing matrices are M 1 .m/ D Ä 1 m 1 m 1 m 2 , M 1 .ˇm/ D Ä m 1 C 2m 2 m 2 C 2m 3 m 2 C 2m 3 m 3 C 2m 4 . More details on the method of moments can be found in [19,26]. Because J is a polynomial in v of degree q, the criterion R Ld involves only the moments of up to order q and is linear in the moment variables. Hence, we replace with the finite sequence m D ¹m k º of all its moments up to order q. We can then express the linear combination of the functional J and the space of moments as follows min v2 J.x, v/ ! min 2P . / Z J.x, v/d .v/ D min m k 2 Z t f t 0 X i X k L i .x/˛i k m k , ( 9 ) where ˛ik are the coefficients resulting of the factorization of Equation ( 4). Similarly, we obtain the convexification of the state equation P x.t/ D Z F.x, v/d .v/ D X i X k f i .x/˛i k m k . ( 10 ) We have now a problem in moment variables, which can be solved by efficient computational tools as it is shown in the next section. Semidefinite programs for the EOCP We can use the functional and the state equation with moment structure to rewrite the relaxed formulation as a SDP. First, we need to redefine the control set to be coherent with the definitions of localizing matrix and representation results. We treat the polynomial g.v/ as two opposite inequalities, that is, g 1 .v/ D g.v/ > 0 and g 2 .v/ D g.v/ > 0, and we redefine the compact set to be D ¹g i .v/ > 0, i D 1, 2º. Also, we define also a prefixed order of relaxation, which is directly related to the number of subsystems. Let w be the degree of the polynomial g.v/, which is equivalent to the degree of the polynomials g 1 and g 2 . Considering its parity, we have that if w is even (odd) then r D w=2 (r D .w C 1/=2). In this case, r corresponds to the prefixed order of relaxation. We use a direct transcription method to obtain an SDP to be solved through a nonlinear programming (NLP) algorithm [27]. Using a discretization method, the first step is to split the time interval OEt 0 , t f into N subintervals as t 0 < t 1 < t 2 < : : : < t N D t f , with a time step h predefined by the user. The integral term in the functional is implicitly represented as an additional state variable, transforming the original problem in Bolza form into a problem in Mayer form, which is a standard transformation [27]. Therefore, we obtain a set of discrete equations in moment variables. In this particular case, we have used a trapezoidal discretization, but we could have used a more elaborated discretization scheme. Thus, the optimal control problem can be formulated as an SDP. Consider a fixed t in the time interval OEt 0 , t f and let Assumption 3 holds. We can state the following SDP of relaxation order r (SDP r ). Semidefinite program-SDP r : For every j D ¹1, 2, : : : , N º, a semidefinite program SDP r can be described by J r D min m.t j / h 2 N 1 X j D0 L.x.t j /, m.t j // s.t. x.t j C1 / D x.t j / C h X i X k f i .x.t j //˛i k m k .t j /, x.t 0 / D x 0 , (11) M r .m.t j // 0, M 0 .g 1 m.t j // 0, M 0 .g 2 m.t j // 0. Notice that in this case, the localizing matrices are linear. Let us consider the two subsystems case, that is, we have g D v 2 v that leads to polynomials g 1 D v 2 v and g 2 D v v 2 , thus w D deg g D 2. The localizing matrices are M 0 .g 1 m/ D m 2 m 1 , so M 0 .g 2 m/ D m 1 m 2 . 
This happens because we are using the minimum order of relaxation, r D w=2 or r D .w C 1/=2 depending on its parity. It is also known that the optimum J r is not always an optimal solution. However, in this case, a suboptimal solution is obtained, which corresponds to a lower bound on the global optimum J of the original problem. If we are interested in searching for an optimal solution, we can use a higher order of relaxation, that is, r > w=2, but the number of moment variables will increase, which can make the problem numerically inefficient. However, in many cases, low order relaxations will provide the optimal value J as shown in the next section, where we use a criterion to test whether the SDP r relaxation achieves the optimal value J for a fixed time. Still, suboptimal solutions of the original problem are obtained in the iteration that can be used. In order to solve a traditional NLP, we use the characteristic form of the moment and localizing matrices. We know that the moment matrices, and so the localizing matrices, are symmetric positive definite, which implies that every principal subdeterminant is positive [21]. Then, we use the set of subdeterminants of each matrix as algebraic constraints. Analysis of solutions Once a solution has been obtained in a subinterval OEt j 1 , t j , we obtain a vector of moments m .t j / D OEm 1 .t j /, m 2 .t j /, : : : , m r .t j /. Then, we need to verify if we have attained an optimal solution. On the basis of a rank condition of the moment matrix [26], we can test if we have obtained a global optimum at a relaxation order r. Also, on the basis of the same rank condition, we can check whether the optimal solution is unique or if it is a convex combination of several minimizers. The next result is based on an important result presented in [26] and used in [19] for optimization of 0 1 problems. Proposition 6 For a fixed time t j in the interval OEt 0 , t f , the SDP r (11) is solved with an optimal vector solution m .t j /, if r D rank M r m .t j / D rank M 0 m .t j / , (12) then the global optimum has been reached and the problem for the fixed time t j has r optimal solutions. Note that the rank condition ( 12) is a sufficient condition, which implies that the global optimum could be reached at some relaxation of order r and still the rank M r > rank M 0 . It should be noted that for the particular case of minimum order of relaxation, the rank condition yields r D rank M r .m.t j // D rank M 0 .m.t j // D 1, because M 0 D 1. Then, the rank M 0 D 1, which implies that when r > 1, that is, several solutions arise. In this case, we obtain a suboptimal switching solution. Using the previous result, we can state some relations between solutions that can be used to obtain the switching signal in every t j . First, we state the following result valid for the unique solution case. Theorem 7 If Problem (11) is solved for a fixed t j 2 OEt 0 , t f and the rank condition in ( 12) is verified with r D rank M r .m .t j // D 1, then the vector of moments m .t j / has attained a unique optimal global solution; and therefore, the optimal switching signal of the switched problem (3) for the fixed time t j is obtained as .t j / D m 1 .t j /, (13) where m 1 .t j / is the first moment of the vector of moments m .t j /. Proof Suppose the problem (11) has been solved for a fixed t j , and a solution has been obtained. Let m .t j / be the solution obtained and the rank condition (12) has been verified. From a result presented in [19], it follows that min 2P . 
/ Z J.x, v/d .v/ D min m k 2 Z t f t 0 X i X k L i .x/˛i k m k , where m .t j / D m 1 , : : : , m r is the vector of moments of some measure m . But then, as m is supported on , it also follows that m .t j / is an optimal solution and because of rank M r m .t j / D 1, this solution is unique and it is the solution of the polynomial problem (8). Then, we know that every optimal solution v corresponds to m .t j / D v .t j /, v Remark 8 Switched linear systems case. When we have a switched linear system, that is, when each subsystem is defined by a linear system, results presented in Theorem ( 7) can be directly applied, because Assumption (3) is satisfied for linear systems because the Lipschitz condition is satisfied globally [28]. Also, we can notice that if the switched linear system has one and only one switching solution, it corresponds to the first moment solution of the SDP r program for all t 2 OEt 0 , t f , that is, m 1 .t j / D .t j /, for all t j 2 OEt 0 , t f . This can be verified by means of the rank condition (12), which should be r D 1, for all t 2 OEt 0 , t f . This result states a correspondence between the minimizer of the original switched problem and the minimizer of the SDP r , and it can be used to obtain a switching signal directly from the solution of the SDP r . However, it is not always the case. Sometimes, we obtain a non-optimal solution that arises when the rank condition is not satisfied, that is, r > 1. But, we still can use information from the solution to obtain a switching suboptimal solution. In [29], a sum up rounding strategy is presented to obtain a suboptimal switched solution from a relaxed solution in the case of mixedinteger optimal control. We use a similar idea but extended to the case when the relaxed solution is any integer instead of the binary case. Consider the first moment m 1 . / W OEt 0 , t f 7 ! OE0, q, which is a relaxed solution of the NLP problem for t j when the rank condition is not satisfied. We can state a correspondence between the relaxed solution and a suboptimal switching solution, which is close to the relaxed solution in average and is given by .t j / D 8 < : dm 1 .t j /e if Z t j t 0 m 1 . /d ıt j 1 X kD0 .t k / > 0.5ıt bm 1 .t j /c otherwise (14) where d e and b c are the ceiling and floor functions, respectively. A SWITCHED OPTIMIZATION ALGORITHM The ideas presented earlier are summarized in the following algorithm, which is implemented in Section 4.2 on a simple numerical example presented as a benchmark in [30]. The core of the algorithm is the inter-relationship of three main ideas: (i) The equivalent optimal control problem The EOCP is formulated as in Section 2, where the equivalent representation of the switched system and the running cost are used to obtain a polynomial continuous system. (ii) The relaxation of the EOCP -the theory of moments The EOCP is now transformed into an SDP of order of relaxation r, which can be solved numerically efficiently. We obtain an equivalent linear convex formulation in the control variable. (iii)The relationship between the solutions of the original switched problem and the SDP solutions The solutions of the SDP r for each t j 2 OEt 0 , t f are obtained; and through an extracting algorithm, the solutions of the original problem are obtained. Algorithm SDP r -SOCP The optimal control pseudo-code algorithm for the switched systems is shown in Algorithm 1. In the next section, we present a numerical example to illustrate the results presented in this work. 
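A sum-up-rounding-style sketch in the spirit of (14), written in Python: the relaxed first-moment trajectory m_1(t_j) is converted into an integer switching signal whose accumulated decisions track the accumulated relaxed solution. The relaxed trajectory below is a placeholder, and the integral is approximated over each step including the current one.

import numpy as np

# Rounding of a relaxed switching signal: take ceil(m1(t_j)) when the
# accumulated relaxed "mass" exceeds the accumulated integer decisions by more
# than half a step, floor(m1(t_j)) otherwise.
def round_switching(m1, dt):
    sigma, integral, accum = [], 0.0, 0.0
    for mj in m1:
        integral += mj * dt                 # approx. of the integral of m1
        s = int(np.ceil(mj)) if integral - accum > 0.5 * dt else int(np.floor(mj))
        sigma.append(s)
        accum += s * dt                     # accumulated past decisions
    return sigma

m1_relaxed = [0.2, 0.7, 0.9, 0.4, 0.1, 0.6]  # placeholder relaxed solution
print(round_switching(m1_relaxed, dt=0.5))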
Numerical example: Lotka-Volterra problem We present an illustrative example of a switched nonlinear optimal control problem reformulated as a polynomial optimal control problem. This reformulation then allows us to apply the semidefinite relaxation based on the theory of moments. We illustrate an efficient computational treatment of the optimal control problem of switched systems recast as a polynomial expression. We deal with the Lotka-Volterra fishing problem. Basically, the idea is to find an optimal strategy on a fixed time horizon to bring the biomass of both predator and prey fish to a prescribed steady state. The system has two operation modes and a switching signal as a control variable. The optimal integer control shows chattering behavior, which makes this problem a benchmark to test different types of algorithms ‡ . The Lotka-Volterra model, also known as the predator-prey model, is a pair of coupled nonlinear differential equations where the biomasses of two fish species are the differential states x_1 and x_2, the binary control is the operation of a fishing fleet, and the objective is to penalize deviation from a steady state. The optimal control problem is described as follows:
min_u ∫_{t_0}^{t_f} [(x_1 − 1)^2 + (x_2 − 1)^2] dt
s.t. ẋ_1 = x_1 − x_1 x_2 − 0.4 x_1 u,
     ẋ_2 = −x_2 + x_1 x_2 − 0.2 x_2 u,
     x(0) = (0.5, 0.7)^⊤, u(t) ∈ {0, 1}, t ∈ [0, 12].
The problem can be represented by the approach described earlier. Consider a subsystem f_0 when the control variable takes value 0, and a subsystem f_1 when the control variable takes value 1. This leads to two operation modes and a switching control variable σ(·): [0, 12] → {0, 1}. Thus, by means of the algorithm SDP_r-EOCP, an SDP program can be stated. First, we define the order of relaxation as r = w/2 = 1; the constraint control set as Ω = {v : g_i(v) ≥ 0, g_1(v) = v^2 − v, g_2(v) = v − v^2}; the moment matrix with r = 1, M_1(m); and the localizing matrices, M_0(g_1 m) and M_0(g_2 m). Using the set Ω and the moment and localizing matrices, we set the problem in moment variables, obtaining the positive semidefinite program (SDP_r). Solving the SDP_r program for each t ∈ [0, 12], with a time step h, we obtain an optimal trajectory, and the moment sequence allows us to calculate the switching signal. Figure 1 shows the trajectories, the relaxed moment solution, and the switching signal obtained for an order of relaxation r = 1. It can be appreciated that when the relaxed solution has a unique optimal solution, that is, when the rank condition is satisfied, the relaxed solution is exact and integer and corresponds to the switching signal, which shows the validity of Theorem 7. It is also shown that when the rank condition is not satisfied, the proposed algorithm gives a suitable solution that is, on average, close to the relaxed solution. The algorithm has shown that even if there is no global optimal solution, a local suboptimal solution is found. Furthermore, for the intervals where there is no optimal solution, a suboptimal solution has been found using the relaxed solution. In comparison with traditional algorithms, where a global suboptimal solution based on a relaxation is found, the proposed algorithm is able to detect whether an optimal solution is found in a time interval, which implies that if the system is composed of convex functions, a global optimal solution is found. The computational efficiency stems from the semidefinite programming solution methods.
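For reference, the following Python sketch simulates the two modes of the Lotka-Volterra fishing dynamics above under a fixed switching signal and accumulates the running cost; the switching signal used here is a placeholder, not the optimal one computed by the SDP_r relaxation.

import numpy as np

# Lotka-Volterra fishing dynamics (u = 0: no fishing, u = 1: fishing),
# forward-Euler integration, with running cost (x1 - 1)^2 + (x2 - 1)^2.
def mode(x, u):
    x1, x2 = x
    return np.array([x1 - x1 * x2 - 0.4 * x1 * u,
                     -x2 + x1 * x2 - 0.2 * x2 * u])

def simulate(sigma, x0=(0.5, 0.7), tf=12.0, h=1e-3):
    x, t, cost = np.array(x0), 0.0, 0.0
    while t < tf:
        u = sigma(t)
        cost += h * ((x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2)
        x = x + h * mode(x, u)
        t += h
    return x, cost

# placeholder signal: fish during the middle third of the horizon
x_final, J = simulate(lambda t: 1 if 4.0 <= t < 8.0 else 0)
print(x_final, J)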
CONCLUSIONS AND FUTURE WORK In this paper, we have developed a new method for solving the optimal control problem of switched nonlinear systems based on a polynomial approach. First, we transform the original problem into a polynomial system, which is able to mimic the switching behavior with a continuous polynomial representation. Next, we transform the polynomial problem into a relaxed convex problem using the method of moments. From a theoretical point of view, we have provided sufficient conditions for the existence of the minimizer by using particular features of the relaxed, convex formulation. Even in the absence of classical minimizers of the switched system, the solution of its relaxed formulation provides minimizers. We have introduced the moment approach as a computational useful tool to solve this problem, which has been illustrated by means of a classical example used in switched systems. As a future work, the algorithm can be extended to the case when an external control input and the switching signal should be obtained. Introduction The ). These methods exploit the communication capabilities of the agents to coordinate their decisions based on the information received from their neighbours. Fully decentralised methodologies have important advantages, among which we highlight the increase of the autonomy and resilience of the whole system since the dependence on a central authority is avoided. In this paper, we propose a distributed resource allocation algorithm that does not require a central coordinator. An important characteristic of our method is the capability of handling lower bounds on the decision variables. This feature is crucial in a large number of practical applications, e.g. in [START_REF] Conrad | Resource economics[END_REF], Pantoja and Quijano (2012), and Lee et al. (2016), where it is required to capture the non-negativity of the resource allocated to each entity. We use a Lyapunov-based analysis in order to prove that the proposed algorithm asymptotically converges to the optimal solution under some mild assumptions related to the convexity of the cost function, and the connectivity of the graph that represents the communication topology. In order to illustrate our theoretical results, we perform some simulations and compare our method with other techniques reported in the literature. Finally, we present two engineering applications of the proposed algorithm. The first one seeks to improve the energy efficiency in large-scale air-conditioning systems. The second one is related to the distributed computation of the Euclidean projection onto a given set. Our approach is based on a continuous time version of the centre-free algorithm presented in Xiao and Boyd (2006). The key difference is that the method in Xiao and Boyd (2006) does not allow the explicit inclusion of lower bounds on the decision variables, unless they are added by means of barrier functions (either logarithmic or exact; [START_REF] Cherukuri | Distributed generator coordination for initialization and anytime optimization in economic dispatch[END_REF]. The problem of using barrier functions is that they can adversely affect the convergence time (in the case of using exact barrier functions) and the accuracy of the solution (in the case of using classic logarithmic barrier functions), especially for large-scale problems [START_REF] Jensen | Operations research models and methods[END_REF]. There are other methods that consider lower bound constraints in the problem formulation. 
For instance, Dominguez-Garcia, Cady, and Hadjicostis ( 2012 2009) use consensus steps to refine an estimation of the system state, while in our approach, consensus is used to equalise a quantity that depends on both the marginal cost perceived by each agent in the network and the Karush-Kuhn-Tucker (KKT) multiplier related to the corresponding resource's lower bound. In this regard, it is worth noting that the method studied in this paper requires less computational capability than the methods mentioned above. Finally, there are other techniques based on game theory and mechanism design (Kakhbod & Teneketzis, 2012; Sharma & Teneketzis, 2009) that decompose and solve resource allocation problems. Nonetheless, those techniques need that each agent broadcasts a variable to all the other agents, i.e. a communication topology given by a complete graph is required. In contrast, the method developed in this paper only uses a communication topology given by a connected graph, which generally requires lower infrastructure. The remainder of this paper is organised as follows. Section 2 shows preliminary concepts related to graph theory. In Section 3, the resource allocation problem is stated. Then, in Section 4, we present our distributed algorithm and the main results on convergence and optimality. A comparison with other techniques reported in the literature is performed in Section 5. In Section 6, we describe two applications of the proposed method: (i) the optimal chiller loading problem in large-scale airconditioning systems, and (ii) the distributed computation of Euclidean projections. Finally, in Sections 7 and 8, arguments and conclusions of the developed work are presented. Preliminaries First, we describe the notation used throughout the paper and presents some preliminary results on graph theory that are used in the proofs of our main contributions. In the multi-agent framework considered in this article, we use a graph to model the communication network that allows the agents to coordinate their decisions. A graph is mathematically represented by the pair G = (V, E ), where V = {1, . . . , n} is the set of nodes, and E ⊆ V × V is the set of edges connecting the nodes. G is also characterised by its adjacency matrix A = [a i j ]. The adjacency matrix A is an n × n non-negative matrix that satisfies: a ij = 1 if and only if (i, j) ∈ E, and a ij = 0 if and only if (i, j) / ∈ E. Each node of the graph corresponds to an agent of the multi-agent system, and the edges represent the available communication channels (i.e. (i, j) ∈ E if and only if agents i and j can share information). We assume that there is no edges connecting a node with itself, i.e. a ii = 0, for all i ∈ V; and that the communication channels are bidirectional, i.e. a ij = a ji . The last assumption implies that G is undirected. Additionally, we denote by N i = { j ∈ V : (i, j) ∈ E}, the set of neighbours of node i, i.e. the set of nodes that are able to receive/send information from/to node i. Let us define the n × n matrix L(G) = [l i j ], known as the graph Laplacian of G, as follows: l i j = ⎧ ⎨ ⎩ j∈V a i j if i = j -a i j if i ̸ = j. (1) Properties of L(G) are related to connectivity characteristics of G as shown in the following theorem. We remark that a graph G is said to be connected if there exists a path connecting any pair of nodes. From Equation ( 1), it can be verified that L(G)1 = 0, where 1 = [1, . . . , 1] ⊤ , 0 = [0, . . . , 0] ⊤ . A consequence of this fact is that L(G) is a singular matrix. 
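A small numerical sketch of Equation (1) and of this singularity, in Python: the adjacency matrix and graph Laplacian are built from an edge list (a placeholder path topology), and the properties L(G)1 = 0 and rank n − 1 are checked.

import numpy as np

# Adjacency matrix and graph Laplacian of Equation (1) for an undirected graph
# given by an edge list (placeholder topology: a path on 4 nodes).
n = 4
edges = [(0, 1), (1, 2), (2, 3)]

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0          # undirected: a_ij = a_ji, a_ii = 0

L = np.diag(A.sum(axis=1)) - A       # l_ii = sum_j a_ij, l_ij = -a_ij (i != j)

print(L @ np.ones(n))                # L(G) 1 = 0, so L(G) is singular
print(np.linalg.matrix_rank(L))      # rank n - 1 = 3 for a connected graph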
However, we can modify L(G) to obtain a nonsingular matrix as shown in the following lemma. Lemma 2.1: Let L k r (G) ∈ R (n-1 )×n be the submatrix obtained by removing the kth row of the graph Laplacian L(G), and let L k (G) ∈ R (n-1)×(n-1) be the submatrix obtained by removing the kth column of L k r (G). If G is connected, then L k (G) is positive definite. Furthermore, the inverse matrix of L k (G) satisfies (L k (G)) -1 l k r k = -1, where l k r k is the kth column of the matrix L k r (G). Proof: First, notice that L(G) is a symmetric matrix because G is an undirected graph. Moreover, notice that according to Equation (1), L(G) is diagonally dominant with non-negative diagonal entries. The same holds for L k (G) since this is a sub-matrix obtained by removing the kth row and column of L(G). Thus, to show that L k (G) is positive definite, it is sufficient to prove that L k (G) is nonsingular. According to Theorem 2.1, since G is connected, L(G) has exactly n -1 linearly independent columns (resp. rows). Let us show that the kth column (resp. row) of L(G) can be obtained by a linear combination of the other columns (resp. rows), i.e. the kth column (resp. row) is not linearly independent of the rest of the columns (resp. rows). Since L(G)1 = 0, notice that l ik = -j∈V, j̸ =k l i j , for all i ∈ V, i.e. the kth column can be obtained by a linear combination of the rest of the columns. Furthermore, since L(G) is a symmetric matrix, the same occurs with the kth row. Therefore, the submatrix L k (G) is nonsingular since its n -1 columns (resp. rows) are linearly independent. Now, let us prove that (L k (G)) -1 l k r k = -1. To do so, we use the fact that (L k (G)) -1 L k (G) = I, where I is the identity matrix. Hence, by the definition of matrix multiplication, we have that n-1 m=1 lk im l k m j = 1 if i = j 0 if i ̸ = j , (2) where l k i j and lk i j are the elements located in the ith row and jth column of the matrices L(G) and L k (G) -1 , respectively. Thus, n-1 m=1 lk im l k mi = 1, for all i = 1, . . . n -1. ( 3 ) Let l k r k m be the mth entry of the vector l k r k . Notice that, according to the definition of L k (G) and since L(G)1 = 0, l k mi = -n-1 j=1, j̸ =i l k m j -l k r k m . Replacing this value in Equation (3), we obtain - n-1 j=1, j̸ =i n-1 m=1 lk im l k m j - n-1 m=1 lk im l kr km = 1, for all i = 1, . . . n -1. According to Equation ( 2), n-1 j=1, j̸ =i n-1 m=1 lk im l k m j = 0. This implies that n-1 m=1 lk im l k r k m = -1, for all i = 1, … , n -1. Therefore, L k (G) -1 l k r k = -1. Theorem 2. 1 and Lemma 2.1 will be used in the analysis of the method proposed in this paper. Problem statement In general terms, a resource allocation problem can be formulated as follows (Patriksson, 2008;Patriksson & Strömberg, 2015): min x φ(x) := n i=1 φ i (x i ) (4a) subject to n i=1 x i = X (4b) x i ≥ x i , for all i = 1, . . . , n, (4c) where x i ∈ R is the resource allocated to the ith zone; x = [x 1 , … , x n ] ; φ i : R → R is a strictly convex and differentiable cost function; X is the available resource; and x i , is the lower bound of x i , i.e. the minimum amount of resource that has to be allocated in the ith zone. Given the fact that we are interested in distributed algorithms to solve the problem stated in Equation ( 4), we consider a multi-agent network, where the ith agent is responsible for managing the resource allocated to the ith zone. 
Moreover, we assume that the agents have limited communication capabilities, so they can only share information with their neighbours. This constraint can be represented by a graph G = {V, E} as it was explained in Section 2. Avoiding the individual inequality constraints (4c), KKT conditions establish that at the optimal solution x * = [x * 1 , . . . , x * n ] ⊤ of the problem given in Equation (4a-4b), the marginal costs φ ′ i (x i ) = dφ i dx i must be equal, i.e. φ ′ i (x * i ) = λ, for all i = 1, … , n, where λ ∈ R. Hence, a valid alternative to solve (4a-4b) is the use of consensus methods. For instance, we can adapt the algorithm presented in Xiao and Boyd (2006), which is described as follows: ẋi = j∈N i φ ′ j (x j ) -φ ′ i (x i ) , for all i ∈ V. ( 5 ) This algorithm has two main properties: (i) at equilibrium, φ ′ i (x * i ) = φ ′ j (x * j ) if the nodes i and j are connected by a path; (ii) n i=1 x * i = n i=1 x i (0), where x i (0) is the initial condition of x i . Therefore, if the graph G is connected and the initial condition is feasible (i.e. n i=1 x i (0) = X), x asymptotically reaches the optimal solution of (4a-4b) under (5). However, the same method cannot be applied to solve (4) (the problem that considers lower bounds in the resource allocated to each zone) since some feasibility issues related with the constraints (4c) arise. In the following section, we propose a novel method that extends the algorithm in Equation ( 5) to deal with the individual inequality constraints given in Equation (4c). Centre-free resource allocation algorithm Resource allocation among a subset of nodes in a graph First, we consider the following subproblem: let G = {V, E} be a graph comprised by a subset of active nodes V a and a subset of passive nodes V p , such that V a V p = V. A certain amount of resource X has to be split among those nodes to minimise the cost function φ(x) subject to each passive node is allocated with its corresponding lower bound x i . Mathematically, we formulate this subproblem as: min x φ(x) (6a) subject to n i=1 x i = X (6b) x i = x i , for all i ∈ V p . (6c) Feasibility of ( 6) is guaranteed by making the following assumption. Assumption 4.1: At least one node is active, i.e. V a ̸ = ∅. According to KKT conditions, the active nodes have to equalise their marginal costs at the optimal solution. Therefore, a consensus among the active nodes is required to solve (6). Nonetheless, classic consensus algorithms, as the one given in Equation ( 5), cannot be used directly. For instance, if all the nodes of G apply ( 5) and G is connected, the marginal costs of both passive and active nodes are driven to be equal in steady state. This implies that the resource allocated to passive nodes can violate the constraint (6c). Besides, if the resource allocated to passive nodes is forced to satisfy (6c) by setting x * i = x i , for all i ∈ V p , there is no guarantee that the new solution satisfies (6b). Another alternative, is to apply (5) to only active nodes (in this case, the neighbourhood of node i ∈ V a in Equation ( 5) has to be taken as { j ∈ V a : (i, j) ∈ E}, and the initial condition must satisfy i∈V a x i (0) = X -i∈V p x i ). However, the sub-graph formed by the active nodes is not necessarily connected although G is connected. Hence, marginal cost of active nodes are not necessarily equalised at equilibrium, which implies that the obtained solution is sub-optimal. In conclusion, modification of (5) to address ( 6) is not trivial. 
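Before presenting that extension, a minimal Python sketch of the baseline marginal-cost consensus (5) on a small placeholder instance (quadratic costs on a path graph) illustrates its two properties: the total resource is preserved and the marginal costs equalise. Nothing in (5), however, enforces the lower bounds of (4c).

import numpy as np

# Baseline consensus dynamics (5): dx_i/dt = sum_{j in N_i} (phi'_j - phi'_i),
# with quadratic placeholder costs phi_i(x_i) = a_i * x_i^2 on a path graph.
a = np.array([1.0, 2.0, 4.0, 8.0])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dphi = lambda x: 2.0 * a * x                     # marginal costs

x = np.array([0.25, 0.25, 0.25, 0.25])           # feasible: sums to X = 1
h = 1e-3
for _ in range(20000):
    mc = dphi(x)
    x = x + h * np.array([sum(mc[j] - mc[i] for j in neighbors[i])
                          for i in range(len(x))])

print(x, x.sum(), dphi(x))   # sum stays 1; marginal costs (nearly) equal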
In order to deal with this problem, we propose the following algorithm: ẋi = j∈N i (y j -y i ), for all i ∈ V (7a) ẋi = (x i -x i ) + j∈N i (y j -y i ), for all i ∈ V p (7b) y i = φ ′ i (x i ) if i ∈ V a φ ′ i (x i ) + xi if i ∈ V p . (7c) In the same way as in (5), the variables {x i , i ∈ V} in Equation ( 7) correspond to the resource allocated to both active and passive nodes. Notice that we have added auxiliary variables { xi , i ∈ V p } that allow the passive nodes to interact with their neighbours taking into account the constraint (6c). On the other hand, the term j∈N i (y jy i ), in Equations (7a)-(7b), leads to a consensus among the elements of the vector y = [y 1 , … , y n ] , which are given in Equation (7c). For active nodes, y i only depends on the marginal cost φ ′ i (x i ), while for passive nodes, y i depends on both the marginal cost and the state of the auxiliary variable xi . Therefore, if the ith node is passive, it has to compute both variables x i and xi . Furthermore, it can be seen that, if all the nodes are active, i.e. (V a = V), then the proposed algorithm becomes the one stated in Equation (5). Notice that the ith node only needs to know y i and the values {y j : j ∈ N i } to compute j∈N i (y j -y i ) in (7a)-(7b). In other words, L(G)y = [ j∈N 1 (y j -y 1 ), . . . , j∈N n (y j -y n )] ⊤ is a distributed map over the graph G [START_REF] Cortés | Distributed algorithms for reaching consensus on general functions[END_REF]. This implies that the dynamics given in Equation ( 7) can be computed by each node using only local information. In fact, the message that the ith node must send to its neighbours is solely composed by the variable y i . .. Feasibility Let us prove that, under the multi-agent system proposed in Equation ( 7), x(t) satisfies the first constraint of the problem given by Equation ( 6), for all t ࣙ 0, provided that n i=1 x i (0) = X. Lemma 4.1: The quantity n i=1 x i (t ) is invariant under Equation (7), i.e. if n i=1 x i (0) = X, then n i=1 x i (t ) = X, for all t ࣙ 0. Proof: It is sufficient to prove that ˙ = 0, where = n i=1 x i . Notice that ˙ = n i=1 ẋi = 1 ⊤ ẋ, where ẋ = [ ẋ1 , . . . , ẋn ] ⊤ . Moreover, according to Equation ( 7), 1 ⊤ ẋ = -1 ⊤ L(G)y. Since G is undirected, 1 ⊤ L(G) = L(G)1 = 0. Therefore, ˙ = 0. The above lemma does not guarantee that x(t) is always feasible because of the second constraint in Equation ( 6), i.e. x i = x i , for all i ∈ V p . However, it is possible to prove that, at equilibrium, this constraint is properly satisfied. .. Equilibrium point The next proposition characterises the equilibrium point of the multi-agent system given in Equation (7). Proposition 4.1: If G is connected, the system in Equation (7) has an equilibrium point x * , { x * i , i ∈ V p }, such that: φ ′ i (x * i ) = λ, for all i ∈ V a , where λ ∈ R is a constant; and x * i = x i , for all i ∈ V p . Moreover, x * i = λ -φ ′ i (x * i ), for all i ∈ V p . Proof: Let x * , { x * i , i ∈ V p } be the equilibrium point of Equation (7). Since G is connected by assumption, it follows from Equation (7a) that y * i = λ, for all i ∈ V, where λ is a constant. Thus, y * i = φ ′ i (x * i ) if i ∈ V a , and y * i = φ ′ i (x * i ) + x * i , if i ∈ V p . Hence, φ ′ i (x * i ) = λ, for all i ∈ V a , and x * i = λ -φ ′ i (x * i ) , for all i ∈ V p . 
Moreover, given the fact that j∈N i (y * j -y * i ) = 0, it follows from Equation (7b) that x * i = x i , for all i ∈ V p Remark 4.1: Proposition 4.1 states that, at the equilibrium point of ( 7), the active nodes equalise their marginal costs, while each passive node is allocated with an amount of resource equal to its corresponding lower bound. In conclusion, if n i=1 x * i = X, then it follows from Proposition 4.1, that x * minimises the optimisation problem given in Equation (6). Additionally, notice that the values { x * i , i ∈ V p } are equal to the KKT multipliers associated with the constraint (6c). .. Convergence Let us prove that the dynamics in Equation ( 7) converge to x * , { x * i , i ∈ V p }, provided that each φ i (x i ) is strictly convex. Proposition 4.2: Assume that φ i (x i ) is a strictly convex cost function, for all i ∈ V. If G is connected, n i=1 x i (0) = X, and Assumption 4.1 holds, then x(t) converges to x * under Equation (7), where x * is the solution of the optimisation problem stated in Equation (6), i.e. x * is the same given in Proposition 4.1. Furthermore, xi converges to x * i , for all i ∈ V p . Proof: According to Lemma 4.1, since n i=1 x i (0) = X, then x(t) satisfies the first constraint of the problem stated in Equation ( 6), for all t ࣙ 0. Therefore, it is sufficient to prove that the equilibrium point x * , { x * i , i ∈ V p } (which is given in Proposition 4.1) of the system proposed in Equation ( 7) is asymptotically stable (AS). In order to do that, let us express our multi-agent system in error coordinates, as follows: ė = -L(G)e y ėi = e i -L(G)e y i , for all i ∈ V p e y i = φ ′ i (x i ) -φ ′ i (x * i ) if i ∈ V a φ ′ i (x i ) -φ ′ i (x * i ) + êi if i ∈ V p , (8) where L(G) is the graph Laplacian of G; e i = x i -x * i , and e y i = y i -y * i , for all i ∈ V; êi = xix * i , for all i ∈ V p ; e = [e 1 , … , e n ] ; e y = [e y 1 , . . . , e y n ] ⊤ ; and L(G)e y i represents the ith element of the vector L(G)e y . Since Assumption 4.1 holds, V a ̸ = ∅. Let k be an active node, i.e. k ∈ V a , and let e k , e k y be the vectors obtained by removing the kth element from vectors e and e y , respectively. We notice that, according to Lemma 4.1, e k (t) = i ࢠ ν, i ࣔ k e i (t), for all t ࣙ 0. Therefore, Equation ( 8) can be expressed as ėk = -L k (G)e k y -l k r k e y k e k = -i∈ν,i̸ =k e i ėi = e i -L k (G)e k y + l k r k e y k i , for all i ∈ V p e y i = φ ′ i (x i ) -φ ′ i (x * i ) if i ∈ V a φ ′ i (x i ) -φ ′ i (x * i ) + êi if i ∈ V p , (9) where L k (G) and l k r k are defined in Lemma 2.1. In order to prove that the origin of the above system is AS, let us define the following Lyapunov function (adapted from Obando, Quijano, & Rakoto-Ravalontsalama, 2014): V = 1 2 e k ⊤ L k (G) -1 e k + 1 2 i∈V p (e i -êi ) 2 . ( 10 ) The function V is positive definite since G is connected (the reason of this fact is that, according to Lemma 2.1, L k (G) and its inverse are positive definite matrices if G is connected). The derivative of V along the trajectories of the system stated in Equation ( 9) is given by, V = -e k ⊤ e k y -e k ⊤ (L(G)) -1 l k r k e y k - i∈V p e i (e i -êi ) Taking into account that L k (G) -1 l k r k = -1 (cf. 
Lemma 2.1), we obtain V = -e k ⊤ e k y + e y k i∈V,i̸ =k e i - i∈V p e i (e i -êi ) = - n i=1 e i (φ ′ i (x i ) -φ ′ i (x * i )) - i∈V p e i êi + i∈V p e i ( êi -e i ) = - n i=1 (x i -x * i )(φ ′ i (x i ) -φ ′ i (x * i )) - i∈V p e 2 i , where φ ′ i is strictly increasing given the fact that φ i is strictly convex, for all i ∈ V. Therefore, (x i - x * i )(φ ′ i (x i ) -φ ′ i (x * i )) ≥ 0, for all i ∈ V, and thus V ≤ 0. Since V does not depend on { êi , i ∈ V p }, it is negative semidefinite. Let S = {{e i , i ∈ V}, { êi , i ∈ V p } : V = 0}, i.e. S = {e i , i ∈ V}, { êi , i ∈ V p } : e i = 0, for all i ∈ V . Given the fact that G is connected and V ̸ = V p (by Assumption 4.1), then ė = 0 iff e y = 0 (see Equation ( 8)). Therefore, the only solution that stays identically in S is the trivial solution, i.e. e i (t) = 0, for all i ∈ V, êi (t ) = 0, for all i ∈ V p . Hence, we can conclude that the origin is AS by applying the Lasalle's invariance principle. In summary, we have shown that the algorithm described in Equation ( 7) asymptotically solves the subproblem in Equation ( 6), i.e. (7) guarantees that the resource allocated to each passive node is equal to its corresponding lower bound, while the remaining resource X -i∈V p x i is optimally allocated to active nodes. Optimal resource allocation with lower bounds Now, let us consider our original problem stated in Equation (4), i.e. the resource allocation problem that includes lower bound constraints. Let x * = [x * 1 , . . . , x * n ] ⊤ be the optimal solution of this problem. Notice that, if we know in advance which nodes will satisfy the constraint (4c) with strict equality after making the optimal resource allocation process, i.e. I := {i ∈ V : x * i = x i }, we can mark these nodes as passive and reformulate (4) as a subproblem of the form (6). Based on this idea, we propose a solution method for (4), which is divided in two stages: in the first one, the nodes that belong to I are identified and marked as passive; in the second one, the resulting subproblem of the form ( 6) is solved by using (7). Protocol (7) can be also used in the first stage of the method as follows: in order to identify the nodes that will satisfy (4c) with strict equality at the optimal allocation, we start marking all nodes as active and apply the resource allocation process given by (7). The nodes that are allocated with an amount of resource below their lower bounds at equilibrium are marked as passive, and then ( 7) is newly applied (in this way, passive nodes are forced to meet (4c)). This iterative process is performed until all nodes satisfy their lower bound constraints. Notice that the last iteration of this procedure corresponds to solve a subproblem of the form (6) where the set of passive nodes is equal to the set I. Therefore, this last iteration is equivalent to the second stage of the proposed method. Summarising, our method relies on an iterative process that uses the continuous-time protocol (7) as a subroutine. The main idea of this methodology is to identify in each step the nodes that have an allocated resource out of their lower bounds. These nodes are marked as passive, so they are forced to satisfy their constraints in subsequent iterations, while active nodes seek to equalise their marginal costs using the remaining resource. In the worst case scenario, the classification between active and passive nodes requires |V| iterations, where |V| is the number of nodes in the network. 
This fact arises when only one active node becomes passive at each iteration. The proposed method is formally described in Algorithm 1. Notice that this algorithm is fully decentralised since Steps 4-6 can be computed by each agent using only local information. Step 4 corresponds to solve Equation (7), while Steps 5 and 6 describe the conditions for converting an active node into passive. Let us note that Steps 4-6 have to be performed |V| times since we are considering the worst case scenario. Therefore, each agent needs to know the total number of nodes in the network. This requirement can be computed in a distributed way by using the method proposed in Garin and Schenato (2010, p. 90). We also notice the fact that the agents have to be synchronised (as usual in several distributed algorithms; [START_REF] Cortés | Distributed algorithms for reaching consensus on general functions[END_REF][START_REF] Garin | A survey on distributed estimation and control applications using linear consensus algorithms[END_REF]Xiao & Boyd, 2006) in order to apply the Step 4 of Algorithm 1, i.e. all agents must start solving Equation ( 7) at the same time. Algorithm 1: Resource allocation with lower bounds Input: -Parameters of the problem in Equation ( 4). -An initial value x (0) , such that n i=1 x (0) i =X. Output: Optimal allocation x * 1 Mark all nodes as active, i.e. Ṽa,0 ← V, Ṽp,0 ← ∅.; 2 xi,0 ← x (0) i , for all i ∈ V.; 3 for l ← 1 to |V| do 4 xi,l ← x i (t l ), for all i ∈ V, where x i (t l ) is the solution of Equation (7a) at time t l , with initial conditions x(0) = [ x1,l-1 , . . . , xn,l-1 ] ⊤ , V a = Ṽa,l-1 , V p = Ṽp,l-1 , and { xi (0) = 0, ∀i ∈ V p }.; 5 Ṽp,l ← Ṽp,l-1 {i ∈ Ṽa,l-1 : xi < x i }, and Ṽa,l ← Ṽa,l-1 \{i ∈ Ṽa,l-1 : xi < x i }.; 6 x * ← [ x1,l , . . . , xn,l ] ⊤ .; 7 return x * ; According to the reasoning described at the beginning of this subsection, we ideally require to know the steady-state solution of Equation ( 7) at each iteration of Algorithm 1 (since we need to identify which nodes are allocated with an amount of resource below their lower bounds in steady state). This implies that the time t l in Step 4 of Algorithm 1 goes to infinity. Under this requirement, each iteration would demand infinite time and the algorithm would not be implementable. Hence, to relax the infinite time condition, we state the following assumption on the time t l . Assumption 4.2: Let x * i,l be the steady state of x i (t) under Equation (7), with initial conditions x(0) = xi,l-1 , V a = Ṽa,l-1 , V p = Ṽp,l-1 , and { xi (0) = 0, ∀i ∈ V p } 1 . For each l = 1, . . . , |V| -1, the time t l satisfies the following condition: x i (t l ) < x i if and only if x * i,l < x i , for all i ∈ V. According to assumption 4.2, for the first |V| -1 iterations, we only need a solution of (7) that is close enough to the steady-state solution. We point out the fact that, if the conditions of Proposition 4.2 are met in the lth iteration of Algorithm 1, then x i (t) asymptotically converges to x * i,l , for all i ∈ V, under Equation (7). Therefore, Assumption 4.2 is satisfied for large values of t 1 , . . . , t |V|-1 . Taking into account all the previous considerations, the next theorem states our main result regarding the optimality of the output of Algorithm 1. Theorem 4.1: Assume that G is a connected graph. Moreover, assume that φ i is a strictly convex function for all i = 1, … , n. If t 1 , . . . 
, t |V|-1 satisfy Assumption 4.2, and the problem stated in Equation (4) is feasible, then the output of Algorithm 4 tends to the optimal solution of the problem given in Equation (4) as t |V| → ∞. Proof: The ith component of the output of Algorithm 1 is equal to xi,|V| = x i (t |V| ), where x i (t |V| ) is the solution of Equation (7a) at time t |V| , with initial conditions [ x1,|V|-1 , . . . , xn,|V|-1 ] ⊤ , V a = Ṽa,|V| , and V p = Ṽp,|V| . Hence, it is sufficient to prove that {x * 1,|V| , . . . , x * n,|V| } solves the problem in Equation ( 4). In order to do that, let us consider the following premises (proof of each premise is written in brackets). P1: { x1,l , . . . , xn,l } satisfies (4b), for all l = 1, . . . , |V| (this follows from Lemma 4.1, and form the fact that n i=1 xi,0 = X). P2: x * i,l = x i , for all i ∈ Ṽp,l-1 , and for all l = 1, . . . , |V| (this follows directly from Proposition 4.2). P3: Ṽp,l = Ṽp,l-1 {i ∈ Ṽa,l-1 : x * i,l < x i }, and Ṽa,l = Ṽa,l-1 \{i ∈ Ṽa,l-1 : x * i,l < x i }, for all l = 1, . . . , |V| (this follows from Step 5 of Algorithm 1, and from Assumption 4.2). P4: If for some l, Ṽp,l = Ṽp,l-1 , then Ṽp,l+ j = Ṽp,l-1 , for all j = 0, . . . , |V| -l (this can be seen from the fact that if the set of passive nodes does not change from one iteration to the next, the steady state of Equation (7a) is the same for both iterations). P5: Ṽa,l Ṽp,l = V, for all l = 1, . . . , |V| (from P3, we know that Ṽa,l Ṽp,l = Ṽa,l-1 Ṽp,l-1 , for all l = 1, . . . , |V|. Moreover, given the fact that Ṽp,0 = ∅, and Ṽa,0 = V, (see step 1 of Algorithm 1) we can conclude P5). P6: Since the problem in Equation ( 4) is feasible by assumption, then | Ṽp,l | < |V|, for all l = 1, . . . , |V| (the fact that | Ṽp,l | ≤ |V|, for all l = 1, . . . , V, follows directly from P5. Let us prove that | Ṽp,l | ̸ = |V|, for all l = 1, . . . , V. We proceed by contradiction: Assume that there exists some l, such that | Ṽp,l-1 | < |V| and | Ṽp,l | = |V|. Hence, from P2 and P3, we know that x * i,l ≤ x i , for all i ∈ V; moreover, {i ∈ Ṽa,l-1 : x * i,l < x i } ̸ = ∅. Therefore, n i=1 x * i,l < n i=1 x i . According to P1, we know that n i=1 x * i,l = X; thus, X < n i=1 x i , which contradicts the feasibility assumption). P7: {x * 1,|V| , . . . , x * n,|V| } satisfies the constraints (4c) (in order to prove P7, we proceed by contradiction: assume that {x * 1,|V| , . . . , x * n,|V| } does not satisfy the constraints (4c). Since P2 holds, this assumption implies that {i ∈ Ṽa,|V-1| : x * i,|V| < x i } ̸ = ∅. Therefore, Ṽp,|V| ̸ = Ṽp,|V|-1 (see P3). Using P4, we can conclude that Ṽp,|V| ̸ = Ṽp,|V|-1 ̸ = • • • ̸ = Ṽp,0 = ∅, i.e. {i ∈ Ṽa,|V|-j : x * i,|V|-j+1 < x i } ̸ = ∅, for all j = 1, . . . , |V|. Thus, according to P3, | Ṽp,|V| | > | Ṽp,|V|-1 | > • • • > | Ṽp,1 | > 0. Hence, | Ṽp,|V| | ≥ |V|, which contradicts P6). P8: i∈ Ṽa,l x * i,l ≥ i∈ Ṽa,l x * i,l+1 (we prove P8 as follows: using P1 and the result in Lemma 4.1, we know that i∈V x * i,l = i∈V x * i,l+1 = X. Moreover, according to P5, V can be expressed as V = Ṽa,l Ṽp,l , where Ṽp,l-1 ⊂ Ṽp,l (see P3). Thus, we have that i∈ Ṽa,l x * i,l + i∈ Ṽp,l ,i / ∈ Ṽp,l-1 x * i,l + i∈ Ṽp,l-1 x * i,l = i∈ Ṽa,l x * i,l+1 + i∈ Ṽp,l ,i / ∈ Ṽp,l-1 x * i,l+1 + i∈ Ṽp,l-1 x * i,l+1 . Furthermore, since P2 holds, we have that i∈ Ṽa,l x * i,l + i∈ Ṽp,l ,i / ∈ Ṽp,l-1 x * i,l + i∈ Ṽp,l-1 x i = i∈ Ṽa,l x * i,l+1 + i∈ Ṽp,l ,i / ∈ Ṽp,l-1 x i + i∈ Ṽp,l-1 x i . 
Therefore, i∈ Ṽa,l x * i,l = i∈ Ṽa,l x * i,l+1 + i∈ Ṽp,l ,i / ∈ Ṽp,l-1 (x i -x * i,l ), where x i -x * i,l > 0, for all i ∈ Ṽp,l , i / ∈ Ṽp,l-1 (according to P3). Hence, we can conclude P8). P9: There exists k, such that k ∈ Ṽa,l , for all l = 1, . . . , |V| (in order to prove P9, we use the fact that, if k ∈ Ṽa,l , then k ∈ Ṽa,lj , for all j = 1, … , l (this follows from P3). Moreover, according to P5 and P6, | Ṽa,|V| | ̸ = 0; hence, there exists k, such that k ∈ Ṽa,|V| . Therefore, P9 holds). P9 guarantees that Assumption 4.1 is satisfied at each iteration. P10: φ ′ i (x * i,l ) ≥ φ ′ i (x * i,l+1 ), for all i ∈ Ṽa,l (we prove P10 by contradiction: assume that φ ′ i (x * i,l ) < φ ′ i (x * i,l+1 ), for some i ∈ Ṽa,l . According to Proposition 4.2, and since P1 and P9 hold, x * i,l has the characteristics given in Proposition 4.1, for all i ∈ V, and for all l = 1, . . . , |V|. Hence, φ ′ i (x * i,l ) has the same value for all i ∈ Ṽa,l-1 , and φ ′ i (x * i,l+1 ) has the same value for all i ∈ Ṽa,l . Moreover, since Ṽa,l ⊂ Ṽa,l-1 (according to P3), we have that φ ′ i (x * i,l ) < φ ′ i (x * i,l+1 ), for all i ∈ Ṽa,l . Thus, x * i,l < x * i,l+1 , for all i ∈ V a,l , because φ ′ i is strictly increasing (this follows from the fact that φ i is strictly convex by assumption). Therefore, i∈ Ṽa,l x * i,l < i∈ Ṽa,l x * i,l+1 , which contradicts P8). Now, let us prove that {x * 1,|V| , . . . , x * n,|V| } solves the Problem in Equation (4). First, the solution {x * 1,|V| , . . . , x * n,|V| } is feasible according to P1 and P7. On the other hand, from P9, it is known that ∃k : k ∈ Ṽa,l , for all l = 1, . . . , |V|. Let φ ′ k (x * k,|V| ) = λ, where λ ∈ R. Moreover, let us define V 0 = { j ∈ V : x * i,|V| > x i }, and V 1 = { j ∈ V : x * i,|V| = x i }. If i ∈ V 0 , then i ∈ Ṽa,|V|-1 (given the fact that, if i / ∈ Ṽa,|V-1| ⇒ i ∈ Ṽp,|V-1| ⇒ x * i,|V| = x i ⇒ i / ∈ V 0 ). Hence, φ ′ i (x * i,|V| ) = φ ′ k (x * k,|V| ) = λ (this follows from the fact that φ ′ j (x * j,l ) has the same value for all j ∈ Ṽa,l-1 , which in turn follows directly from step 4 of Algorithm 1, and Proposition 4.2). If i ∈ V 1 , then either i ∈ Ṽa,|V|-1 or i ∈ Ṽp,|V|-1 . In the first case, φ ′ i (x * i,|V| ) = φ ′ k (x * k,|V| ) = λ (following the rea- soning used when i ∈ V 0 ). In the second case, ∃l : i ∈ ( Ṽp,l \ Ṽp,l-1 ); hence, φ ′ i (x * i,l ) = φ ′ k (x * k,l ) (this follows from the fact that, if i ∈ ( Ṽp,l \ Ṽp,l-1 ), then i ∈ Ṽa,l-1 ). Furthermore, since i ∈ ( Ṽp,l \ Ṽp,l-1 ), x * i,l < x i (see P3), and given the fact that φ i is strictly increasing, we have that φ ′ i (x * i,l ) < φ ′ i (x i ). Moreover, according to P10, φ ′ k (x * k,l ) ≥ φ ′ k (x * k,|V| ). Hence, φ ′ i (x i ) > φ ′ k (x * k,|V| ) = λ. In conclusion, if i ∈ V 1 , then φ ′ i (x * i,|V| ) ≥ λ. Thus, we can choose µ i ࣙ 0, for all i ∈ V, such that φ ′ i (x * i,|V| ) -µ i = λ, where µ i = 0 if i ∈ V 0 . Hence, let us note that ∂φ ∂x i | x i =x * i,|V| -µ i -λ = 0, for all i ∈ V, where ∂φ ∂x i | x i =x * i,|V| = φ ′ i (x * i,|V| ). Therefore, {x * 1,|V| , . . . , x * n,|V| , µ 1 , . . . , µ n , -λ} satisfies the KKT conditions for the problem given in Equation ( 4). Furthermore, since φ(x) is a strictly convex function by assumption, then {x * 1,|V| , . . . , x * n,|V| } is the optimal solution to that problem. 
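To make the two-stage procedure concrete, the Python sketch below integrates the protocol (7) for a fixed partition of active and passive nodes and wraps it in the iterative marking loop of Algorithm 1, including the early stopping discussed next. The quadratic costs, lower bounds, graph, and integration parameters are placeholders, and the finite integration time T stands in for the near-steady-state solution required by Assumption 4.2.

import numpy as np

# Sketch of Algorithm 1 with protocol (7) as the inner subroutine.
# Placeholder costs phi_i(x_i) = a_i * x_i^2, lower bounds xmin, path graph.
a = np.array([1.0, 2.0, 4.0, 8.0])
xmin = np.array([0.0, 0.0, 0.15, 0.10])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dphi = lambda x: 2.0 * a * x
n, X = len(a), 1.0

def protocol7(x0, passive, T=30.0, h=1e-3):
    # Integrate (7): consensus on y, auxiliary states xhat for passive nodes.
    x, xhat = x0.copy(), np.zeros(n)
    for _ in range(int(T / h)):
        y = dphi(x) + np.where(passive, xhat, 0.0)                      # (7c)
        lap = np.array([sum(y[j] - y[i] for j in neighbors[i]) for i in range(n)])
        x = x + h * lap                                                 # (7a)
        xhat = xhat + h * np.where(passive, (x - xmin) + lap, 0.0)      # (7b)
    return x

x = np.full(n, X / n)                 # feasible start: sum of x_i(0) equals X
passive = np.zeros(n, dtype=bool)     # all nodes start active
for _ in range(n):                    # at most |V| iterations
    x = protocol7(x, passive)
    violated = (~passive) & (x < xmin)
    if not violated.any():
        break                         # early stopping: all active nodes feasible
    passive |= violated               # mark violating nodes as passive
print(x, x.sum(), dphi(x))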
Early stopping criterion Notice that, if the set of passive nodes does not change in the kth iteration of Algorithm 1 because all active nodes satisfy the lower bound constraints (see step 5), then the steady state solutions x * i,k and x * i,k+1 are the same, for all i ∈ V, which implies that the set of passive nodes also does not change in the (k + 1)th iteration. Following the same reasoning, we can conclude that x * i,k = x * i,k+1 = • • • = x * i,|V| , for all i ∈ V. Therefore, in this case, {x * 1,k , . . . , x * n,k } is the solution of our resource allocation problem. Practically speaking, this implies that Algorithm 1 does not need to perform more iterations after the kth one. Thus, it is possible to implement a flag z * i (in a distributed way) that alerts the agents if all active nodes satisfy the lower bound constraints after step 4 of Algorithm 1. A way to do that is by applying a min-consensus protocol [START_REF] Cortés | Distributed algorithms for reaching consensus on general functions[END_REF] with initial conditions z i (0) = 0 if the node i is active and does not satisfy its lower bound constraint, and z i (0) = 1 otherwise. Hence, notice that our flag z * i (i.e. the result of the min-consensus protocol) is equal to one, for all i ∈ V, only if all the active nodes satisfy the lower bound constraints, which corresponds to the early stopping criterion described above. Simulation results and comparison In this section, we compare the performance of our algorithm with other continuous-time distributed techniques found in the literature. We have selected three techniques that are capable to address nonlinear problems and can handle lower bound constraints: (i) a distributed interior point method (Xiao & Boyd, 2006), (ii) the local replicator equation (Pantoja & Quijano, 2012), and (iii) a distributed interior point method with exact barrier functions [START_REF] Cherukuri | Distributed generator coordination for initialization and anytime optimization in economic dispatch[END_REF]. The first one is a traditional methodology that uses barrier functions; the second one is a novel technique based on population dynamics; and the third one is a recently proposed method that follows the same ideas as the first one, but replaces classic logarithmic barrier functions by exact penalty functions. Below, we briefly describe the aforementioned algorithms. Distributed interior point (DIP) method This algorithm is a variation of the one presented in Equation (5) that includes strictly convex barrier functions to prevent the solution to flow outside the feasible region. The barrier functions b i (x i ) are added to the original cost function as follows: φ b (x) = φ(x) + ϵ n i=1 b i (x i ) b i (x i ) = -ln x i -x i , for all i ∈ V, where φ b (x) is the new cost function, and ϵ > 0 is a constant that minimises the effect of the barrier function when the solution is far from the boundary of the feasible set. With this modification, the distributed algorithm is described by the following equation: ẋi = j∈N i φ ′ b j (x j ) -φ ′ b i (x i ) , for all i ∈ V, (11) where φ ′ b i (x i ) = dφ i dx i -ϵ db i dx i , i.e. φ ′ b i (x i ) is equal to the marginal cost plus a penalty term induced by the derivative of the corresponding barrier function. Local replicator equation (LRE) This methodology is based on the classical replicator dynamics from evolutionary game theory. 
In the LRE, the growth rate of a population that plays a certain strategy only depends on its own fitness function and on the fitness of its neighbours. Mathematically, the LRE is given by

ẋ_i = Σ_{j∈N_i} (x_i - x̲_i)(x_j - x̲_j)(v_i(x_i) - v_j(x_j)), v_i = -φ′_i(x_i), for all i ∈ V, (12)

where x̲_i denotes the lower bound of node i and v_i is the fitness perceived by the individuals that play the ith strategy. In this case, the strategies correspond to the nodes of the network, and the fitness functions to the negative marginal costs (the minus sign appears because replicator dynamics are used to maximise utilities instead of minimising costs). On the other hand, it can be shown that, if the initial condition x(0) is feasible for the problem given in Equation (4), then x(t) remains feasible for all t ≥ 0 under the LRE.

Distributed interior point method with exact barrier functions (DIPe)
This technique follows the same reasoning as the DIP algorithm. The difference is that DIPe uses exact barrier functions [START_REF] Bertsekas | Necessary and sufficient conditions for a penalty method to be exact[END_REF] to guarantee satisfaction of the lower bound constraints. The exact barrier function for the ith node is given by b^e_i(x_i) = (1/ε)[x̲_i - x_i]^+, where [·]^+ = max(·, 0), 0 < ε < 1/(2 max_{x∈F} ∥∇φ(x)∥_∞), and F = {x ∈ R^n : Σ_{i=1}^n x_i = 1, x_i ≥ x̲_i} is the feasible region of x for the problem (4). Using these exact barrier functions, the augmented cost function can be expressed as φ^e_b(x) = φ(x) + Σ_{i=1}^n b^e_i(x_i). The DIPe algorithm is given in terms of the augmented cost function and its generalised gradient ∂φ^e_b(x) = [∂_1 φ^e_b(x), . . . , ∂_n φ^e_b(x)]^⊤ as follows:

ẋ_i ∈ Σ_{j∈N_i} (∂_j φ^e_b(x) - ∂_i φ^e_b(x)), for all i ∈ V, (13)

where ∂_i φ^e_b(x) = {φ′_i(x_i) - 1/ε} if x_i < x̲_i; [φ′_i(x_i) - 1/ε, φ′_i(x_i)] if x_i = x̲_i; {φ′_i(x_i)} if x_i > x̲_i. In [START_REF] Cherukuri | Distributed generator coordination for initialization and anytime optimization in economic dispatch[END_REF], the authors show that the differential inclusion (13) converges to the optimal solution of the problem (4), provided that x(0) is feasible.

Comparison
In order to compare the performance of our algorithm with the three methods described above, we use the following simulation scenario: a set of n nodes connected as in Figure 1 (we use this topology to verify the behaviour of the different algorithms in the face of few communication channels, since previous studies have shown that algorithms' performance decreases with the number of available communication links); a nonlinear cost function φ(x) = Σ_{i=1}^n ( e^{a_i(x_i - b_i)} + e^{-a_i(x_i - b_i)} ), where a_i and b_i are random numbers that belong to the intervals [1, 2] and [-1/2, 1/2], respectively; a resource constraint X = 1; and a set of lower bounds {x̲_i = 0 : i ∈ V}. For each n, we generate 50 problems with the characteristics described above. The four distributed methods are implemented in Matlab employing the solver function ode23s. Moreover, we use the solution provided by a centralised technique as reference. The results on the average percentage decrease in the cost function reached with each algorithm and the average computation time (time taken by each algorithm for solving a problem) are summarised in Table 1. Results of DIPe for 100 and 200 nodes were not computed for practicality, since the time required by this algorithm to solve a 100/200-node problem is very high. We notice that the algorithm proposed in this paper always reaches the maximum reduction, regardless of the number of nodes that comprise the network.
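For concreteness, the following minimal Python/SciPy sketch generates one instance of this benchmark and integrates the DIP dynamics of Equation (11), with SciPy's solve_ivp standing in for MATLAB's ode23s. The network size n = 10, the barrier weight ε, the integration horizon and the solver settings are illustrative assumptions of the sketch, not values taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 10                                    # illustrative network size (the paper goes up to 200 nodes)
a = rng.uniform(1.0, 2.0, n)              # a_i in [1, 2]
b = rng.uniform(-0.5, 0.5, n)             # b_i in [-1/2, 1/2]
X, lb, eps = 1.0, 0.0, 1e-3               # resource, lower bounds, barrier weight (eps is assumed)
nbrs = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]  # path graph of Figure 1

def phi(x):
    # benchmark cost of this section: sum_i exp(a_i(x_i - b_i)) + exp(-a_i(x_i - b_i))
    return np.sum(np.exp(a * (x - b)) + np.exp(-a * (x - b)))

def g(x):
    # marginal of the barrier-augmented per-node cost phi_i(x_i) - eps*ln(x_i - lb)
    return a * (np.exp(a * (x - b)) - np.exp(-a * (x - b))) - eps / (x - lb)

def dip_rhs(t, x):
    # DIP dynamics of Equation (11): consensus on the augmented marginal costs
    gx = g(x)
    return np.array([sum(gx[j] - gx[i] for j in nbrs[i]) for i in range(n)])

sol = solve_ivp(dip_rhs, (0.0, 50.0), np.full(n, X / n), method="LSODA", rtol=1e-8)
x_T = sol.y[:, -1]
print("sum of x_i:", x_T.sum(), "  cost phi(x(T)):", phi(x_T))

Because the right-hand side is a sum of antisymmetric exchange terms over an undirected graph, the total resource Σ_i x_i(t) is conserved along the trajectory, which is why the initial condition only needs to satisfy the resource constraint.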
The same happens with the DIPe algorithm. This is an important advantage of our method compared to other techniques. In contrast, the algorithm based on the LRE performs far from the optimal solution. This unsatisfactory behaviour is due to the small number of links of the considered communication network. In Pantoja and Quijano (2012), the authors prove the optimality of the LRE in problems involving well connected networks; however, they also argue that this technique can converge to suboptimal solutions in other cases. On the other hand, the DIP method provides solutions close to the optimum. Nonetheless, its performance decreases when the number of nodes increases. This tendency is due to the influence of barrier functions on the original problem. Notice that, the larger the number of nodes, the bigger the effect of the barrier functions in Equation (11). Regarding the computation time, although convergence of the proposed method is slower than the one shown by LRE and DIP, it is faster than the convergence of the method based on exact barrier functions, i.e. DIPe. Therefore, among the methods that guarantee optimality of the solution, our technique shows the best convergence speed. Computation time taken by DIPe is affected by the use of penalty terms that generate strong changes in the value of the cost function near to the boundaries of the feasible set. The drastic variations of the generalised gradient of exact barrier functions produces oscillations of numerical solvers around the lower bounds (a visual inspection of the results given in Figure 3 of [START_REF] Cherukuri | Distributed generator coordination for initialization and anytime optimization in economic dispatch[END_REF] confirms this claim). These oscillations are the main responsible for the low convergence speed shown by DIPe. On the other hand, LRE and DIP exhibit the fastest convergence. Hence, LRE and DIP are appealing to be implemented in applications that require fast computation and tolerate suboptimal solutions. Applications This section describes the use of the approach developed in this paper to solve two engineering problems. First, we present an application for sharing load in multiple chillers plants. Although this is not a large-scale application (multi-chiller plants are typically comprised of less than ten chillers; Yu & Chan, 2007), it aims to illustrate the essence of the proposed method and shows algorithm's performance in small-size problems. One of the reasons to use a distributed approach in small-/medium-size systems is due to the need of enhancing systems resilience in the face of central failures (e.g. in multiple chiller plants, central failures can occur due to cyber-attacks (Manic, Wijayasekara, Amarasinghe, & Rodriguez-Andina, 2016) against building management systems (Yu & Chan, 2007)). The second application deals with the distributed computation of the Euclidean projection of a vector onto a given set. Particularly, we use the proposed algorithm as part of a distributed technique that computes optimal control inputs for plants composed of a large number of sub-systems. This application aims to illustrate the performance of the method proposed in this paper when coping with large-scale problems. 
Optimal chiller loading The optimal chiller loading problem in multiple chiller systems arises in decoupled chilled-water plants, which are widely used in large air-conditioning systems [START_REF] Chang | Optimal chilled water temperature calculation of multiple chiller systems using Hopfield neural network for saving energy[END_REF]). The goal is to distribute the cooling load among the chillers that comprise the plant for minimising the total amount of power used by them. For a better understanding of the problem, below we present a brief description of the system. A decoupled chilled-water plant comprised by n chillers is depicted in Figure 2. The purpose of this plant is to provide a water flow f T at a certain temperature T s to the rest of the air-conditioning system. In order to do this task the plant needs to meet a cooling load C L that is given by the following expression: C L = m f T (T r -T s ), (14) where m > 0 is the specific heat of the water, and T r is the temperature of the water returning to the chillers. Since there are multiple chillers, the total cooling load C L is split among them, i.e. C L = n i=1 Q i , where Q i is the cooling power provided by the ith chiller, which, in turn, is given by Q i = m f i (T r -T i ), (15) where f i > 0 and T i are, respectively, the flow rate of chilled water and the water supply temperature of the ith chiller. As it is shown in Figure 2, we have that f T = n i=1 f i . In order to meet the corresponding cooling load, the ith chiller consumes a power P i that can be calculated using the following expression [START_REF] Chang | Optimal chilled water temperature calculation of multiple chiller systems using Hopfield neural network for saving energy[END_REF]: P i = k 0,i + k 1,i m f i T r + k 2,i (m f i T r ) 2 + + k 3,i -k 1,i m f i -k 4,i m f i T r -2k 2,i (m f i ) 2 T r T i + k 5,i + k 6,i m f i + k 2,i (m f i ) 2 T 2 i , (16) where k j, i , for j = 0, … , 6, are constants related to the ith chiller. If we assume that the flow rate f i of each chiller is constant, then P i is a quadratic function of the temperature T i . The optimal chiller loading problem involves the calculation of the chillers' water supply temperatures that meet the total cooling load given in Equation ( 14), and minimise the total amount of power consumed by the chillers, i.e. n i=1 P i . Moreover, given the fact that each chiller has a maximum cooling capacity, we have to consider the following additional constraints: m f i (T r -T i ) ≤ Q i for all i = 1, . . . , n, (17) where Q i is the maximum capacity (rated value) of the ith chiller. Summarising, the optimal chiller loading problem can be expressed as follows: min T 1 ,...,T n n i=1 P i (T i ) s.t. n i=1 m f i (T r -T i ) = C L T i ≥ T r -Q i m f i , for all i = 1, . . . , n. (18) Now, let us consider that we want to solve the aforementioned problem in a distributed way by using a multiagent system, in which each chiller is managed by an agent that decides the value of the water supply temperature. We assume that the ith agent knows (e.g. by measurements) the temperature of the water returning to the chillers, i.e. T r , and the flow rate of chilled water, i.e. f i . Moreover, agents can share their own information with their neighbours through a communication network with a topology given by the graph G. If each P i (T i ) is a convex function, then the problem can be solved by using the method proposed in Algorithm 1 (we take, in this case, x i = f i T i ). 
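To make the change of variables concrete, the sketch below evaluates the chiller power curve of Equation (16) and assembles the data of the standard form (4) under the reading x_i = f_i T_i suggested above. The value of T_r and the coefficients k_{j,i} are placeholders (the actual coefficients are those of Table 2), and the expressions for X and for the lower bounds follow from Equations (14), (15) and (17) after the substitution; this is a sketch of one possible mapping, not code from the paper.

# Minimal sketch of the chiller-loading data in the standard form of Equation (4).
m = 4.19                            # specific heat of water [kW.s/(kg.degC)]
T_r = 18.0                          # return-water temperature [degC] (placeholder value)
f = [65.0, 65.0, 65.0]              # chilled-water flow rates f_i [kg/s]
Q_bar = [1406.8, 1406.8, 1406.8]    # rated cooling capacities [kW]
k = [[100.0, 0.1, 1e-6, 5.0, 1e-3, 2.0, 1e-2]] * 3   # k_{0,i}..k_{6,i} (illustrative numbers)
C_L = 0.6 * sum(Q_bar)              # requested cooling load [kW]

def P(i, T):
    # Power drawn by chiller i at supply temperature T, Equation (16);
    # with f_i fixed this is quadratic in T (convex when the T^2 coefficient is positive).
    mf = m * f[i]
    k0, k1, k2, k3, k4, k5, k6 = k[i]
    return (k0 + k1 * mf * T_r + k2 * (mf * T_r) ** 2
            + (k3 - k1 * mf - k4 * mf * T_r - 2 * k2 * mf ** 2 * T_r) * T
            + (k5 + k6 * mf + k2 * mf ** 2) * T ** 2)

# Standard-form data for Algorithm 1 with x_i = f_i * T_i:
X = T_r * sum(f) - C_L / m                             # from sum_i m f_i (T_r - T_i) = C_L
lower = [f[i] * T_r - Q_bar[i] / m for i in range(3)]  # from m f_i (T_r - T_i) <= Q_bar_i
phi = [lambda x, i=i: P(i, x / f[i]) for i in range(3)]   # phi_i(x_i) = P_i(x_i / f_i)
print(X, lower, phi[0](lower[0]))

With this mapping the coupling constraint becomes the resource constraint Σ_i x_i = X and the capacity constraints become lower bounds on x_i, so each agent only needs its own flow rate, its own coefficients and the measured T_r, as assumed in the text.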
The main advantage of this approach is to increase the resilience of the whole system in the face of possible failures, due to the fact that the plant operation does not rely on a single control centre but on multiple individual controllers without the need for a centralised coordinator. .. Illustrative example We simulate a chilled-water plant comprised by 7 chillers. 3 The cooling capacity and the water flow rate of each chiller are, respectively, Q i = 1406.8 kW, and f i = 65 kg.s -1 , for i = 1, … , 7; the specific heat of the water is m = 4.19 kW.s.kg -1 . degC -1 ; the supply temperature of the system is T s = 11 degC; and the coefficients k j, i of Equation ( 16) are given in Table 2. We operate the system at two different cooling loads, the first one is 90% of the total capacity, i.e. C L = 0.9 n i=1 Q i , and the second one is 60% of the total capacity, i.e. C L = 0.6 n i=1 Q i . The P i -T i curves are shown in Figure 3(a) for both cases, it can be noticed that all functions are convex. In order to apply 3(a)) are more loaded than the less efficient ones (i.e. chiller 2 and chiller 5). This can be noticed from the fact that their supply temperatures, in steady state, reach the minimum value. Furthermore, the energy consumption is minimised and power saving reaches to 2.6%. The results for the second cooling load, i.e. C L = 5908.6 kW, are shown in Figure 3(c), where it can be noticed a similar performance to that obtained with the first cooling load. However, in this case, it is not necessary that the supply temperatures reach the minimum value to meet the required load. Newly, energy consumption is minimised and power saving reaches to 2.8%. As it is stated in Section 4, convergence and optimality of the method is guaranteed under the conditions given in Theorem 4.1. i =  i =  i =  i =  i =  i =  i =  k , i  In both cases we use the early stopping criterion given in Section 4. Although other techniques have been applied to solve the optimal chiller loading problem, e.g. the ones in Chang and Chen (2009), they require centralised information. In this regard, it is worth noting that the same objective is properly accomplished by using our approach, which is fully distributed. Distributed computation of the Euclidean projection Several applications require computing the Euclidean projection of a vector in a distributed way. These applications include matrix updates in quasi-Newton methods, balancing of traffic flows, and decomposition techniques for stochastic optimisation (Patriksson, 2008). The problem of finding the Euclidean projection of the vector ξ onto a given set X is formulated as follows: min ξ ∥ ξ -ξ ∥ 2 2 s.t. ξ ∈ X , (19) where ∥ • ∥ 2 is the Euclidean norm. The vector that minimises the above problem, which is denoted by ξ * , is the Euclidean projection. Roughly speaking, ξ * can be seen as the closest vector to ξ that belongs to the set X . In Barreiro-Gomez, Obando, Ocampo-Martinez, and Quijano (2015), the authors use a distributed computation of the Euclidean projection to decouple large-scale control problems. Specifically, they propose a discrete time method to address problems involving plants comprised of a large number of decoupled sub-systems whose control inputs are coupled by a constraint. The control inputs are associated with the power applied to the subsystems, and the constraint limits the total power used to control the whole plant. 
At each time iteration, local controllers that manage the sub-systems compute optimal control inputs ignoring the coupled constraint (each local controller uses a model predictive control scheme that does not use global information, since the sub-systems' dynamics are decoupled). Once this is done, the coupled constraint is addressed by finding the Euclidean projection of the vector of local control inputs (i.e. the vector formed by all the control inputs computed by the local controllers) onto a domain that satisfies the constraint associated with the total power applied to the plant. For a better explanation of the method, consider a plant comprised of n sub-systems. Let ûi(k) ≥ 0 be the control input computed by the ith local controller at the kth iteration ignoring the coupled constraint (non-negativity of ûi(k) is required since the control signals correspond to an applied power). Let û(k) = [û1(k), . . . , ûn(k)]⊤ be the vector of local control inputs, and let u*(k) be the vector of control signals that are finally applied to the sub-systems. If the maximum allowed power to control the plant is U > 0, the power constraint that couples the control signals is given by Σ_{i=1}^n u*_i(k) ≤ U. The vector u*(k) is calculated by using the Euclidean projection of û(k) onto a domain that satisfies the power constraint, i.e. u*(k) is the solution of the following optimisation problem (cf. Equation (19)):

min_{u(k)} ∥û(k) - u(k)∥²₂   (20a)
s.t. Σ_{i=1}^n u_i(k) ≤ U   (20b)
u_i(k) ≥ 0, for all i = 1, . . . , n,   (20c)

where u_i(k) denotes the ith entry of the vector u(k). Notice that u*(k) satisfies the power constraint and minimises the Euclidean distance with respect to the control vector û(k) that is initially calculated by the local controllers. Computation of u*(k) can be performed by using the approach proposed in this paper because the problem stated in Equation (20) is in the standard form given in Equation (4) except for the inequality constraint (20b). However, this constraint can be addressed by adding a slack variable.

Illustrative example
Consider a plant composed of 100 sub-systems. Assume that, at the kth iteration of the discrete time method presented in Barreiro-Gomez, Obando, Ocampo-Martinez, and Quijano (2015), the control inputs that are initially computed by the local controllers are given by the entries of the vector û(k) = [û1(k), . . . , û100(k)]⊤, where ûi(k) is a random number chosen from the interval [0, 1] kW. Furthermore, assume that the maximum allowed power to control the plant is U = 40 kW. To satisfy this constraint, the Euclidean projection described in Equation (20) is computed in a distributed way using Algorithm 1 with the early stopping criterion described in Section 4. The results under a communication network with path topology (see Figure 1) are depicted in Figure 4. The curve at the top of Figure 4 describes the evolution of the Euclidean distance. Notice that the proposed algorithm minimises this distance and reaches the optimum value (dashed line), which has been calculated employing a centralised method. On the other hand, the curves at the bottom of Figure 4 illustrate the evolution of the values Σ_{i=1}^{100} u_i(k) (solid line) and min{u_i(k)} (dash-dotted line). These curves show that the constraints of the problem stated in Equation (20) are properly satisfied in steady state, i.e. Σ_{i=1}^{100} u*_i(k) = 40 kW and min{u*_i(k)} = 0 kW. As a final observation, our algorithm exhibits a suitable performance even considering that the communication graph is sparse and the optimal solution is not in the interior of the feasible domain.
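A small numerical sketch of this illustrative example is given below; it draws the same kind of random data (100 inputs in [0, 1] kW, U = 40 kW) and computes the centralised reference projection that the dashed line of Figure 4 refers to, here with SciPy's SLSQP solver. The solver choice, the random seed and the tiny regularisation of the slack variable are assumptions of the sketch; the data assembled in phi, X and lower are what one would hand to Algorithm 1 after the slack-variable reformulation.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, U = 100, 40.0
u_hat = rng.uniform(0.0, 1.0, n)        # local MPC control inputs [kW], as in the example

# Slack-variable reformulation of problem (20) in the template of Equation (4):
#   minimise sum_i (u_hat_i - u_i)^2 + delta*s^2   s.t.  sum_i u_i + s = U,  u_i >= 0, s >= 0.
# The small quadratic cost delta*s^2 keeps the aggregate cost strictly convex; this
# regularisation is an implementation assumption, not something stated in the paper.
delta = 1e-9
phi = [lambda u, uh=uh: (uh - u) ** 2 for uh in u_hat] + [lambda s: delta * s ** 2]
X = U
lower = np.zeros(n + 1)

# Centralised reference solution (the dashed line of Figure 4).
res = minimize(lambda u: np.sum((u_hat - u) ** 2), x0=np.full(n, U / n),
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "ineq", "fun": lambda u: U - u.sum()}])
u_star = res.x
print("distance:", np.sqrt(res.fun), " total power:", u_star.sum(), " min input:", u_star.min())

For such data the total requested power Σ_i ûi(k) typically exceeds U, so the optimum lies on the boundary of the feasible set (the power budget is saturated and some inputs are driven to zero), which is precisely the situation discussed above.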
As shown in Section 5, this characteristic is an advantage of Algorithm 1 over population dynamics techniques as the one proposed in Barreiro-Gomez, Obando, Ocampo-Martinez, and Quijano (2015) to compute the Euclidean projection in a distributed way. Discussion The method developed in this paper solves the problem of resource allocation with lower bounds given in Equation (4). The main advantage of the proposed technique is its distributed nature; indeed, our approach does not need the implementation of a centralised coordinator. This characteristic is appealing, especially in applications where communications are strongly limited. Moreover, fully distributed methodologies increase the autonomy and resilience of the system in the face of possible failures. In Section 5, we show by means of simulations that the performance of the method presented in this paper does not decrease when the number of nodes (which are related to the decision variables of the optimisation problem) is large, or the communication network that allows the nodes to share information has few channels. In these cases, the behaviour of our approach is better than the behaviour of other techniques found in the literature, such as the DIP method, or the LRE. Moreover, it is worth noting that our technique addresses the constraints as hard. This fact has two important consequences: (i) in all cases, the solution satisfies the imposed constraints, and (ii) the objective function (and therefore the optimum) is not modified (contrary to the DIP method that includes the constraints in the objective function decreasing the quality of the solution as shown in Section 5.4). Other advantage of the method proposed in this paper is that it does not require an initial feasible solution of the resource allocation problem (4). Similarly to the DIPe technique, our method only requires that the starting point satisfies the resource constraint (4b), i.e. we need that n i=1 x i (0) = X. Notice that an initial solution x(0) that satisfies (4b) is not hard to obtain in a distributed manner. For instance, if we assume that only the kth node has the information of the available resource X, we can use (x k (0) = X, {x i (0) = 0 : i ∈ V, i ̸ = k}) as our starting point. Thus, an initialisation phase is not required. In contrast, other distributed methods, such as DIP and LRE needs an initial feasible solution of the problem (4), i.e. a solution that satisfies (4b) and (4c). Finding this starting point is not a trivial problem for systems involving a large number of variables. Therefore, for these methods, it is necessary to employ distributed constraint satisfaction algorithm (as the one described in Domınguez-Garcıa & Hadjicostis, 2011) as a first step. On the other hand, we notice that to implement the early stopping criterion presented at the end of Section 4, it is required to perform an additional min-consensus step in each iteration. Despite this fact, if the number of nodes is large, this criterion saves computational time, because in most of the cases, all passive nodes are identified during the first iterations of Algorithm 1. Conclusions In this paper, we have developed a distributed method that solves a class of resource allocation problems with lower bound constraints. The proposed approach is based on a multi-agent system, where coordination among agents is done by using a consensus protocol. 
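As an illustration of this last point, the following minimal Python sketch implements the flag described in Section 4 as a synchronous min-consensus over the communication graph; the particular 5-node path, the choice of which nodes are active, and the number of rounds (any value at least the graph diameter works) are illustrative assumptions of the sketch.

def min_consensus(z0, nbrs, rounds):
    # Synchronous min-consensus: after at least diameter-many rounds every
    # entry equals min(z0), so all nodes learn the value of the global flag.
    z = list(z0)
    for _ in range(rounds):
        z = [min([z[i]] + [z[j] for j in nbrs[i]]) for i in range(len(z))]
    return z

# Toy 5-node path: nodes 0 and 3 are active after step 4; node 3 violates its lower bound.
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
active = [True, False, False, True, False]
violates = [False, False, False, True, False]
z0 = [0 if (active[i] and violates[i]) else 1 for i in range(5)]
print(min_consensus(z0, nbrs, rounds=4))   # all zeros: not every active node satisfies its bound,
                                           # so Algorithm 1 performs another iteration

If instead every active node satisfied its lower bound, all initial bits would be 1, the consensus value would be 1 at every node, and the iterations of Algorithm 1 could stop early as argued above.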
We have proved that convergence and optimality of the method is guaranteed under some mild assumptions, specifically, we require that the cost function is strictly convex and the graph related to the communication network that enables the agents to share information is connected. The main advantage of our technique is that it does not need a centralised coordinator, which makes the method appropriate to be applied in large-scale distributed systems, where the inclusion of centralised agents is undesirable or infeasible. As future work, we propose to use a switched approach in order to eliminate the iterations in Algorithm 1. Moreover, we plan to include upper bound constraints in our original formulation. Notes Disclosure statement No potential conflict of interest was reported by the authors. On Switchable Languages of Discrete-Event Systems with Weighted Automata Michael Canu and Naly Rakoto-Ravalontsalama Abstract-The notion of switchable languages has been defined by Kumar, Takai, Fabian and Ushio in [11]. It deals with switching supervisory control, where switching means switching between two specifications. In this paper, we first extend the notion of switchable languages to n languages, (n ≥ 3). Then we consider a discrete-event system modeled with weighted automata. The use of weighted automata is justified by the fact that it allows us to synthesize a switching supervisory controller based on the cost associated to each event, like the energy for example. Finally the proposed methodology is applied to a simple example. Keywords: Supervisory control; switching control; weighted automata. I. INTRODUCTION Supervisory control initiated by Ramadge and Wonham [15] provides a systematic approach for the control of discrete event system (DES) plant. There has been a considerable work in the DES community since this seminal paper. On the other hand, from the domain of continuous-time system, hybrid and switched systems have received a growing interests [12]. The notion of switching is an important feature that has to be taken into account, not only in the continuous-time domain but for the DES area too. As for non-blocking property, there exist different approaches. The first one is the non-blocking property defined in [15]. Since then other types of nonblocking properties have been defined. The mutually non-blocking property has been proposed in [5]. Other approaches of mutually and globally nonblocking supervision with application to switching control is proposed in [11]. Robust non-blocking supervisory control has been proposed in [1]. Other types of non-blocking include the generalised non-blocking property studied in [13]. Discrete-event modeling with switching maxplus systems is proposed in [17], an example of mode M. Canu is with Univ. los Andes, Bogota, Colombia, email: [email protected] N. Rakoto-Ravalontsalama is with IMT Atlantique and LS2N, France, e-mail: [email protected] switching DES is described in [6] and finally a modal supervisory control is considered in [7]. In this paper we will consider the notion of switching supervisory control defined by Kumar and Colleagues in [11] where switching means switching between a pair of specifications. Switching (supervisory) control is in fact an application of some results obtained in the same paper [11] about mutually non blocking properties of languages, mutually nonblocking supervisor existence, supremal controllable, relative-closed and mutually nonblocking languages. 
All these results led to the definition of a pair of switchable languages [11]. In this paper, we first extend the notion of switchable languages to n languages, (n ≥ 3). Then we consider a discrete-event system modeled with weighted automata. The switching supervisory control strategy is based on the cost associated to each event, and it allows us to synthesize an optimal supervisory controller. Finally the proposed methodology is applied to a simple example. This paper is organized as follows. In Section II, we recall the notation and some preliminaries. Then in Section III the main results on the extension of n switchable languages (n ≥ 3) are given. An illustrative example of supervisory control of AGVs is proposed in Section IV, and finally a conclusion is given in Section V. II. NOTATION AND PRELIMINARIES Let the discrete event system plant be modeled by a finite state automaton [10], [4] to which a cost function is added. Definition 1: (Weighted automaton). A weighted automaton is defined as a sixtuple G = (Q, Σ, δ, q 0 , Q m , C) where • Q is the finite set of states, • Σ is the finite set of events, • δ : Q × Σ → Q is the partial transition function, • q 0 ⊆ Q is the initial state, • Q m ⊆ Q is the set of marked states (final states), • C : Σ → N is the cost function. Let Σ * be the set of all finite strings of elements in Σ including the empty string ε. The transition function δ can be generalized to δ : Σ * × Q → Q in the following recursive manner: δ(ε, q) = q δ(ωσ, q) = δ(σ, δ(ω, q)) for ω ∈ Σ * The notation δ(σ, q)! for any σ ∈ Σ * and q ∈ Q denotes that δ(σ, q) is defined. Let L(G) ⊆ Σ * be the language generated by G, that is, L(G) = {σ ∈ Σ * |δ(σ, q 0 )!} Let K ⊆ Σ * be a language. The set of all prefixes of strings in K is denoted by pr(K) with pr(K) = {σ ∈ Σ * |∃ t ∈ Σ * ; σt ∈ K}. A language K is said to be prefix closed if K = pr(K). The event set Σ is decomposed into two subsets Σ c and Σ uc of controllable and uncontrollable events, respectively, where Σ c ∩ Σ uc = ∅. A controller, called a supervisor, controls the plant by dynamically disabling some of the controllable events. A sequence σ 1 σ 2 . . . σ n ∈ Σ * is called a trace or a word in term of language. We call a valid trace a path from the initial state to a marked state (δ(ω, q 0 ) = q m where ω ∈ Σ * and q m ∈ Q m ). The cost is by definition non negative. In the same way, the cost function C is generalized to the domain Σ * as follows: C(ε) = 0 C(ωσ) = C(ω) + C(σ) for ω ∈ Σ * In other words, the cost of a trace is the sum of the costs of each event that composes the trace. Definition 2: (Controllability) [15]. A language K ⊆ L(G) is said to be controllable with respect to (w.r.t.) L(G) and Σ uc if pr(K)Σ uc ∩ L(G) ⊆ pr(K). Definition 3: (Mutually non-blocking supervisor) [5]. a supervisor f : L(G) → 2 Σ-Σu is said to be (K 1 , K 2 )mutually non-blocking if K i ∩ L m (G f ) ⊆ pr(K j ∩ L m (G f )), for i, j ∈ {1, 2}. (1) In other words, a supervisor S is said to be mutually non-blocking w.r.t. two specifications K 1 and K 2 if whenever the closed-loop system has completed a task of one language (by completing a marked trace of that language), then it is always able to continue to complete a task of the other language [5]. Definition 4: (Mutually non-blocking language) [5]. A language H ⊆ K 1 ∪K 2 is said to be (K 1 , K 2 )-mutually non-blocking if H ∩K i ⊆ pr(H ∩K j ) for i, j ∈ {1, 2}. The following theorem gives a necessary and sufficient condition for the existence of a supervisor. 
Theorem 1: (Mutually nonblocking supervisor existence) [5]. Given a pair of specifications K 1 , K 2 ⊆ L m (G), there exists a globally and mutually nonblocking supervisor f such that L m (G f ) ⊆ K 1 ∪ K 2 if and only if there exists a nonempty, controllable, relative-closed, and (K 1 , K 2 )-mutually non-blocking sublanguage of K 1 ∪ K 2 . The largest possible language (the supremal element) that is controllable and mutually non-blocking exists, as stated by the following theorem. Theorem 2: (SupMRC(K 1 ∪ K 2 ) existence) [5]. The set of controllable, relative-closed, and mutually nonblocking languages is closed under union, so that the supremal such sublanguage of K 1 ∪ K 2 , denoted supM RC(K 1 ∪ K 2 ) exists. Recall that a pair of languages K 1 , K 2 are mutually nonconflicting if pr(K 1 ∩ K 2 ) = pr(K 1 ) ∩ pr(K 2 ) [18]. K 1 , K 2 are called mutually weakly nonconflicting if K i , pr(K j ) (i ̸ = j) are mutually nonconflicting [5]. Another useful result from [5] is the following. Given a pair of mutually weakly nonconflicting languages K 1 , K 2 ⊆ L m (G), the following holds ( [5], Lemma 3). If K 1 , K 2 are controllable then K 1 ∩ pr(K 2 ), K 2 ∩ pr(K 1 ) are also controllable. The following theorem is proposed in [11] and it gives the formula for the supremal controllable, relativeclosed, and mutually nonblocking languages. Theorem 3: (SupMRC(K 1 ∪ K 2 )) [11]. For relative-closed specifications K 1 , K 2 ⊆ L m (G), supM RC(K 1 ∪ K 2 ) = supRC(K 1 ∩ K 2 ). The following theorem, also from [11] gives another expression of the supremal controllable, relative-closed, and mutually nonblocking languages. Theorem 4: [11] Given a pair of controllable, relativeclosed, and mutually weakly nonconflicting languages K 1 , K 2 ⊆ L m (G), it holds that supM RC(K 1 ∪ K 2 ) = (K 1 ∩ K 2 ). And finally the following theorem gives a third formula of the supremal controllable, relative-closed, and mutually nonblocking languages. Theorem 5: [11] For specifications K 1 , K 2 ⊆ L m (G), supM RC(K 1 ∪ K 2 ) = supM C(supRC(K 1 ∩ K 2 )). In order to allow switching between specifications, a pair of supervisors is considered, such that the supervisor is switched when the specification is switched. The supervisor f i for the specification K i is designed to enforce a certain sublanguage H i ⊆ K i . Suppose a switching in specification from K i to K j is induced at a point when a trace s ∈ H i has been executed in the f i -controlled plant. Then in order to be able to continue with the new specification K j without reconfiguring the plant, the trace s must be a prefix of H j ⊆ K j . In other words, the two supervisors should enforce the languages H i and H j respectively such that H i ⊆ pr(H j ). Hence the set of pairs of such languages are defined to be switchable languages as follows. Definition 5: (Pair of switchable languages) [11]. A pair of specifications K 1 , K 2 ⊆ L m (G) are said to be switchable languages if SW (K 1 , K 2 ) := {(H 1 , H 2 )|H i ⊆ K i ∩ pr(H j ), i ̸ = j, and H i controllable}. The supremal pair of switchable languages exists and is given by the following theorem. Theorem 6: (Supremal pair of switchable languages) [11]. For specifications K 1 , K 2 ⊆ L m (G), supSW (K 1 , K 2 ) = (supM C(K 1 ∪ K 2 ) ∩ K 1 , supM C(K 1 ∪ K 2 ) ∩ K 2 ). III. MAIN RESULTS We now give the main results of this paper. First, we define a triplet of switchable languages. Second we derive a necessary and sufficient condition for the transitivity of switchable languages (n = 3). 
Third we generalize this definition to a n-uplet of switchable languages, with n > 3. And fourth we derive a necessary and sufficient condition for the transitivity of switchable languages for n > 3. A. Triplet of Switchable Languages We extend the notion of pair of switchable languages, defined in [11], to a triplet of switchable languages. Definition 6: (Triplet of switchable languages). A triplet of languages (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} are said to be a triplet of switchable languages if they are pairwise switchable languages, that is, SW (K 1 , K 2 , K 3 ) := SW (K i , K j ), i ̸ = j, i, j = {1, 2, 3}. Another expression of the triplet of switchable languages is given by the following lemma. Lemma 1: (Triplet of switchable languages). A triplet of languages (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} are said to be a triplet of switchable languages if the following holds: SW (K 1 , K 2 , K 3 ) = {(H 1 , H 2 , H 3 ) | H i ⊆ K i ∩ pr(H j ), i ̸ = j, and H i controllable}. B. Transitivity of Switchable Languages (n = 3) The following theorem gives a necessary and sufficient condition for the transitivity of switchable languages. Theorem 7: (Transitivity of switchable languages, n = 3) . Given 3 specifications (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} such that SW (K 1 , K 2 ) and SW (K 2 , K 3 ). (K 1 , K 3 ) is a pair of switchable languages, i.e. SW (K 1 , K 3 ), if and only if 1) H 1 ∩ pr(H 3 ) = H 1 , and 2) H 3 ∩ pr(H 1 ) = H 3 . Proof: The proof can be found in [3]. C. N-uplet of Switchable Languages We now extend the notion of switchable languages, to a n-uplet of switchable languages, with (n > 3). Definition 7: (N-uplet of switchable languages, n > 3). A n-uplet of languages (K 1 , ..., K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}, n > 2, is said to be a n-uplet of switchable languages if the languages are pairwise switchable that is, SW (K 1 , ..., K n ) := SW (K i , K j ), i ̸ = j, i, j = {1, ..., n}, n > 2. As for the triplet of switchable languages, an alternative expression of the n-uplet of switchable languages is given by the following lemma. Lemma 2: (N-uplet of switchable languages, n > 3). A n-uplet of languages (K 1 , . . . , K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}, n > 3 are said to be a n-uplet of switchable languages if the following holds: SW (K 1 , ..., K n ) = {(H 1 , ..., H n ) | H i ⊆ K i ∩ pr(H j ), i ̸ = j, and H i controllable}. D. Transitivity of Switchable Languages (n > 3) We are now able to derive the following theorem that gives a necessary and sufficient condition for the transitivity of n switchable languages. Theorem 8: (Transitivity of n switchable languages, n > 3) . Given n specifications (K 1 , ..., K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}. Moreover, assume that each language K i is at least switchable with another language K j , i ̸ = j. A pair of languages (K k , K l ) is switchable i.e. SW (K k , K l ), if and only if 1) H k ∩ pr(H l ) = H k , and 2) H l ∩ pr(H k ) = H l . Proof: The proof is similar to the proof of Theorem 6 and can be found in [3]. It is to be noted that the assumption that each of the n languages be at least switchable with another language is important, in order to derive the above result. IV. EXAMPLE: SWITCHING SUPERVISORY CONTROL OF AGVS The idea of switching supervisory control is now applied to a discrete-event system, modeled with weighted automata. 
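Before turning to the circuit itself, the short Python sketch below shows one possible way to encode a weighted automaton in the sense of Definition 1, together with the extensions of δ and C to traces. The state and event names mimic the AGV events C_XY, U_Y and exit used later, but the transition structure and the integer costs are illustrative and not taken from [9].

class WeightedAutomaton:
    # G = (Q, Sigma, delta, q0, Qm, C) with a partial transition function and
    # an additive, nonnegative cost on events (e.g. energy spent by an AGV).
    def __init__(self, delta, q0, marked, cost):
        self.delta = delta              # dict: (state, event) -> next state
        self.q0 = q0
        self.marked = set(marked)
        self.cost = cost                # dict: event -> nonnegative cost

    def run(self, trace, q=None):
        # Extended transition function delta(trace, q); returns None if undefined.
        q = self.q0 if q is None else q
        for sigma in trace:
            if (q, sigma) not in self.delta:
                return None
            q = self.delta[(q, sigma)]
        return q

    def trace_cost(self, trace):
        # C(w sigma) = C(w) + C(sigma): the cost of a trace is the sum of event costs.
        return sum(self.cost[sigma] for sigma in trace)

# Toy fragment of an AGV moving from intersection A to section AB, then B, then leaving the area.
G = WeightedAutomaton(
    delta={("A", "C_AB"): "AB", ("AB", "U_AB"): "B", ("B", "exit"): "F"},
    q0="A", marked={"F"},
    cost={"C_AB": 3, "U_AB": 0, "exit": 1})
trace = ["C_AB", "U_AB", "exit"]
print(G.run(trace), G.run(trace) in G.marked, G.trace_cost(trace))   # F True 4

A valid trace in the sense of Section II is then any trace whose run ends in a marked state, and a switching supervisory strategy can compare the accumulated costs of alternative valid traces.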
We take as an illustrating example the supervisory control of a fleet of fleet automated guided vehicles (AGVs) that move in a given circuit area. The example is taken from [9]. A circuit is partitioned into sections and intersections. Each time an AGV moves in a new intersection or a new section, then the automaton will move to a new state in the associated automaton. An example of an area with its associated basic automaton is depicted in Figure 1. The area to be supervised is the square depicted in Figure 1 (left). The flow direction with the arrows are specified the four intersections {A, B, C, D} and the associated basic automaton are given in Figure 1 (right). The basic automaton is denoted G basic = (Q b , Σ b , δ b , ∅, ∅) where the initial state and the final state are not defined. The initial state is defined according to the physical position of the AGV and the final state is defined according to its mission, that is his position target. A state represents and intersection or a section. Each state corresponding to a section is named XY i where X is the beginning of the section, Y its end and i the number of the AGV. For each section, there are two transitions, the first transition C XY is an input which is controllable and represents the AGV moving on the section from X to Y . The second transition is an output transition U Y which is uncontrollable and represents the AGV arriving to the intersection Y . For example the basic automaton depicted in Figure 1 (right) can be interpreted as follows. If AGV i arrives at section A, then it has two possibilities, either to go to section B with the event C ABi , or the go section D with the event C ADi . If we choose to go to section B, then the next state is AB i . From this state, the uncontrollable event U AB is true so that the following state is B i . And from B i , the only possibility is to exit to Point F with the uncontrollable event exit i . Now consider for example that 2 AGVs are moving in the circuit of Figure 1 (left). Assume AGV 1 is in D and AGV 2 is in AB so that the state is in (D 1 , AB 2 ). AGV 1 is leaving the area when the event exit 1 is true so that the system will be in state (E 1 , AB 2 ). And since AGV 1 is out of the considered area, then the new state will be (E 1 , AB 2 ) = (∅ 1 , AB 2 ) = (AB 2 ) since AGV 1 is out of the area. We give here below the synthesis algorithm for calculating the supervisor S c as it aws proposed by Girault et Colleagues in [9]. For more details on the synthesis algorithm, the reader is referred to the above paper. Algorithm 1 -Synthesis algorithm of S C [9] Data: G w,1 , . . . G w,n Result: Supervisor S C G w ← {G w,1 , . . . G w,n } G u ← {∅} forall G w,i ∈ G w do G u ← G u ∪ U γi (G w,i ) end S C ← S(G u,i ) G u ← G u \{G u,1 } while G u ̸ = ∅ do x ← get(G u ) S C ← S(S C ||x) G u ← G u \{x} end V. CONCLUSIONS The notion of switchable languages has been defined by Kumar and Colleagues in [11]. It deals with switching supervisory control, where switching means switching between two specifications. In this paper, we have extended the notion of switchable languages to a triplet of languages (n = 3) and we gave a necessary and sufficient condition for the transitivity of two switchable languages. Then we generalized the notion of switchable languages of a n-uplet of languages, n > 3 and we gave also necessary and sufficient condition for the transitivity of two (out of n) switchable languages. 
Finally, the proposed methodology is applied to a simple example for the supervisory control of a fleet of AGVs. Ongoing work deals with (a) the calculation of the supremal n-uplet of switchable languages, and (b) the optimal switching supervisory control of DES exploiting the cost of the weighted automata in the synthesis strategy.

Figure 1: Selection phases after the HDR.
Teaching record: 2001-2004, first- and second-year Control and Industrial Eng. courses at DAP; 2006-2012, MSc. MLPS (Management of Logistic and Production Systems); 2012-present, MSc. MOST (Management and Optimization of Supply Chains and Transport). Courses given abroad: May 2008, Univ. of Cagliari (Italy), Control of Hybrid Systems (10h, Erasmus); Apr. 2009, Univ. Tec. Bolivar (UTB), Cartagena (Colombia), Tutorial on DES (15h); May 2014, UTB, Cartagena (Colombia), Intro. to DES (15h); Dec. 2015, ITB Bandung (Indonesia), Simulation with Petri Nets (10h, Erasmus); Apr. 2017, Univ. of Liverpool (UK), Course 1 (10h, Erasmus); May 2017, ITB Bandung (Indonesia), Simulation with Petri Nets (10h, Erasmus).
Figure 2.1: Partition and automaton. Figure 2.2: Phase portrait of Example 1 in PWA. Figures 2.3 and 2.4: Phase portraits of Example 2 in PWA. Figure 2.5: Phase portrait of Example 2 in MLD. Figure 3.1: EMN cell. Figure 4.1: Smart grid. Figure 4.2: CDG Airport Paris-Roissy.
Fig. 1: Equivalence relation between hybrid systems. Every well-posed PWA system can be re-written as an MLD system assuming that the feasible states and inputs are bounded [6, Proposition 4*]. A completely well-posed MLD system can be rewritten as a PWA system [6, Proposition 5*].
Fig. 3: Simulation results for the three-tank system: (a) MLD model, (b) error between MLD and PWA [4], (c) error between MLD and PWA (this work). Fig. 4: Simulation results for the robotized gear shift.
Figure 1: States and switching signal for the Lotka-Volterra example (switching signal equal to 1 for water source 1 and normal operation, 2 for water source 2 and normal operation, 3 for maintenance operation, 4 for the change from maintenance operation).
"... and Tan et al. (2013) have developed a decentralised technique based on broadcasting and consensus to optimally distribute a resource considering capacity constraints on each entity in the network. Nonetheless, compared to our algorithm, the approach in Dominguez-Garcia et al. (2012) and Tan et al. (2013) is only applicable to quadratic cost functions. On the other hand, Pantoja and Quijano (2012) propose a novel methodology based on population dynamics. The main drawback of this technique is that its performance is seriously degraded when the number of communication links decreases. We point out the fact that other distributed optimisation algorithms can be applied to solve resource allocation problems, as those presented in Nedic, Ozdaglar, and Parrilo (2010), Yi, Hong, and Liu (2015), and Johansson and Johansson (2009). Nevertheless, the underlying idea in these methods is different from the one used in our work ..."
Theorem 2.1 (adapted from Godsil & Royle, 2001): An undirected graph G of order n is connected if and only if rank(L(G)) = n - 1.
Figure: Single path topology for n nodes. Figure: Decoupled chilled-water plant with n chillers, with flow rates f_1, ..., f_n and supply temperatures T_1, ..., T_n.
Figure 3: (a) P_i-T_i curves for each chiller; (b) and (c) evolution of the supply temperatures and of the total power consumed by the chillers for the two cooling loads considered.
Fig. 1: An AGV circuit (left) and its basic automaton (right).
• Chapter 2 presents the analysis and control of hybrid and switched systems.
• Chapter 3 is devoted to supervisory control of discrete-event systems.
• Chapter 4 gives the conclusion and future work.
Table 1.1: Teaching in 2015-2016. Courses: A1 Automatique (10h, 10h, 10h, Fr.); A2 Optim. (10h, 5h, Fr.); A2 AII SED (10h, 5h, 5h, Fr.); A3 AII SysHybrides (7.5h, 7.5h, Fr.); MSc. PM3E Control (7.5h, 7.5h, Eng.); MSc. MOST Simulation (5h, 5h, 5h, Eng.); MSc. MOST Resp. MSc+UV (90h, Eng.); A3 PFE supervision (36h, Fr.); Masters PFE supervision (36h, Eng.). Total 1: 272h (CM 10h, PC 30h, TD 30h, TP-MP 40h, PFE 72h, Resp. 90h); Total 2: 283 units (15, 36, 30, 40, 72, 90).
B. Responsibilities (Option AII, Auto-Prod, MSc MLPS, MSc MOST).
1.6 Organization of invited sessions:
- Invited Session, Diagnosis and Prognosis of Discrete-Event Systems, 48th IEEE CDC, Shanghai, China, Dec. 2009 (jointly organized and chaired with Shigemasa Takai).
- Invited Session, Diagnosis of DES Systems, 1st IFAC DCDS 2007, Paris, France, June 2007 (jointly organized and chaired with Shigemasa Takai).
- Invited Session, Hybrid Systems, IEEE ISIC 2001, Mexico City, Mexico, Sep. 2001 (jointly organized and chaired with Michael Lemmon).
- Invited Session, DES and Hybrid Systems, IEICE NOLTA 2006, Bologna, Italy, Sep. 2006 (jointly organized and chaired with Shigemasa Takai).
- Invited Session, Supervisory Control, IFAC WODES, Reims, France, Sep. 2004 (jointly organized and chaired with Toshimitsu Ushio).
Research projects and grants:
• Participant, French "Contrat Etat-Région" 2000-2006, CER STIC 9 / N.18036, J.J. Loiseau PI, Euro 182,940 (US$ 182,940).
• Co-Principal Investigator (with Ph. Chevrel), Modeling and Simulation of ESP Program, Peugeot-Citroen PSA France, Sep. 2000 - Jan. 2001, FF 20,000 (US$ 3,000).
• Co-Principal Investigator (with Andi Cakravastia, ITB), LOG-FLOW, PHC NUSANTARA France-Indonesia, Project N. 39069ZJ, 2017, accepted on 31 May 2017.
• Participant, "Industrial Validation of Hybrid Systems", France-Colombia ECOS Nord Project N.C07M03, A. Gauthier and J.J. Loiseau PIs, Jan. 2007 to Dec. 2009 (3 years), Euro 12,000.
• Co-Principal Investigator (with J. Aguilar-Martin), Control and Supervision of a Distillation Process, Conseil Régional Midi-Pyrénées, France, 1994-1995, FF 200,000 (US$ 30,000).
• Participant, European Esprit Project IPCES (Intelligent Process Control by means of Expert Systems), J. Aguilar-Martin PI, 1989-1992, Euro 500,000 (US$ 500,000).
Table 1.2: Number of published papers per year, 1994-2017 (conference papers, book chapters, edited books, journal papers; as of 30 June 2017; (*) means submitted).
[C.1] N. Rakoto-Ravalontsalama and J. Aguilar-Martin, "Automatic clustering for symbolic evaluation for dynamical system supervision," in Proc. of the IEEE American Control Conference (ACC 1992), Chicago, USA, June 1992, vol. 3, pp. 1895-1897.
Algorithm steps 1-10 (Chapter 2):
1. Compute matrices A, B, C, D and A_i, B_i, C_i, D_i using (2.20).
2. Initialize the E_1, E_2, E_3, E_4, E_5 matrices.
3. For the m switching regions S_{j,i}, include the inequalities defined in (2.8) or (2.10) which define the values of the m auxiliary binary variables δ_{j,i}.
4. Generate 2*n x_δi auxiliary binary dynamical variables associated with the n affine models and m auxiliary binary variables δ_{j,i} associated with the m S_{ij} switching regions.
5. For i = 1 to n, include the inequalities using (2.13) representing the behavior of the x_δ vector.
6. For i = 1 to n, generate the n_c-dimensional Z_{1i} vector and the p_c-dimensional Z_{2i} vector of auxiliary variables Z.
7. For each Z_{1i} vector, introduce the inequalities defined in (2.16), replacing A_i and B_i by the A_i and B_i computed in Step 1; M and m are n_c-dimensional vectors of maximum and minimum values of x, respectively.
8. For each Z_{2i} vector, introduce the inequalities defined in (2.17), replacing C_i and D_i by the C_i and D_i computed in Step 1; M and m are p_c-dimensional vectors of maximum and minimum values of x, respectively (this completes the inequality matrices).
9. Compute the matrices defined in (2.21) and (2.22).
10. End.
3.3.2 Transitivity of Switchable Languages (n = 3)
Chapter 4 - Conclusion and Future Work
4.1 Summary of Contributions
The results can be found in [C.sub1]. In this HDR thesis, I have presented a summary of contributions in analysis and control of hybrid systems, as well as in supervisory control of discrete-event systems.
• Analysis and control of hybrid and switched systems: modeling and control of MLD systems; stability of switched systems; optimal control of switched systems.
• Supervisory control of discrete-event systems: multi-agent based supervisory control; switched discrete-event systems; switchable languages of DES.
I have chosen not to present some work, such as the distributed resource allocation problem, holonic systems, and the VMI inventory control work. However, the references of the corresponding papers are given in the complete list of publications. My perspectives of research in the coming years are threefold: 1) control of smart grids, 2) simulation with stochastic Petri nets, and 3) planning and inventory control.
Table I: Computation and simulation times. MLD: simulation 592.20 s. PWA [4]: computation 93.88 s, simulation 5.89 s. PWA (this work): computation 72.90 s, simulation 1.33 s.
Table II: Computation and simulation times. MLD: simulation 296.25 s. PWA [4]: computation 115.52 s, simulation 0.35 s. PWA (this work): computation 155.73 s, simulation 0.17 s.
The translation from the MLD model into the PWA model took 572.19 s with the algorithm proposed here, generating 127 sub-models. The translation into the PWA model took 137.37 s with the algorithm in [3], generating 14 sub-models. The simulation time for 300 iterations with the MLD model and a MIQP algorithm took 4249.301 s; the same simulation with the PWA model obtained with the algorithm proposed here took 0.14 s, and with the PWA model obtained using the algorithm in [4], 0.31 s. These results are summarized in Table III.
Table III: Computation and simulation times. MLD: simulation 4249.30 s. PWA [4]: computation 137.37 s, simulation 0.31 s. PWA (this work): computation 572.20 s, simulation 0.14 s.
"... (t_j), ..., v(t_j); m_1(t_j) = v(t_j). Now, using the equivalence stated in Proposition 5, we know that the solutions of the polynomial Problem (8) are solutions of the switching system; and in this case, it is only one. Hence, we obtain ...(t_j) = v(t_j), which implies that ...(t_j) = v(t_j) = m_1(t_j), where m_1 is the first moment of the vector of moments. ..."
Table 1: Distributed algorithms' performance (percentage decrease of the cost function and computation time) for the proposed approach, DIP, LRE and DIPe, for several numbers of nodes.
Table 2: Chillers' parameters.
Figure 4: Evolution of the Euclidean distance and constraint satisfaction using the proposed algorithm; the right y-axis corresponds to the dash-dotted line.
Appendix 4 - Paper [C.sub1]: © Informa UK Limited, trading as Taylor & Francis Group.
Acknowledgements (Remerciements).
VI. ACKNOWLEDGMENT. This work has been supported in part by « Contrat Etat-Région No. STIC 9-18036, 2000-2006 », Nantes, France. The authors are thankful to the reviewers for their valuable comments and suggestions.
ACKNOWLEDGEMENTS. This study was supported by Proyecto CIFI 2011, Facultad de Ingeniería, Universidad de Los Andes.
ACKNOWLEDGMENT. Part of this work was carried out when the second author (N.R.) was visiting Prof. Stephane Lafortune at the University of Michigan, Ann Arbor, MI, USA, in Sep. 2013. Grant #EMN-DAP-2013-09 is gratefully acknowledged.
Funding. G. Obando is supported in part by Convocatoria 528 Colciencias-Colfuturo and in part by OCAD-Fondo de CTeI SGR, Colombia (ALTERNAR project, BPIN 20130001000089).
Design of robust structurally constrained controllers for MIMO plants with time-delays Deesh Dileep 1 , Wim Michiels 2 , Laurentiu Hetel 3 , and Jean-Pierre Richard 4 Abstract-The structurally constrained controller design problem for linear time invariant neutral and retarded timedelay systems (TDS) is considered in this paper. The closedloop system of the plant and structurally constrained controller is modelled by a system of delay differential algebraic equations (DDAEs). A robust controller design approach using the existing spectrum based stabilisation and the H-infinity norm optimisation of DDAEs has been proposed. A MATLAB based tool has been made available to realise this approach. This tool allows the designer to select the sub-controller inputoutput interactions and fix their orders. The results obtained while stabilising and optimising two TDS using structurally constrained (decentralised and overlapping) controllers have been presented in this paper. Index Terms-Decentralized control, Time-delay systems, H2/H-infinity methods, linear systems, Large-scale systems. I. INTRODUCTION This article contributes to the field of complex interconnected dynamical systems with time-delays. It is common to observe time-delays in these systems due to their inherent properties or due to the delays in communication. It is almost infeasible, if not, costly to implement centralised controllers for large scale dynamical systems (see [START_REF] Siljak | Decentralized Control of Complex Systems[END_REF] and references within). Therefore, decentralised or overlapping controllers are often considered as favourable alternatives. There are many methods suggested by multiple authors for the design of full order controllers that stabilise finite dimensional LTI MIMO systems. The design problem of such a controller is usually translated into a convex optimisation problem expressed in terms of linear matrix inequalities (LMIs). However, determining a reduced dimension (order) controller or imposing special structural constrains on the controller introduces complexity. Since the constraints on structure or dimension prevent a formulation in terms of LMIs. Such problems typically lead to solving bilinear matrix inequalities directly or using other non-convex optimisation techniques. Solutions obtaining full order controllers for higher order plants are not favourable, since lower order controllers are preferred for implementation. Time-delay systems (TDS) can be seen as infinite dimensional LTI MIMO systems. Designing a finite dimensional controller for TDS is hence equivalent to obtaining a reduced order controller. Therefore, in this paper we combine both the problems of determining a reduced order (or fixed structure) controller and imposing constrains on the structure of the controller. Linear time invariant (LTI) neutral (and retarded) timedelay systems are considered in this article. The algorithms from [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF] and [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] with a direct optimisation based approach have been extended in this paper for designing structurally constrained robust controllers. 
In [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF], the design of stabilising fixed-order controllers for TDS has been translated into solving a non-smooth non-convex optimisation problem of minimising the spectral abscissa. This approach is similar in concept to the design of reduced-order controllers for LTI systems as implemented in the HIFOO package (see [START_REF] Burke | HIFOO -a matlab package for fixed-order controller design and H-infinity optimization[END_REF]). The core algorithm of HANSO matlab code is used for solving the non-smooth non-convex optimisation problems (see [START_REF] Overton | HANSO: a hybrid algorithm for nonsmooth optimization[END_REF]). In many control applications, robust design requirements are usually defined in terms of H ∞ norms of the closedloop transfer function including the plant, the controller, and weights for uncertainties and disturbances. In [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF], the design of a robust fixed-order controller for TDS has been translated into a non-smooth non-convex optimisation problem. There are other methods available to design optimal H ∞ controllers for LTI finite dimensional MIMO systems based on Riccati equations and linear matrix inequalities (see [START_REF] Doyle | Statespace solutions to standard H 2 and H∞ control problems[END_REF], [START_REF] Gahinet | A linear matrix inequality approach to h control[END_REF], and references within). However, the order of the controller designed by these methods is generally larger than or equal to the order of the plant. Also, imposing structural constrains in these controllers become difficult. There are many methods available to design decentralised controllers for non-delay systems, most of them do not carry over easily to the case of systems with time-delays. In this paper, the direct optimisation problem of designing overlapping or decentralised controllers is dealt with by imposing constrains on the controller parameters. Similar structural constrain methodologies were already mentioned in [START_REF] Siljak | Decentralized Control of Complex Systems[END_REF], [START_REF] Sojoudi | Structurally Constrained Controllers: Analysis and Synthesis, ser. SpringerLink : Bücher[END_REF], [START_REF] Alavian | Q-parametrization and an sdp for hinf-optimal decentralized control[END_REF], and [START_REF] Ozer | Simultaneous decentralized controller design for time-delay systems[END_REF]. This work allows system models in terms of delaydifferential algebraic equations (DDAEs), whose power in modelling large classes of delay equations is illustrated in the next section. In [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF], the authors state that such a system description form can be adapted for designing controllers due to the generality in modelling interconnected systems and controllers. In this way, elimination technique can be avoided which might not be possible for systems with delays. In the DDAE form, the linearity of the closed-loop system, with respect to the matrices of the controllers, can be preserved for various types of delays and combinations of plants and controllers. The rest of the paper is organised as follows. 
Section II formally introduces time-delay systems and the existing methods available to stabilise such systems and to optimise their performance using centralised fixed-order controllers. Section III presents the proposed concept of structurally constrained controllers and its implementation methodology. Section IV provides example MIMO problems from the literature which are stabilised and optimised using structurally constrained controllers. Section V concludes the paper with a few remarks. II. PRELIMINARIES In this article, TDS or plants of the following form are considered:
$$E_p \dot{x}_p(t) = A_{p0} x_p(t) + \sum_{i=1}^{m_A} A_{pi}\, x_p(t-h_{A_i}) + B_{p1} u(t) + B_{p2} w(t),\qquad y(t) = C_{p1} x_p(t),\qquad z(t) = C_{p2} x_p(t). \qquad (1)$$
Here t is the time variable, x_p(t) ∈ R^n is the instantaneous state vector at time t, and u(t) ∈ R^w and y(t) ∈ R^z are the instantaneous controlled input and measured output vectors at time t. We use the notations R, R^+ and R^+_0 to represent the sets of real numbers, non-negative real numbers and strictly positive real numbers, respectively, and x_p ∈ R^n is a short notation for (x_p1, ..., x_pn). A, B, C, D and E are constant real-valued matrices, and m_A is a positive integer representing the number of distinct time-delays present in the state, the inputs, the outputs, the feed-through (input-output) terms and the first-order derivative of the instantaneous state vector. The time-delays satisfy 0 < h_{A_i} ≤ h_max, i.e. they have a minimum value greater than zero and a maximum value of h_max. The instantaneous exogenous input and the instantaneous exogenous (or controlled) output are denoted by w(t) and z(t), respectively. Even though (1) contains no feed-through components, input delays or output delays, this LTI system description is in fact in the most general form. This can be illustrated with the help of some simple examples.
Example 1. Consider a system with non-trivial feed-through matrices:
$$\dot{\psi}(t) = A\psi(t) + B_1 u(t) + B_2 w(t),\quad y(t) = C_1 \psi(t) + D_{11} u(t) + D_{12} w(t),\quad z(t) = C_2 \psi(t) + D_{21} u(t) + D_{22} w(t).$$
If we consider x_p(t) = [ψ(t)^T γ_u(t)^T γ_w(t)^T]^T, we can bring this system to the form of (1) with the help of the dummy variables γ_u and γ_w:
$$\begin{bmatrix} I & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}\dot{x}_p(t)=\begin{bmatrix} A & B_1 & B_2\\ 0 & I & 0\\ 0 & 0 & I\end{bmatrix}x_p(t)+\begin{bmatrix}0\\-I\\0\end{bmatrix}u(t)+\begin{bmatrix}0\\0\\-I\end{bmatrix}w(t),$$
$$y(t)=\begin{bmatrix} C_1 & D_{11} & D_{12}\end{bmatrix}x_p(t),\qquad z(t)=\begin{bmatrix} C_2 & D_{21} & D_{22}\end{bmatrix}x_p(t).$$
Example 2. Consider an LTI system with time-delays at the input:
$$\dot{\psi}(t) = A\psi(t) + B_{10} u(t) + \sum_{i=1}^{m_B} B_{1i}\, u(t-h_{B_i}),\qquad y(t) = C_1 \psi(t) + D_{11} u(t).$$
If we consider x_p(t) = [ψ(t)^T γ_u(t)^T]^T, we can bring this system to the form of (1) with the help of the dummy variable γ_u:
$$\begin{bmatrix} I & 0\\ 0 & 0\end{bmatrix}\dot{x}_p(t)=\begin{bmatrix} A & B_{10}\\ 0 & I\end{bmatrix}x_p(t)+\sum_{i=1}^{m_B}\begin{bmatrix} 0 & B_{1i}\\ 0 & 0\end{bmatrix}x_p(t-h_{B_i})+\begin{bmatrix}0\\-I\end{bmatrix}u(t),\qquad y(t)=\begin{bmatrix} C_1 & D_{11}\end{bmatrix}x_p(t).$$
Similarly, output delays can be virtually "eliminated".
Example 3. The presence of time-delays on the first-order derivative of the state vector in an LTI system (a neutral equation) can also be virtually eliminated using dummy variables:
$$\dot{\psi}(t) + \sum_{i=1}^{m_E} E_i\, \dot{\psi}(t-h_{E_i}) = A\psi(t) + B_1 u(t),\qquad y(t) = C_1\psi(t) + D_{11} u(t).$$
We can bring this example LTI system to the form of (1) with the help of the dummy variables γ_ψ and γ_u, where γ_ψ is given by
$$\gamma_\psi(t) = \psi(t) + \sum_{i=1}^{m_E} E_i\, \psi(t-h_{E_i}).$$
More precisely, when defining x p (t) = [γ ψ (t) T ψ(t) T γ u (t) T ] T the system takes the following form consistent with (1): I 0 0 0 0 0 0 0 0 ẋp (t) = 0 A B1 -I I 0 0 0 I x p (t) + m E i=1 0 0 0 0 Ei 0 0 0 0 x p (t -h E i ) + 0 0 -I u(t) y(t) = 0 C 1 D 11 x p (t). The system described in (1) could be controlled using the following feedback controller of the prescribed order "n c ", ẋc (t) = A c x c (t) + B c y(t), u(t) = C c x c (t) + D c y(t). (2) The case of n c = 0 corresponds to a static or proportional controller of the form u(t) = D c y(t). The other cases of n c ≥ 1 corresponds to that of a dynamic controller as in the form (2), where, A c is a matrix of size n c × n c . The combination of the plant (1) and the feedback controller (2) can be re-written using x = [x T p u T γ T w x T c y T ] T , (3) in the general form of delay differential algebraic equation (DDAE) as shown below, E ẋ(t) = A 0 x(t) + m i=1 A i x(t -τ i ) + Bw(t), z(t) = Cx(t), (4) where, E =       I 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 I 0 0 0 0 0 0       , A 0 =       A p0 B p1 B p2 0 0 C p1 0 0 0 -I 0 0 -I 0 0 0 0 0 A c B c 0 -I 0 C c D c       . (5) Subsequently, A i =       A pi 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0       , B =       0 0 I 0 0       , C T =       C p2 0 0 0 0       . An useful property of this modelling approach using DDAEs is the linear dependence of closed-loop system matrices on the elements of the controller matrices. To stabilise and optimise the robustness of the closed-loop system, the timeindependent parameter vector of p is defined. We build on the approach of [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] and [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF], that is to directly optimise stability and performance measures as a function of vector p, which contains the parameters of the controller, p = vec A c B c C c D c . (6) For a centralised controller, the matrices A c , B c , C c and D c are seldom sparse when computed using the algorithms presented in [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] or [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF]. Static controllers can be considered as a special case of the dynamic controller, for which A c , B c and C c are empty matrices. The vector p would then include only the elements from D c . The objective functions used for the performance evaluation of the closed-loop system will be explained in the following subsections. A. Robust Spectral Abscissa optimisation: The spectral abscissa (c(p)) of the closed-loop system (4) when w ≡ 0 can be expressed as follows, c(p; τ ) = sup λ∈C {R(λ) : det∆(λ, p; τ ) = 0}, where, ∆(λ, p; τ ) = λE -A 0 (p) - m i=1 A i (p)e -λτi (7) and R(λ) is the real part of the complex number λ. The exponential stability of the null solution of (4) determined by the condition c(p) < 0 (see [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF]). 
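To make the block structure of (5) concrete, the following sketch assembles E, A0, B and C with NumPy for the extended state vector (3). It is only an illustration of the construction described above: the function name and interface are ours and are not part of the accompanying MATLAB tool, and it assumes a dynamic controller (n_c ≥ 1). Each delayed matrix A_i is obtained analogously by placing A_pi in the top-left block.

```python
import numpy as np

def assemble_closed_loop(Ap0, Bp1, Bp2, Cp1, Cp2, Ac, Bc, Cc, Dc):
    """Assemble the DDAE matrices E, A0, B, C of (4)-(5) for the
    extended state x = [x_p; u; gamma_w; x_c; y] (illustrative sketch only)."""
    n_p, n_u = Ap0.shape[0], Bp1.shape[1]
    n_w, n_c, n_y = Bp2.shape[1], Ac.shape[0], Cp1.shape[0]

    def Z(r, c):
        return np.zeros((r, c))

    # E = blockdiag(I, 0, 0, I, 0): only x_p and x_c appear with derivatives.
    E = np.block([
        [np.eye(n_p), Z(n_p, n_u), Z(n_p, n_w), Z(n_p, n_c), Z(n_p, n_y)],
        [Z(n_y, n_p), Z(n_y, n_u), Z(n_y, n_w), Z(n_y, n_c), Z(n_y, n_y)],
        [Z(n_w, n_p), Z(n_w, n_u), Z(n_w, n_w), Z(n_w, n_c), Z(n_w, n_y)],
        [Z(n_c, n_p), Z(n_c, n_u), Z(n_c, n_w), np.eye(n_c), Z(n_c, n_y)],
        [Z(n_u, n_p), Z(n_u, n_u), Z(n_u, n_w), Z(n_u, n_c), Z(n_u, n_y)],
    ])

    # A0 as in (5); the algebraic rows encode y = Cp1*x_p, gamma_w = w and
    # u = Cc*x_c + Dc*y, so the controller matrices enter linearly.
    A0 = np.block([
        [Ap0,         Bp1,          Bp2,          Z(n_p, n_c), Z(n_p, n_y)],
        [Cp1,         Z(n_y, n_u),  Z(n_y, n_w),  Z(n_y, n_c), -np.eye(n_y)],
        [Z(n_w, n_p), Z(n_w, n_u),  -np.eye(n_w), Z(n_w, n_c), Z(n_w, n_y)],
        [Z(n_c, n_p), Z(n_c, n_u),  Z(n_c, n_w),  Ac,          Bc],
        [Z(n_u, n_p), -np.eye(n_u), Z(n_u, n_w),  Cc,          Dc],
    ])

    # Exogenous input and performance output, cf. (5).
    B = np.vstack([Z(n_p, n_w), Z(n_y, n_w), np.eye(n_w), Z(n_c, n_w), Z(n_u, n_w)])
    C = np.hstack([Cp2, Z(Cp2.shape[0], n_u + n_w + n_c + n_y)])
    return E, A0, B, C
```

Because A_c, B_c, C_c and D_c enter A0 linearly, updating the closed-loop model for a new parameter vector p only amounts to rewriting these blocks.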
However, the function τ ∈ (R + 0 ) m → c(p; τ ) might not be continuous and could be sensitive to infinitesimal delay changes (in general, as neutral TDS could be included in ( 1)). Therefore, we define the robust spectral abscissa C(p; τ ) as in the following way, C(p; τ ) := lim →0+ sup τe∈B(τ , ) c(p; τ ) (8) In [START_REF] Sojoudi | Structurally Constrained Controllers: Analysis and Synthesis, ser. SpringerLink : Bücher[END_REF], B(τ , ) is an open ball of radius ∈ R + centered at τ ∈ (R + ) m , B(τ , ) := { θ ∈ R m : || θ -τ || < }. The sensitivity of the spectral abscissa with respect to infinitesimal delay perturbations has been resolved by considering the robust spectral abscissa, since this function can be shown to be a continuous function of the delay parameters (and also parameters in p), see [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF]. We now define the concept of strong exponential stability. Definition 1: The null solution of (4) when w ≡ 0 is strongly exponentially stable if there exists a number τ > 0 such that the null solution of E ẋ(t) = A 0 x(t) + m i=1 A i x(t -(τ i + δτ i ))) is exponentially stable for all δτ ∈ R m satisfying ||δτ || < τ and τ i + δτ i ≥ 0, i = 1, ...., m. In [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] it has been shown that the null solution is strongly exponentially stable iff C(p) < 0. To obtain a strongly exponentially stable closed-loop system and to maximise the exponential decay rate of the solutions, the controller parameters (in p) are optimised for minimum robust spectral abscissa, that is, min p -→ C(p). (9) B. Strong H ∞ norm optimisation The transfer function from w to z of the system represented by ( 4) is given by, G zw (λ, p; τ ) := C λE -A 0 (p) - m i=1 A i (p)e -λτi -1 B. (10) The H ∞ norm for a stable system with the transfer function given in [START_REF] Ozer | Simultaneous decentralized controller design for time-delay systems[END_REF] ||G zw (jω, p; τe )|| ∞ Contrary to the (standard) H ∞ norm, the strong H ∞ norm continuously depends on the delay parameter. The continuous dependence also holds with respect to the elements of the system matrices, which includes the elements in p (see [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF]). To improve the robustness expressed in terms of a H ∞ criterion, controller parameters (in p) are optimised for minimum strong H ∞ norm. This brings us to the optimisation problem, min p -→ |||G zw (jω, p; τ )||| ∞ . To solve the non-smooth non-convex objective function involving the strong H ∞ norm, it is essential to start with an initial set of controller parameters for which the closedloop system is strongly exponentially stable. If this is not the case, a preliminary optimisation is performed based on minimising the robust spectral abscissa. Weighted sum approach: A simple weighted sum based optimisation approach can also be performed using the two objectives mentioned in the sub-sections II-A and II-B. The controller parameters (in p) can be optimised for the minimum of a multi-objective function f o (p), that is, min p f o (p), (12) where, f o (p) = ∞, if C( p)≥0 α C( p)+ (1-α) |||Gzw(jω, p)|||∞, else if C( p)<0. (13) III. 
DESIGN OF STRUCTURALLY CONSTRAINED CONTROLLERS The direct optimisation based approach with structural constrains to selected elements within the controller matrices (A c , B c , C c and D c ) is presented in this section. These constrained or fixed elements are not considered as variables in the optimisation problem. Let us consider a matrix C M which contains all the controller gain matrices. This matrix is later vectorised for constructing the vector p containing the optimisation variables, C M = A c B c C c D c . Some of the elements within C M are fixed, and are not to be considered in the vector p. This can be portrayed with the help of an example problem of designing overlapping controllers. Example 4. Let us consider an example MIMO system for which a second order controller (with an overlapping configuration of C M ) has to be designed as follows,     ẋc1 ẋc2 u 1 u 2     = C M     a c11 0 b c11 b c12 0 a c22 0 b c22 c c11 0 d c11 d c12 0 c c22 0 d c22         x c1 x c2 y 1 y 2     . (14) This MIMO system has two inputs and two outputs. If b c12 and d c12 were to be zero elements, we would have decentralised sub-controllers. That is, input, output, and subcontroller state interactions are decoupled. Since b c12 and d c12 are non-zero elements or not fixed, we have to design overlapping sub-controllers. That is, only input and subcontroller state interactions are decoupled, while, one of the measured output is shared between the sub-controllers. In this example, to optimise the overlapping (or decentralised) sub-controllers without losing its structure, we must keep the 0 elements fixed. The difference between centralised, decentralised and overlapping configurations can be visualised with the help of Fig. 1. In general, imposing zero values to specific controller parameters could lead to segments (sub-controllers) within one controller having restricted access to certain measured outputs and/or restricted control of certain inputs. A. Decentralised and overlapping controllers As mentioned earlier, it is possible to design decentralised and overlapping controllers using the principle of structural constrains. The structural constrains can be enforced on C M in Example 4 with the help of a matrix F M . f M ij = 1, if c M ij is an optimisation variable 0, else if c M ij is a fixed element (15) In ( 15), c M ij and f M ij denote the elements of the i th row and the j th column in the matrices C M and F M respectively. By definition, the sizes of the matrices C M and F M are identical. p = vec F M C M = vec F M A c B c C c D c (16) Where, vec F M C M is a vector containing the elements of C M for which the corresponding element in F M is one, see (15). The elements in vec F M C M and vec C M are in the same order. We obtain the new controller parameter vector p using (16). For this purpose we can define two interaction matrices M Cu and M Cy , which denote the interaction between input, output and sub-controllers. We also define a vector nCa to contain information on the order of all the sub-controllers. M Cu , M Cy and nCa are given as input to the algorithm for the design of decentralised or overlapping type of structurally constrained controller. Letting m Cuij and m Cyij denote the elements of the i th row and the j th column in matrices M Cu and M Cy respectively, we have m Cuij = 1, if i th controller handles the j th input 0, otherwise m Cyij = 1, if i th controller considers the j th output 0, otherwise. 
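As a small illustration of how the mask F_M and the reduced parameter vector p can be generated from these interaction matrices, consider the following Python sketch; the helper names are ours and merely mirror (15)-(16), they are not the interface of the released tool, and first- or higher-order sub-controllers are assumed.

```python
import numpy as np
from scipy.linalg import block_diag

def structure_mask(M_Cu, M_Cy, nCa):
    """Build the 0/1 mask F_M of (15) for C_M = [Ac Bc; Cc Dc] (illustrative)."""
    l = len(nCa)                               # number of sub-controllers
    # Ac mask: block diagonal of all-ones blocks, one per sub-controller.
    F_Ac = block_diag(*[np.ones((n, n)) for n in nCa])
    # Bc mask: state rows of sub-controller k may use output j iff M_Cy[k, j] = 1.
    F_Bc = np.vstack([np.tile(M_Cy[k], (nCa[k], 1)) for k in range(l)])
    # Cc mask: input h may be driven by states of sub-controller k iff M_Cu[k, h] = 1.
    F_Cc = np.hstack([np.tile(M_Cu[k][:, None], (1, nCa[k])) for k in range(l)])
    # Dc mask: a direct gain from output j to input h is free iff some
    # sub-controller both handles input h and considers output j.
    F_Dc = (M_Cu.T @ M_Cy > 0).astype(float)
    return np.block([[F_Ac, F_Bc], [F_Cc, F_Dc]])

def vec_free(F_M, C_M):
    """p = vec_FM(C_M): stack only the free entries of C_M, cf. (16)."""
    return C_M.T[F_M.T.astype(bool)]           # column-major order, as in vec()
```

Applied to Example 4, with M_Cu = [[1, 0], [0, 1]], M_Cy = [[1, 1], [0, 1]] and nCa = [1, 1], this reproduces the mask in (18) and a parameter vector containing the ten free entries listed in (19).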
Referring back to Example 4, the input given to the algorithm for designing (14) are given as, M Cu = 1 0 0 1 , M Cy = 1 1 0 1 , nCa = 1 1 T . (17) Therefore, we consider two first order sub-controllers. We need to fix some elements in the matrix C M to zero in order to have the same form as the matrix within (14). Subsequently, with the information available in (17), it is also possible to obtain the matrix F M , F M =     1 0 1 1 0 1 0 1 1 0 1 1 0 1 0 1     . (18) Using (18), we can construct the new C M as in (14), this is the structurally (or sparsity) constrained form of the controller matrix. The corresponding vector p for Example 4 can be given as, p = [a c11 c c11 a c22 c c22 b c11 d c11 b c12 b c22 d c12 d c22 ] T . (19) We can represent the matrix F M in general form with the help of the matrix of ones (in what follows, J n×n denotes the matrix of size n by n with every entry equal to one). If l is the total number of sub-controllers, then k ∈ {1, ..., l} and n c k is the order of the k th sub-controller. If the total number of inputs is w, then h ∈ {1, ..., w}. Similarly, when the total number of outputs is z, then j ∈ {1, ..., z}. For Example 4 with input as in (17), there are two sub-controllers, two inputs and two outputs, then l = 2, w = 2 and z = 2 respectively. The general representations for matrices J n×n and F M are given below. F M =       J nc 1 ×nc 1 . . . 0 . . . . . . . . . 0 . . . J nc l ×nc l m Cykj • J nc k ×1 k,j m Cuhk • J 1×nc k h,k M Cu M Cy       . Here we use [ • ] i,j to denote the (i, j)-th block of a matrix. For both the cases of overlapping and decentralised controllers A c takes a block diagonal form as shown below. A c =    A c1 . . . 0 . . . . . . . . . 0 . . . A cl    The matrices B c , C c and D c will be sparsity constrained but they need not be block diagonal in structure. Also, this could be the case for decentralised configuration. Sparsity constrains are defined based on the interaction matrices and the order of the sub-controllers. Subsequently, the interaction matrices M Cu and M Cy which are not of diagonal form will result in controller gain matrices B c , C c , and D c which are not of block diagonal form. However, this does not restrict the implementation of this tool in anyway. B. Other controllers One can use the concept of structural constraints to design many other controllers. A kind of distributed controller can be considered by including the off-diagonal elements of the A c matrix in the vector p. PID controllers are commonly used as feedback controllers in the industry. It is also possible to structurally constrain the dynamic controller to represent a PID controller and optimise its gains. Let us consider the PID controller mentioned in [START_REF] Toscano | Structured Controllers for Uncertain Systems: A Stochastic Optimization Approach[END_REF]. K(s) = K P + K I 1 s + K D s 1 + τ d s , (20) for which a realisation is determined by the controller matrices, A c B c C c D c =   0 0 K i 0 -1 τ d I -1 τ 2 d K d I I K p + 1 τ d K d   (21) Here τ d is the time constant of the filter applied to the derivative action. The physical reliability is safeguarded by ensuring the properness of the PID controller using this lowpass first order filter (see [START_REF] Toscano | Structured Controllers for Uncertain Systems: A Stochastic Optimization Approach[END_REF]). If we assume τ d to be a constant, we can convert this into an optimisation problem for the proposed algorithm as given below. 
$$F_M=\begin{bmatrix}0&0&1\\0&0&1\\0&0&1\end{bmatrix}\;\rightarrow\;C_M=\begin{bmatrix}0&0&b_{c11}\\0&-\tfrac{1}{\tau_d}I&b_{c21}\\I&I&d_{c11}\end{bmatrix}\;\rightarrow\;p=\begin{bmatrix}b_{c11}&b_{c21}&d_{c11}\end{bmatrix}^T.$$
The new values for the gains of the PID controller can then be obtained from the optimised dynamic controller using K_i = b_{c11}, K_d = -τ_d^2 b_{c21} and K_p = d_{c11} - (1/τ_d) K_d.
IV. EXAMPLE MIMO PROBLEMS
In this section, two MIMO plants with time-delays are used by the proposed algorithm to obtain structurally constrained controllers. Some basic information on the structure of these plants is given in Table I.
TABLE I. Information on the example TDS considered
Example        Order of plant   No. of inputs   No. of outputs   No. of time-delays
Neutral TDS    3                2               2                5
Retarded TDS   4                2               2                1
The results obtained for the closed-loop systems of the plants and the decentralised or overlapping controllers are shown in Table II. Only the final results are presented in the table due to space limitations; the tool and more information on the example problems and their solutions can be obtained from http://twr.cs.kuleuven.be/research/software/delay-control/structurallyconstrainedTDS.zip. In both example problems, when α = 1 the controllers were optimised for minimum robust spectral abscissa (RSA), whereas when α = 0 the controllers were optimised for minimum strong H∞ norm (SHN). For the examples considered in this paper, we observe that minimisation of the strong H∞ norm occurs at the cost of a reduced exponential decay rate (an increase in the value of the robust spectral abscissa). We also observe that the overlapping controllers generally perform better than the decentralised controllers, which is expected since they impose fewer structural constraints on the controller parameters.
V. CONCLUSION
In this paper, a methodology to design structurally constrained dynamic (LTI) controllers was presented. Decentralised controllers, overlapping controllers and many other types of controllers can be considered as structurally constrained controllers, for which a generic design approach was presented. The proposed frequency-domain approach was used to design stabilising and robust fixed-order decentralised and overlapping controllers for linear time-invariant neutral and retarded time-delay systems. The approach has been implemented as an improvement to the algorithms in [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF] and [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF]; the objective functions are therefore in general non-convex. This is addressed by using randomly generated initial values for the controller parameters, along with initial controllers specified by the user, and choosing the best solution among them. The algorithm presented here relies on a routine for computing the objective function and its gradient whenever the objective function is differentiable. For the spectral abscissa, the value of the objective function is obtained by computing the rightmost eigenvalues of the DDAE. The value of the H∞ norm is obtained by a generalisation of the Boyd-Balakrishnan-Kabamba / Bruinsma-Steinbuch algorithm, relying on computing imaginary-axis solutions of an associated Hamiltonian eigenvalue problem.
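As a schematic illustration of the objective (13) that is handed to this non-smooth solver in the weighted-sum approach, the sketch below uses placeholder callables for the two expensive quantities; it is not the actual interface of the tool.

```python
import math

def weighted_objective(p, robust_abscissa, strong_hinf_norm, alpha):
    # robust_abscissa(p): robust spectral abscissa C(p), obtained in practice
    # from the rightmost eigenvalues of the closed-loop DDAE.
    # strong_hinf_norm(p): strong H-infinity norm |||G_zw|||_inf, obtained in
    # practice from a Boyd-Balakrishnan / Bruinsma-Steinbuch type level search.
    C = robust_abscissa(p)
    if C >= 0.0:
        # Not strongly exponentially stable: the objective (13) is infinite.
        return math.inf
    return alpha * C + (1.0 - alpha) * strong_hinf_norm(p)
```

Setting α = 1 recovers pure robust-spectral-abscissa minimisation and α = 0 pure strong H∞ norm optimisation, the two settings reported for the examples in Section IV.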
Evaluating the objective function at every iteration constitutes the dominant computational cost. In contrast, the derivatives with respect to the controller parameters are computed at a negligible cost from left and right eigenvectors. Because of this, and because lower-order controllers are desirable for application, introducing structural constraints does not have a considerable impact on the overall computational complexity of the control design problem.
Fig. 1. Overview of centralised, decentralised, and overlapping configurations. P is the MIMO plant with two inputs and two outputs, whereas C, C_1 and C_2 are the controllers.
ACKNOWLEDGEMENTS This work was supported by the project C14/17/072 of the KU Leuven Research Council, by the project G0A5317N of the Research Foundation-Flanders (FWO - Vlaanderen), and by the project UCoCoS, funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 675080.
28,511
[ "1030446", "866458", "19012", "17232" ]
[ "410272", "147974", "106906", "445253", "525219" ]
01762094
en
[ "sdv" ]
2024/03/05 22:32:13
2017
https://amu.hal.science/hal-01762094/file/9%20CUISSET.pdf
Pierre Deharo, Jacques Quilici, Laurence Camoin-Jau, Thomas W. Johnson, Clémence Bassez, Guillaume Bonnet, Marianne Fernandez, Manal Ibrahim, Pierre Suchon, Valentine Verdier, Laurent Fourcade, Pierre-Emmanuel Morange, Jean-Louis Bonnet, Marie-Christine Alessi, Thomas Cuisset Benefit of Switching Dual Antiplatelet Therapy After Acute Coronary Syndrome According to On-Treatment Platelet Reactivity: The TOPIC-VASP Pre-Specified Analysis of the TOPIC Randomized Study Introduction After acute coronary syndrome (ACS), adequate platelet inhibition is crucial to minimize the risk of recurrent ischemic events [START_REF] Cuisset | Predictive values of post-treatment adenosine diphosphate-induced aggregation and vasodilator-stimulated phosphoprotein index for stent thrombosis after acute coronary syndrome in clopidogrel-treated patients[END_REF]. "Newer P2Y12 blockers" (i.e., prasugrel and ticagrelor) have a more pronounced inhibitory effect on platelet activation and have proved their superiority over clopidogrel, in association with aspirin [START_REF] Wiviott | TRITON-TIMI 38 InvestigatorsPrasugrel versus clopidogrel in patients with acute coronary syndromesN[END_REF][START_REF] Wallentin | Ticagrelor versus clopidogrel in patients with acute coronary syndromesN[END_REF]. The clinical benefit provided by these drugs is related to a significant reduction in recurrent ischemic events, despite an increased incidence of bleeding complications [START_REF] Wiviott | TRITON-TIMI 38 InvestigatorsPrasugrel versus clopidogrel in patients with acute coronary syndromesN[END_REF][START_REF] Wallentin | Ticagrelor versus clopidogrel in patients with acute coronary syndromesN[END_REF]. The TOPIC (Timing Of Platelet Inhibition after acute Coronary syndrome) study recently showed that switching from ticagrelor or prasugrel plus aspirin to a fixed-dose combination (FDC) of aspirin and clopidogrel, 1 month after ACS, was associated with a reduction in bleeding complications, without an increase in ischemic events at 1 year (4). Platelet function testing has been used for years to assess the individual response to antiplatelet agents. Indeed, platelet reactivity has been strongly associated with clinical outcomes after ACS [START_REF] Cuisset | Predictive values of post-treatment adenosine diphosphate-induced aggregation and vasodilator-stimulated phosphoprotein index for stent thrombosis after acute coronary syndrome in clopidogrel-treated patients[END_REF][START_REF] Kirtane | Is there an ideal level of platelet P2Y12receptor inhibition in patients undergoing percutaneous coronary intervention?: "Window" Analysis From the ADAPT-DES Study (Assessment of Dual AntiPlatelet Therapy With Drug-Eluting Stents)[END_REF][START_REF] Parodi | High residual platelet reactivity after clopidogrel loading and long-term cardiovascular events among patients with acute coronary syndromes undergoing[END_REF][START_REF] Stone | ADAPT-DES InvestigatorsPlatelet reactivity and clinical outcomes after coronary artery implantation of drug-eluting stents (ADAPT-DES): a prospective multicentre registry studyLancet[END_REF].
High on-treatment platelet reactivity (HTPR), defining biological resistance to dual antiplatelet therapy (DAPT) is frequent on clopidogrel and has been associated with an increased risk of cardiovascular events, including stent thrombosis [START_REF] Cuisset | Predictive values of post-treatment adenosine diphosphate-induced aggregation and vasodilator-stimulated phosphoprotein index for stent thrombosis after acute coronary syndrome in clopidogrel-treated patients[END_REF][START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF]. In contrast, HTPR is rarely observed with use of newer P2Y12 blockers (prasugrel, ticagrelor). Instead, biological hyper-response is frequently noticed and associated with bleeding events on P2Y12 blockers [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Low on-treatment platelet reactivity (LTPR) has been proposed to define hyperresponse to P2Y12 blockers [START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Therefore, the objective of the present analysis was to investigate the impact of LTPR on clinical outcomes after ACS and the relation between initial platelet reactivity and benefit of the switched DAPT strategy tested in the TOPIC study. Methods Study design and patients The design of the TOPIC randomized study has been previously published [START_REF] Cuisset | Benefit of switching dual antiplatelet therapy after acute coronary syndrome: the TOPIC (timing of platelet inhibition after acute coronary syndrome) randomized studyEur[END_REF]. Briefly, this was an open-label, single-center, controlled trial randomizing patients admitted for ACS and treated with aspirin and a new P2Y12 inhibitor. One month after the ACS, eligible patients were then randomly assigned in a 1:1 ratio to receive a FDC of aspirin 75 mg plus clopidogrel 75 mg (switched DAPT) or continuation of aspirin plus the established new P2Y12 blocker (unchanged DAPT). Inclusion criteria were admission for ACS requiring early percutaneous coronary intervention (PCI) within 72 h, treatment with aspirin and a newer P2Y12 blocker at discharge, no major adverse event 1 month after the ACS, and >18 years of age. Exclusion criteria were history of intracranial bleeding; contraindication to use of aspirin, clopidogrel, prasugrel, or ticagrelor; major adverse event (ischemic or bleeding event) within a month of ACS diagnosis; thrombocytopenia (platelet concentration lower than 50×10 9 /l); major bleeding (according to the Bleeding Academic Research Consortium [BARC] criteria) in the past 12 months; long-term anticoagulation (contraindication for newer P2Y12 blockers); and pregnancy. During the randomization visit, patients had to present fasting and biological response to P2Y12 blocker was assessed by % platelet reactivity index vasodilator-stimulated phosphoprotein (PRI-VASP). 
On the basis of PRI-VASP, patients were classified as LTPR (PRI-VASP ≤20%), normal response (20% < PRI-VASP ≤50%), or HTPR (PRI-VASP >50%) [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Due to expected very low rates of HTPR, we decided to pool normal response and HTPR in the non-LTPR cohort (PRI-VASP >20%). Randomization All patients received treatment with aspirin and a newer P2Y12 inhibitor for 1 month after the ACS. One month after the ACS, eligible patients were then randomly assigned in a 1:1 ratio to receive an FDC of aspirin 75 mg plus clopidogrel 75 mg (switched DAPT) or continuation of aspirin plus continuation of newer P2Y12 blocker (unchanged DAPT with same treatment than before randomization). The randomization was performed independently of platelet inhibition status, with the investigators blinded to PRI-VASP results. The randomization sequence was computer generated at Timone Hospital, and patients' allocations were kept in sequentially numbered sealed envelopes. Group allocation was issued by the secretarial staff of the research department at Timone Hospital. Treatment During the index admission, a 300-mg loading dose of aspirin was given to patients who were treatment-naive before the study. All patients were pre-treated with a loading dose of ticagrelor 180 mg or prasugrel 60 mg before PCI. Regarding the PCI, the use of second-and third-generation drug-eluting stents was recommended. At the discretion of the attending physician, patients were discharged on ticagrelor 90 mg twice a day or prasugrel 10 mg daily in addition to aspirin. At 1-month patients were randomly assigned to either continue with the standard regimen of 75 mg of aspirin plus newer P2Y12 blocker (unchanged DAPT) or receive a single tablet FDC of aspirin 75 mg plus clopidogrel 75 mg (switched DAPT). To reduce the risk of bleeding, use of radial access, proton-pump inhibitors, and access site closure devices (when PCI was undertaken via the femoral artery) were recommended but not mandatory. Other cardiac medications were given according to local guidelines. Follow-up and endpoint assessments The primary endpoint of this analysis aimed to evaluate the impact of on-treatment platelet reactivity on clinical outcomes in both groups (unchanged and switched DAPT). The primary endpoint was a composite of cardiovascular death, unplanned hospitalization leading to urgent coronary revascularization, stroke, and bleeding episodes as defined by the BARC classification ≥2 at 1 year after ACS [START_REF] Mehran | Standardized bleeding definitions for cardiovascular clinical trials: a consensus report from the Bleeding Academic Research ConsortiumCirculation[END_REF]. This combination of both ischemic and bleeding events was defined as net clinical benefit. Each of the components was also evaluated independently, as well as the composite of all ischemic events and all BARC bleeding episodes. Factors associated with LTPR status were determined. Unplanned revascularization was defined as any unexpected coronary revascularization procedure (PCI or coronary artery bypass graft surgery) during the follow-up period. 
Stroke diagnosis was confirmed by a treating neurologist. Computed tomography or magnetic resonance imaging was used to distinguish ischemic from hemorrhagic stroke. All data were collected prospectively and entered into a central database. Clinical follow-up was planned for 1 year after the index event or until the time of death, whichever came first. After collection, data were analyzed by a physician at our institution dedicated to study follow-up. Platelet inhibition evaluation Platelet reactivity was measured using the VASP index. Blood samples for VASP index analysis were drawn by atraumatic venipuncture of the antecubital vein. Blood was taken at least 6 h after ticagrelor intake and 12 h after prasugrel intake. The initial blood drawn was discarded to avoid measuring platelet activation induced by needle puncture; blood was collected into a Vacutainer (Becton Dickinson, New Jersey) containing 3.8% trisodium citrate and filled to capacity. The Vacutainer was inverted 3 to 5 times for gentle mixing and sent immediately to the hemostasis laboratory. VASP phosphorylation analysis was performed within 24 h of blood collection by an experienced investigator using the CY-QUANT VASP/P2Y12 enzyme-linked immunosorbent assay (Biocytex, Marseille, France) [START_REF] Schwarz | EigenthalerFlow cytometry analysis of intracellular VASP phosphorylation for the assessment of activating and inhibitory signal transduction pathways in human platelets-definition and detection of ticlopidine/clopidogrel effectsThromb[END_REF]. Briefly, after a first step of parallel whole blood sample activation with prostaglandin E1 (PGE1) and PGE1 + adenosine diphosphate (ADP), platelets from the sample are lysed, allowing released VASP to be captured by an antihuman VASP antibody, which is coated in the microtiter plate. Then, a peroxidase-coupled antihuman VASP-P antibody binds to the phosphorylated serine 239 antigenic determinant of VASP. The bound peroxidase is then revealed by its activity on tetramethylbenzidine substrate over a pre-determined time. After stopping the reaction, absorbance at 450 nm is directly related to the concentration of VASP-P contained in the sample. The VASP index was calculated using the optical density (OD, 450 nm) of samples incubated with PGE1 or PGE1 + ADP according to the formula: PRI-VASP (%) = [(OD(PGE1) − OD(PGE1 + ADP)) / OD(PGE1)] × 100. Maximal platelet reactivity was defined as the maximal PRI reached during the study. Ethics The ethics committee at our institution approved the study protocol, and we obtained written informed consent for participation in the study. We honored the ethical principles for medical research involving human subjects as set out in the Declaration of Helsinki. The data management and statistical analysis were performed by the research and development section, Cardiology Department, Timone Hospital (Marseille, France). Statistical analysis All calculations were performed using SPSS version 20.00 (IBM Corporation, Armonk, New York) and GraphPad Prism version 7.0 (GraphPad Software, San Diego, California). Baseline characteristics of subjects with and without LTPR were compared. Because randomization was not stratified by LTPR status, baseline characteristics were compared among subjects with and without LTPR by treatment assignment. Continuous variables were reported as mean ± SD or as median (interquartile range), according to their distribution, and categorical variables were reported as count and percentage.
Standard 2-sided tests were used to compare continuous variables (Student t or Mann-Whitney U tests) or categorical variables (chi-square or Fisher exact tests) between patient groups. Multivariate regression models were used to evaluate the linear association between LTPR status (dependent variable) and clinical characteristics (independent variable) using binary logistic regression. The primary analysis was assessed by a modified intention-to-treat analysis. Percentages of patients with an event were reported. We analyzed the primary and secondary endpoints by means of a Cox model for survival analysis, with time to first event used for composite endpoints, and results reported as hazard ratio (HR) and 95% confidence interval (CI) for switched DAPT versus unchanged DAPT. Survival analysis methods were used to compare outcomes by treatment assignment (unchanged DAPT vs. switched DAPT) and by presence or absence of LTPR. Hazard ratios (HRs) were adjusted to the factors independently associated with LTPR status. Areas under the receiver-operating characteristic curve were determined using MedCalc Software version 12.3.0 (Ostend, Belgium). According to the receiver-operating characteristic curve, the value of PRI-VASP exhibiting the best accuracy was chosen as the threshold. This study is registered with ClinicalTrials.gov (NCT02099422). Results Baseline Between March 2014 and May 2016, 646 patients were enrolled; 323 patients were randomly assigned to the switched DAPT group, and 323 patients were randomly assigned to the unchanged DAPT group. Follow-up at 1 year was performed for 316 (98.1%) patients in the switched DAPT group and 318 (98.5%) in the unchanged DAPT group (Figure 1). The median follow-up for both groups was 359 days, and the mean follow-up was 355 days in the switched DAPT group versus 356 days in the unchanged DAPT group. The characteristics of the studied cohort are summarized in Table 1. Patients with LTPR had lower body mass index (BMI) and were less often diabetic (Table 1). Platelet reactivity testing was performed for all patients, and results were available for 644 (99.7%) patients. Values are n (%) or mean ± SD. Platelet inhibition In the whole cohort, 1 month after ACS, mean PRI-VASP was 26.1 ± 18.6%, corresponding to 27.3 ± 19.4% in the switched arm versus 25.0 ± 17.7% in the unchanged arm (p = 0.12). A total of 305 patients (47%) were classified as LTPR, corresponding to 151 (47%) patients in the switched arm and 154 (48%) patients in the unchanged arm (p = 0.84). Patients on ticagrelor had a significantly lower platelet reactivity and higher incidence of LTPR than did patients on prasugrel (mean PRI-VASP: 22.2 ± 18.7% vs. 29.1 ± 18.0%; p < 0.01; and 167 [55%] vs. 139 [45%]; p < 0.01, respectively) (Figure 2). Factors associated with LTPR status LTPR patients were older (p = 0.05), had lower BMI (p < 0.01), were less often diabetic (p = 0.01), and were more often on ticagrelor (p < 0.01). In multivariate analysis, BMI (p < 0.01), diabetes (p = 0.01), and ticagrelor treatment (p < 0.01) remained associated with LTPR. Clinical outcomes Results of the TOPIC study have been previously published and showed a significant reduction in the primary composite endpoint on switched DAPT strategy driven by a reduction in bleeding complications (9.3% vs. 23.5%; p < 0.01) without differences in ischemic endpoints (9.3% vs. 11.5%; p = 0.36). 
Effect of LTPR on clinical outcomes in both randomized arms Unchanged arm At 1-year follow-up, in the unchanged arm the rate of primary endpoint occurred in 51 (33.1%) patients defined as LTPR and in 34 (20.1%) patients defined as no LTPR (p = 0.01) (Table 2 and Figure 3). Bleeding events defined as BARC ≥2 occurred in 28 (18.2%) LTPR patients and in 20 (11.8%) non-LTPR patients (p = 0.19) (Table 3 and Figure 4), while bleeding events defined as all BARC occurred in 41 (26.6%) LTPR patients and in 35 (20.7%) non-LTPR patients (p = 0.39) (Table 4). Any ischemic endpoint occurred in 23 (14.9%) LTPR patients and in 14 (8.3%) non-LTPR patients (p = 0.04) (Table 5, Figure 5). Abbreviations as in Table 2. Switched arm Differently from the unchanged group, at 1-year follow-up, in the switched arm, the rate of primary endpoint was not significantly different and occurred in 18 (11.9%) LTPR patients and in 25 (14.6%) non-LTPR patients (p = 0.45) (Table 2, Figure 3). Bleeding events defined as BARC ≥2 occurred in 8 (5.3%) LTPR patients and in 5 (2.9%) non-LTPR patients (p = 0.29) (Table 3, Figure 4), while bleeding events defined as all BARC occurred in 19 (12.6%) LTPR patients and in 11 (6.4%) non-LTPR patients (p = 0.046) (Table 4). Any ischemic endpoint occurred in 10 (6.6%) LTPR patients and in 20 (11.7%) non-LTPR patients (p = 0.11) (Table 5, Figure 5). Impact of LTPR on benefit of switching strategy Patients with LTPR In LTPR patients, the rate of primary endpoint at 1 year was significantly lower after switching and occurred in 18 (11.9%) patients in the switched arm and in 51 (33.1%) patients in the unchanged arm (p < 0.01) (Table 2, Figure 3). This benefit on primary endpoint was related to lower incidence of both bleeding and ischemic complications. Indeed, the rate of bleeding BARC ≥2 occurred in 8 (5.3%) LTPR patients in the switched arm and in 28 (18.2%) LTPR patients in the unchanged arm (p < 0.01) (Table 3, Figure 4). Also, the rate of all BARC bleeding occurred in 19 (12.6%) patients in the switched arm and in 41 (26.6%) patients in the unchanged arm (p < 0.01) (Table 4). Finally, the rate of any ischemic endpoint occurred in 10 (6.6%) LTPR patients in the switched arm and in 23 (14.9%) LTPR patients in the unchanged arm (adjusted HR: 0.39; 95% CI: 0.18 to 0.85; p = 0.02) (Table 5, Figure 5). Patients without LTPR In patients without LTPR the rate of primary endpoint at 1 year was not significantly different but was numerically lower in patients in the switched group compared with the unchanged group: 25 (14.6%) patients versus 34 (20.1%) patients, respectively (p = 0.39) (Table 2, Figure 3). However, the risk of bleeding was, as LTPR patients, significantly lower in the non-LTPR patients after switching. Indeed, the rate of bleeding BARC ≥2 occurred in 5 (2.9%) non-LTPR patients in the switched arm and in 20 (11.8%) non-LTPR patients in the unchanged arm (p < 0.01) (Table 3 and Figure 4) and the rate of all BARC bleedings occurred in 11 (6.4%) patients in the switched arm and in 35 (20.7%) patients in the unchanged arm (p < 0.01) (Table 4). Finally, any ischemic endpoint occurred in 20 (11.7%) patients in the switched arm and in 14 (8.3%) patients in the unchanged arm (adjusted HR: 1.67; 95% CI: 0.81 to 3.45; p = 0.17) (Table 5, Figure 5). Discussion The main finding of our study is that the benefit of a switching DAPT strategy on bleeding prevention is observed regardless of a patient's biological response to newer P2Y12 blockers. 
Indeed, the switched strategy allows reduction of bleeding complications without apparent increase in ischemic complications in both the LTPR and the non-LTPR groups. However, benefit of switched DAPT was greater in LTPR patients, who had impaired prognosis with unchanged DAPT but similar rate of adverse events with a switched DAPT strategy. In patients treated with DAPT, the relationship between platelet reactivity and clinical outcomes has been extensively investigated in clopidogrel-treated patients [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Indeed, resistance to clopidogrel is frequent and defined by an HTPR [START_REF] Stone | ADAPT-DES InvestigatorsPlatelet reactivity and clinical outcomes after coronary artery implantation of drug-eluting stents (ADAPT-DES): a prospective multicentre registry studyLancet[END_REF][START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Aradi | Bleeding and stent thrombosis on P2Y12inhibitors: collaborative analysis on the role of platelet reactivity for risk stratification after percutaneous coronary intervention[END_REF]. Newer P2Y12 blockers are characterized by stronger and more predictable platelet inhibition in comparison with clopidogrel (2,3). Both ticagrelor and prasugrel proved, in large randomized trials, their clinical superiority over clopidogrel after ACS [START_REF] Wiviott | TRITON-TIMI 38 InvestigatorsPrasugrel versus clopidogrel in patients with acute coronary syndromesN[END_REF][START_REF] Wallentin | Ticagrelor versus clopidogrel in patients with acute coronary syndromesN[END_REF]. Although resistance to newer P2Y12 blockers is infrequently observed, significant rates of hyper-responders emerged [START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. This status, defined as LTPR, has been later associated with increased risk of bleeding events on DAPT [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF][START_REF] Aradi | Bleeding and stent thrombosis on P2Y12inhibitors: collaborative analysis on the role of platelet reactivity for risk stratification after percutaneous coronary intervention[END_REF][START_REF] Bonello | Relationship between post-treatment platelet reactivity and ischemic and bleeding events at 1-year follow-up in patients receiving prasugrelJ[END_REF]. 
Our study confirmed that biological hyper-response to DAPT is frequent on newer P2Y12 blockers, with 47% of the patients defined as LTPR, using the definition validated by our group on a large cohort of ACS patients [START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. We also confirmed the significant association between LTPR and bleeding complications. Moreover, we observed that patients defined as LTPR on newer P2Y12 blockers had worse outcomes if they were maintained on their original "unchanged" DAPT regimen, whereas after switching a similar benefit was observed between LTPR and non-LTPR patients. Surprisingly, we noticed a trend in favor of the higher incidence of ischemic complications in LTPR patients who remained on unchanged DAPT. In the switched arm, LTPR was associated with nonsignificant reduction in ischemic events, which is in line with stronger platelet inhibition levels. We might hypothesize that hyper-responders maintained on newer P2Y12 blockers were exposed to ischemic complications following DAPT change or nonadherence due to side effects such as minor bleedings or ticagrelor-associated dyspnea as well as a play of chance that cannot be excluded. Despite the strong prognostic value of platelet function testing, strategies aiming to tailor DAPT according to individual platelet inhibition failed to prove significant clinical benefit [START_REF] Price | GRAVITAS InvestigatorsStandard-vs high-dose clopidogrel based on platelet function testing after percutaneous coronary intervention: the GRAVITAS randomized trialJAMA[END_REF][START_REF] Trenk | A randomized trial of prasugrel versus clopidogrel in patients with high platelet reactivity on clopidogrel after elective percutaneous coronary intervention with implantation of drug-eluting stents: results of the TRIGGER-PCI (Testing Platelet Reactivity In Patients Undergoing Elective Stent Placement on Clopidogrel to Guide Alternative Therapy With Prasugrel) studyJ[END_REF][START_REF] Collet | ARCTIC InvestigatorsBedside monitoring to adjust antiplatelet therapy for coronary stentingN[END_REF][START_REF] Cayla | ANTARCTIC investigatorsPlatelet function monitoring to adjust antiplatelet therapy in elderly patients stented for an acute coronary syndrome (ANTARCTIC): an open-label, blinded-endpoint, randomised controlled superiority trialLancet[END_REF]. All these studies included mostly patients treated with clopidogrel, or prasugrel last, and aimed to adjust the molecule or the dose according to platelet function. Three of 4 trials aimed to correct poor response to clopidogrel (14-16), whereas only 1 trial did adjust the DAPT regimen according to hyper-response in elderly patients only (>75 years of age) treated with a 5-mg dosage of prasugrel [START_REF] Cayla | ANTARCTIC investigatorsPlatelet function monitoring to adjust antiplatelet therapy in elderly patients stented for an acute coronary syndrome (ANTARCTIC): an open-label, blinded-endpoint, randomised controlled superiority trialLancet[END_REF]. However, it seems that ticagrelor is associated with higher rates of hyper-response than prasugrel is. Consequently, it is possible that platelet function testing may have a role in the management of selected patients treated with ticagrelor after ACS who are at risk of developing hyper-response (i.e., older patients, with low BMI, nondiabetic). 
Because no large study assessing the benefit of treatment adaptation based on platelet function has been conducted on ticagrelor so far, it is possible that higher rates of hyper-response make relevant the use of platelet function testing in this setting. The next challenge could be to identify which patients will benefit from platelet testing and treatment adaptation in case of hyper-response. However, in our study, benefit of switching DAPT was observed also in non-LTPR patients, which could mitigate the usefulness of platelet testing and reserve it to selected candidates after ACS (such as nondiabetics and lower BMI). Moreover, despite the fact that the recommended DAPT duration after ACS is 12 months [START_REF] Roffi | Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation: Task Force for the Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology[END_REF], there is evidence that shorter DAPT duration could be safe after ACS in selected patients [START_REF] Naber | LEADERS FREE InvestigatorsBiolimus-A9 polymer-free coated stent in high bleeding risk patients with acute coronary syndrome: a Leaders Free ACS sub-studyEur[END_REF] and therefore benefit of the switched strategy would be less substantial, whereas P2Y12 blockers could be stopped after 1 to 3 months. Nevertheless, this strategy of short DAPT after ACS does not apply to all patients but is reserved to very high bleeding risk ACS patients [START_REF] Roffi | Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation: Task Force for the Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology[END_REF]. Nevertheless, reduced platelet inhibition potency from 1 to 12 months could maintain ischemic protection while reducing the risk of bleeding as demonstrated in TOPIC study [START_REF] Cuisset | Benefit of switching dual antiplatelet therapy after acute coronary syndrome: the TOPIC (timing of platelet inhibition after acute coronary syndrome) randomized studyEur[END_REF]. The effect of switching from a newer P2Y12 inhibitor to clopidogrel on platelet inhibition has been assessed in crossover studies [START_REF] Gurbel | Response to ticagrelor in clopidogrel nonresponders and responders and effect of switching therapies: the RESPOND studyCirculation[END_REF][START_REF] Kerneis | Switching acute coronary syndrome patients from prasugrel to clopidogrelJ[END_REF][START_REF] Deharo | Effectiveness of switching 'hyper responders' from prasugrel to clopidogrel after acute coronary syndrome: the POBA (Predictor of Bleeding with Antiplatelet drugs) SWITCH studyInt[END_REF][START_REF] Pourdjabbar | CAPITAL InvestigatorsA randomised study for optimising crossover from ticagrelor to clopidogrel in patients with acute coronary syndrome[END_REF]. These studies have shown that switching to clopidogrel is associated with a reduction of platelet inhibition and an increase in rates of HTPR. 
Therefore, the concern may be that some of the patients switched will have insufficient platelet inhibition on clopidogrel and will be exposed to increased risk of ischemic recurrence. However, the reduced potency of DAPT offered by our switching strategy, 1 month after ACS in patients free of adverse events, was not associated with an increased risk of ischemic events, compared with an unchanged DAPT strategy (4). There is also evidence that 80% of stent thrombosis will occur within the first month after stent implantation [START_REF] Palmerini | Long-term safety of drug-eluting and bare-metal stents: evidence from a comprehensive network meta-analysisJ[END_REF]; it is likely that after this time point the impact of resistance to clopidogrel on stent thrombosis incidence is less critical. Finally, the large ongoing TROPICAL-ACS (Testing Responsiveness To Platelet Inhibition On Chronic Antiplatelet Treatment For Acute Coronary Syndromes) study will provide important additional information about both the concept of evolutive DAPT with switch as well as the value of platelet function testing to guide it [START_REF] Sibbing | TROPICAL-ACS InvestigatorsA randomised trial on platelet function-guided deescalation of antiplatelet treatment in ACS patients undergoing PCI. Rationale and design of the Testing Responsiveness to Platelet Inhibition on Chronic Antiplatelet Treatment for Acute Coronary Syndromes (TROPICAL-ACS) trialThromb Haemost[END_REF]. This trial will randomize 2,600 ACS patients to standard prasugrel treatment or de-escalation of antiplatelet therapy at 1 week with a switch to clopidogrel. This de-escalation group will undergo platelet testing 2 weeks after switching with a switch back to prasugrel in case of low response [START_REF] Sibbing | TROPICAL-ACS InvestigatorsA randomised trial on platelet function-guided deescalation of antiplatelet treatment in ACS patients undergoing PCI. Rationale and design of the Testing Responsiveness to Platelet Inhibition on Chronic Antiplatelet Treatment for Acute Coronary Syndromes (TROPICAL-ACS) trialThromb Haemost[END_REF]. Study limitations First, it was an open-label study. Nevertheless, all events for which medical attention was sought were adjudicated by a critical events committee unaware of treatment allocation. However, self-reported bleeding episodes and treatment discontinuation, for which patients did not consult a health care professional, were subjective. In case of adverse event reporting or treatment modification, the letters from general practitioners and medical reports were collected and analyzed. Second, this is a post hoc analysis of a randomized trial with inherent bias. Third, we used only the PRI-VASP assay to assess platelet inhibition. However, it is recognized as the most reliable assessment of platelet inhibition, being the only test that specifically measures P2Y12 receptor activity [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF]. Fourth, by protocol we did not reassess platelet inhibition after switching and then could not determine the prognosis and frequency of patients defined as HTPR after switching. Last, initial population calculation was made to compare switched versus unchanged strategy and therefore, the platelet reactivity analysis was underpowered for clinical outcomes and could only be considered as hypothesis generating. 
Conclusions
Our data suggest that in patients on aspirin plus ticagrelor or prasugrel without evidence of an adverse event in the first month following an ACS, switching the DAPT strategy to aspirin plus clopidogrel is beneficial regardless of biological platelet inhibition status. However, switching DAPT is particularly effective in hyper-responders. Indeed, hyper-response is associated with worse clinical outcomes under unchanged DAPT, which was corrected by a switched DAPT strategy. Therefore, platelet testing could help tailor DAPT 1 month after a coronary event, biological hyper-response being one more argument to switch DAPT. Further randomized evaluations are necessary to validate antiplatelet regimen adaptation in case of biological hyper-response to P2Y12 blockers.
Perspectives
WHAT IS KNOWN? "Newer" P2Y12 blockers (i.e., prasugrel and ticagrelor) have a more pronounced inhibitory effect on platelet activation and have proved their superiority over clopidogrel, in association with aspirin. The TOPIC study suggested that switching from ticagrelor or prasugrel plus aspirin to a fixed-dose combination (FDC) of aspirin and clopidogrel (switched DAPT), 1 month after ACS, was associated with a reduction in bleeding complications, without an increase in ischemic events at 1 year.
WHAT IS NEW? Biological hyper-response to a newer P2Y12 blocker is frequent and affects almost one-half of ACS patients. The benefit of a switched DAPT strategy is observed regardless of a patient's biological response to newer P2Y12 blockers. However, the benefit of switched DAPT is greater in hyper-responders, who have an impaired prognosis with unchanged DAPT, whereas switching the DAPT strategy significantly reduces the risk of bleeding and ischemic events at 1 year in this cohort.
WHAT IS NEXT? The next challenge will be to identify which patients will benefit from platelet testing and treatment adaptation in case of hyper-response to a newer P2Y12 blocker after ACS.
Abbreviations: BMI = body mass index; BMS = bare-metal stent(s); BVS = bioresorbable vascular scaffold; CAD = coronary artery disease; DES = drug-eluting stent(s); EF = ejection fraction; HDL = high-density lipoprotein; LDL = low-density lipoprotein; LTPR = low on-treatment platelet reactivity; RAS = renin-angiotensin system; PPI = proton pump inhibitors; STEMI = ST-segment elevation myocardial infarction; NSTEMI = non-ST-segment elevation myocardial infarction; UA = unstable angina.
Table 1. Clinical Characteristics and Treatment at Baseline
                       Whole Cohort (N = 646)   LTPR (n = 306)   Non-LTPR (n = 340)   p Value
Male                   532 (82)                 247 (81)         285 (84)             0.18
Age, yrs               60.1 ± 10.2              60.9 ± 10.3      59.3 ± 10.1          0.05
BMI, kg/m2             27.2 ± 4.5               26.3 ± 4.0       28.0 ± 4.7           <0.01
Medical history
  Hypertension         313 (49)                 148 (48)         165 (49)             0.52
  Type II diabetes     177 (27)                 68 (22)          109 (32)             <0.01
  Dyslipidemia         283 (44)                 137 (45)         146 (43)             0.35
  Current smoker       286 (44)                 126 (41)         160 (47)             0.08
  Previous CAD         197 (31)                 89 (29)          108 (32)             0.26
Treatment
  Beta-blocker         445 (69)                 221 (72)         224 (66)             0.05
  RAS inhibitor        486 (75)                 224 (73)         262 (77)             0.21
  Statin               614 (95)                 292 (95)         322 (95)             0.41
  PPI                  639 (99)                 303 (99)         336 (99)             0.81
Antiplatelet agent                                                                    <0.01
  Ticagrelor           276 (43)                 167 (55)         109 (32)
  Prasugrel            370 (57)                 139 (45)         231 (68)
Table 2. Primary Endpoint Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value)
Table 3. Bleeding BARC ≥2 Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value)
Table 4. Bleeding All BARC Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value)
Table 5. Any Ischemic Endpoint Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value)
Acknowledgments
The authors thank their nurse team and technicians for their help in executing this study.
01762193
en
[ "shs" ]
2024/03/05 22:32:13
2005
https://hal.science/cel-01762193/file/Fl.O%27Connor.pdf
Paul Carmignani
FLANNERY O'CONNOR'S COMPLETE STORIES
"The two circumstances that have given character to my own writing have been those of being Southern and being Catholic."
"Art is never democratic; it is only for those who are willing to undergo the effort needed to understand it."
"We all write at our own level of understanding." (Mystery and Manners)
Hence the thought-provoking consequence that we all read at our own level of understanding; the aim of these lectures is consequently to raise our own level…
Books by Flannery O'Connor
A BIOGRAPHICAL SKETCH
In spite of the author's warning, « If you're studying literature, the intentions of the writer have to be found in the work itself, and not in his life » (126), we'll conform to tradition and follow the usual pattern in literary studies, i.e. deal with OC's life in relation to her work, for the simple reason that, as in most cases, the person and the writer are inextricably entwined. To sum up the gist of what is to follow: « The two circumstances that have given character to my own writing have been those of being Southern and being Catholic. » The two essential components of O'C's worldview and work are her Southernness and her Catholicism.
Mary Flannery (she dropped her first name Mary on the grounds that nobody "was likely to buy the stories of an Irish Washerwoman") was born in Savannah, Georgia, on 25 March 1925; she was the only child of Edward Francis and Regina Cline O'Connor, a Catholic family. Her father, a real estate broker who encountered business difficulties, encouraged her literary efforts; her mother came from a prominent family in the State. The region was part of the Christ-haunted Bible Belt of the Southern States, and the spiritual heritage of the section profoundly shaped OC's writing, as you all know by now. Her first years of schooling took place at the Cathedral of St. John the Baptist, across from her home; she attended mass regularly. In 1938, after her father had fallen gravely ill, the family moved to Milledgeville, her mother's birthplace and the place where she lived for the rest of her life. Her father died in 1941. In 1945 she graduated from the Georgia State College for Women with a major in social science. Later she went to the State University of Iowa and earned a Master of Fine Arts degree in Literature. She attended the Iowa Writers' Workshop conducted by Paul Engle and Allen Tate, who was to become a lifelong friend. She published her first story, "The Geranium", while she was still a student. She won the Rinehart-Iowa Fiction Award for the publication of four chapters of what was to become Wise Blood. Later on, she was several times to win first place in the O. Henry contest for the best short story of the year. After graduating, she spent the fall of 1947 as a teaching assistant while working on her first novel, Wise Blood, under the direction of Engle. In early 1948, she moved to Yaddo, an artist colony in upstate New York, where she continued to work on her novel. There, she became acquainted with the prominent poet Robert Lowell and the influential critic Alfred Kazin. She spent a year in New York with the Fitzgeralds, from the spring of 1949 to the Christmas of 1950.
Early in 1951 she was diagnosed with lupus erythematosus, the disease that had taken her father's life in 1941; she accepted the affliction with grace, viewing it as a necessary limitation that allowed her to develop her art. Consequently, she retreated permanently to the South; for the next 13 years O'C lived as a semi-invalid with her mother at Andalusia, their farm house a few miles outside Milledgeville, surrounded by her famous peacocks and a whole array of animals (pheasants, swans, geese, ducks and chickens). She wrote each morning for three hours, wrote letters, and took trips with her mother into town for lunch. She read such thinkers as Pierre Teilhard de Chardin, George Santayana and Hannah Arendt. She held no jobs, subsisting solely on grants (from the National Institute of Arts and Letters and the Ford Foundation, for instance), fellowships and royalties from her writing. Though her lupus confined her to home and she had to use crutches, she was able to travel to give interviews and lecture at a number of colleges throughout the '50s. In 1958, she even managed a trip to Lourdes and then to Rome for an audience with the Pope. An abdominal operation reactivated the lupus and O'C died on August 3rd, 1964, at the age of 39.
Although she managed to establish her reputation as a major writer by the end of the '50s and was regarded as a master of the short story, O'C grew increasingly disheartened in her attempt to make "the ultimate reality of the Incarnation real for an audience that has ceased to believe." This article of faith and her belief in Christ's resurrection account for her way of seeing the universe. However, her work is mostly concerned with Protestants, a paradox she explained in the following way: "I can write about Protestant believers better than Catholic believers - because they express their belief in diverse kinds of dramatic action which is obvious enough for me to catch. I can't write about anything subtle." An important milestone in her literary career and reputation was the publication of Mystery and Manners, edited by Sally and Robert Fitzgerald in 1969. Through the '70s, this collection of essays became the chief lens for O'C interpretation. In 1979, the same editors published a collection of O'C's letters, The Habit of Being. From beyond the grave, O'C herself became the chief influence on O'C criticism.
To sum up: O'C led a rather uneventful life that was focused almost exclusively on her vocation as a writer and devotion to her Catholic faith. All told, OC's was a somewhat humble life, yet quite in keeping with her literary credo: "The fact is that the materials of the fiction writer are the humblest. Fiction is about everything human and we are made out of dust, and if you scorn getting yourself dusty, then you shouldn't try to write fiction. It's not a grand enough job for you."
Influences: The Bible; St Augustine; Greek tragedy; G. Bernanos; T. S. Eliot; W. Faulkner; G. Greene; N. Hawthorne; Søren Kierkegaard; Gabriel Marcel; Jacques Maritain; F. Mauriac; E. A. Poe; Nathanael West.
THE SETTING OF F. O'CONNOR'S FICTION: THE SOUTH AND ITS LITERARY TRADITION
A sketchy background aiming to place OC's work in its appropriate context. But before doing that it is necessary to take up a preliminary problem, a rather complex one, i.e.
the question of the relationship between fiction and life, or the world around us, which will lead us to raise two fundamental questions: the first bears on the function and nature of literature; the second regards the identification of a given novelist with the section known as the South (in other words, is a novelist to be labelled a Southern writer on account of his geographical origin only?).
A) Function and nature of literature
A word of warning against a possible misconception or fallacy (a mistaken belief), i.e. the sociological approach to literature. Even if OC's fiction is deeply anchored to a specific time and place, even if fiction-writing is, according to OC herself, "a plunge into reality", beware of interpreting her stories as documents on the South or Southern culture. Such an approach, called the "sociological fallacy", takes it for granted that literature is a mirror of life, that all art aims at the accurate representation or imitation of life (a function subsumed under the name of mimesis, a term derived from Aristotle). That may be the case, and OC's fiction gives the reader a fairly accurate portrayal of the social structure and mores of the South, but it can't be reduced to the status of a document. Consequently, there is no need to embark upon a futile quest for exact parallels between the fictional world and the experiential world. A characteristic delusion exposed by critic J. B. Hubbell:
Many Northern and European and, I fear, Southern readers make the mistake of identifying Faulkner's fictitious Yoknapatawpha County with the actual state of Mississippi. It is quite possible that an informed historian could parallel every character and incident in Faulkner's great cycle with some person or event in the history of the state; and yet Faulkner's world is as dark a literary domain as Thomas Hardy's Wessex and almost as remote from real life as the Poictesme of James Branch Cabell or the No Man's Land of Edgar Allan Poe.
This is the main danger: the sociological dimension somewhat blurs, if not obliterates, the literary nature and quality (the literariness) of Southern fiction. As an antidote to the mistaken or illusory view that literature is a mirror of life, I'd like to advocate the view that the South also is a territory of the imagination. Just as one does not paint from nature but from painting, according to A. Malraux, one does not write books from actual life but from literature. Cf. also OC's statement that "The writer is initially set going by literature more than by life" (M&M, 45). Moreover, there exists now a fictitious South, a composite literary entity that owes its being to all the novels that were written about or around it: "What was once raw about American life has now been dealt with so many times that the material we begin with is itself a fiction, one created by Twain, Eliot, or…" Far from being a mere "transcript of life", literature aims, in W. Faulkner's own words, at "sublimating the actual into the apocryphal" (an apocryphal story is well-known but probably not true), which is just another way of claiming that literature creates its own reality. By selecting and rearranging elements from reality and composing them into an imaginative pattern, the artist gives them a meaningfulness and a coherence which they would otherwise not have possessed.
As an imaginative recreation of experience, the novel can thus, in and of itself, become a revolt against a world which appears to have no logical pattern. Another element bearing out the literariness of OC's short fiction is the play of intertextuality:
The referent of narrative discourse is never the crude fact, nor the dumb event, but other narratives, other stories, a great murmur of words preceding, provoking, accompanying and following the procession of wars, festivals, labours, time. And in fact we are always under the influence of some narrative, things have always been told us already, and we ourselves have always already been told. (V. Descombes, Modern French Philosophy, 186)
OC's texts take shape as a mosaic of quotations; they imitate, parody, transform other texts. Her stories reverberate with echoes from The Bible, or The Pilgrim's Progress, etc., which conclusively proves that one does not write from life or reality only but also from books:
[...] the creation of a book is a matter neither of topography, nor of a patchwork of small psychological facts, nor of dreary chronological determinations, nor even of the mechanical play of words and syntax. It cannot be circumscribed by the narrow psychology of the author, the field of psychoanalysis, nor by his "milieu" or his "moment", which sociology or history identify. The work depends on nothing; it inaugurates a world. [...] Is not the artist's task to transfigure, to transmute (the alchemist is the artist par excellence) crude and confused matter (materia grossa et confusa) into a gleaming metal? Every poet's conclusion must be that of Baudelaire: "You gave me mud and I turned it into gold." (Durand, 398)
Far from being a mere transcript of the social, economic and historical situation of the South, OC's work is essentially, like all forms of literary creation, a transmutation of anecdotal places and geographic areas into topoi*: "a transmutation of anecdotal places and geographical sites into topoi [...] every work is demiurgic: it creates, through words and sentences, a new earth and a new heaven" (Durand).
*Topos, topoi: from the Greek, literally "place", but the term came to mean a traditional theme (topic) or formula in literature.
Between life and literature there intervenes language and the imagination. Literature is basically a question of "words, commas and semi-colons", as S. Foote forcefully maintained.
B) Identification of a given novelist with the section
As for the second question (the "southernness" of such and such a novelist), I'd like to quote the challenging opinion of a French specialist, M. Gresset, who rightly maintains that:
It is by now fairly clear that the South is a province of the mind, which means not only that one can be a "Southerner" anywhere, but that the South as a province is to be found more or less wherever one wishes. [...] To be a Southerner would thus mean not only being a member of a minority condemned by History, but also being, as we now say, a loser. (Emphasis mine)
Consequently, one should be wary of enrolling under the banner of Southern literature such and such a writer just because he was born in Mississippi, Tennessee or Georgia. O'Connor set all her books in the South, consequently she is a regionalist, but she managed to endow her fiction with universality, i.e. to turn the particular into the universal.
As a Southerner born and bred, Flannery O'Connor is consistently labeled a regional writer, which in the eyes of many critics amounts to a limitation if not a liability. However, if Flannery O'Connor undoubtedly is of the South she is also in the South as one is in the human condition to share its greatness and baseness, its joys and sorrows, its aspirations and aberrations. In other words, her work and preoccupations transcend the limits of regionalism to become universal in their scope and appeal: "O'Connor writes about a South that resides in all of us. Her works force us toward parts of our personal and collectives histories we thought we had shed long ago." (R. K. -"What made the South ?" This essential question is no less complex than the former. Among the most often quoted "marks of distinctiveness" are to be found :  Geography: but far from forming a homogeneous geographical region, a unit, the South can be divided into 7 regions  too much diversity for geography to be a suitable criterion. Consequently, specialists resorted to another factor :  Climate: the weather is very often said to be the chief element that made the South distinctive. The South has long been noted for its mild winters, long growing seasons, hot summers and heavy rainfall. Climate has doubtless exerted a strong influence on the section, e.g. it slowed the tempo of living and of speech, promoted outdoor life, modified architecture and encouraged the employment of Negroes, etc. However, climate is a necessary but not a sufficient explanation.  Economy: the South used to be an agricultural region but it underwent a radical process of urbanization and industrialization that led to the Americanization of Dixie: economically, the South is gradually aligning itself with the North.  History: the South suffered evils unknown to the nation at large : slavery, poverty, military defeat, all of them un-American experiences: "the South is the region history has happened to" (R. Weaver, RANAM IX, 7) To sum up → The South is a protean* entity baffling analysis and definition : it can't be explained in terms of geography, climate or history, etc. yet it is all that and something more : "An attitude of mind and a way of behaviour just as much as it is a territory" (Simkins, IX). *having the ability to change continually in appearance or behaviour like the mythological character Proteus. Nevertheless, if it can be said there are many Souths, the fact remains that there is also one South. That is to say, it is easy to trace throughout the region (roughly delimited by the boundaries of the former Confederate States of America, but shading over into some of the border states, notably Kentucky, also) a fairly definite mental pattern, associated with a fairly definite social patterna complex of established relationships and habits of thought, sentiments, prejudices, standards and values, and associations of ideas which, if it is not common strictly speaking to every group of white people in the South, is still common in one appreciable measure or another, and in some part or another, to all but relatively negligible ones (W. J. Cash, The Mind of the South, 1969). Taking into account the literary history of the South and the fiction it gave rise to may help us to answer some of the question we've been dealing with. D) A Very Short Introduction to Southern Fiction A rough sketch of the literary history of a "writerly* region", focussing on the landmarks. 
*Writerly: of or characteristic of a professional author; consciously literary. The Southern literary scene was long dominated by Local-color fiction or Regionalism (a movement that emphasizes the local color or distinctive features of a region or section of the US). Local-color fiction was concerned with the detailed representation of the setting, dialect, customs, dress and ways of thinking and feeling which are characteristic of a particular region (the West, the Mississippi region, the South, the Midwest and New England). This movement or literary school was illustrated by the works of/Among local-colorists four names stand out: Joel Chandler Harris A period of paramount importance in the emergence of Southern literature was the debate over slavery and abolition just before Civil War (1861-1865). The controversy over the peculiar institution gave rise to a genuine Southern literature: The South found itself unable to accept much of the new literature which emanated from the Northern states. It then began half-consciously building up a regional literature, modeled upon English writers, which was also in part a literature of defense […] The South was more or less consciously building up a rival literary tradition. (Hubbell,133) Another fact for congratulation to the South is, that our people are beginning to write booksto build up a literature of our own. This is an essential prerequisite to the establishment of independence of thought amongst us. (G. Fitzhugh, 338) The most influential work of those troubled years was the anti-slavery plea, Uncle Tom's Cabin published in 1852 by Harriet Beecher Stowe (1811-1896), the first American best-seller. The period gave rise to the plantation novel whose archetype is Swallow Barn by John Pendleton Kennedy (1851). By the end of the "War between Brothers", the South could boast a number of good writers, but one more half-century was needed for Southern literature to come of age and establish a new tradition. The most prominent voice in the post-war period was M. Twain . Importance of The Adventures of Huckleberry Finn (1884) on the development of American prose: in a single step, it made a literary medium of the American language; its liberating effect on American writing was unique, so much so that both W. Faulkner and E. Hemingway made it the fountainhead of all American literature : All modern American literature comes from one book by Mark Twain called Huckleberry Finn. [...] All American writing comes from that. There was nothing before. There has been nothing as good since. (E. Hemingway, Green Hills of Africa, 26). The present-day popularity of Southern fiction can be accounted for by the fact that Americans have long been fascinated with the South as land of extremes, the most innocent part of America in one respect and the guiltiest in another; innocent, that is in being rustic or rural (there's in the latter observation an obvious hint of pastoralism: an idealized version of country life→the South seems to have embodied a certain ideal mixture of ruralism and aristocratic sophistication) yet guilty due to the taint of slavery and segregation. Anyway, what makes the South a distinctive region is that the South was at one time in American history not quite a nation within a nation, but the next thing to it. And it still retains some of the characteristics of that exceptional status. 
So does the fiction it/she gave rise to: "The Southern writer is marginal in being of a region whose history interpenetrates American moral history at crucial points". (F. J. Hoffman, The Modern Novel in America). Importance of historical experience and consciousness in the Southern worldview ; when the South was colonized, it was meant to be a paradise on earth, a place immune from the evils that beset Europe, a sort of blessed Arcadia (a region of Greece which became idealized as the home of pastoral life and poetry), but the tragedy of the South lies in the fact that it "is a region that history has happened to". And afterwards myth took over from history in order to make up for the many disappointments history brought about: The Old South emerges as an almost idyllic agricultural society of genteel people and aristocratic way of life now its history is transformed into the story of a fallen order, a ruined time of nobility and heroic achievements that was vanquished and irrevocably lost. In this way the actual facts of the old South have been translated by myth into a schemata of the birth, the flowering and the passing of what others in an earlier era might have called a Golden Age. (J. K. Davis) E) Southern literature It is not easy to sum up in one simple formula the main features of Southern fiction ; this is besides a controversial question. As a starting-point→ a tentative definition from a study entitled Three Modes of Southern Fiction : Among these characteristics [of Southern fiction] are a sense of evil, a pessimism about man's potential, a tragic sense of life, a deep-rooted sense of the interplay of past and present, a peculiar sensitivity to time as a complex element in narrative art, a sense of place as a dramatic dimension, and a thorough-going belief in the intrinsic value of art as an end in itself, with an attendant Aristotelian concern with forms and techniques (C. Hugh Holiman, Three Modes of Southern Fiction) Against this background, the features that are to be emphasized are the following : -A strong sense of place (with its corollary: loyalty to place) ; Sense of place or the spirit of place might be said to be the presiding genius of Southern fiction. Whereas much modern literature is a literature without place, one that does not identify itself with a specific region, Southern fiction is characterized by its dependence on place and a special quality of atmosphere, a specific idiom, etc. Novelist Thornton Wilder claims, rigthly or wrongly, that: "Americans are abstract. They are disconnected. They have a relation but it is to everywhere, to everybody, and to always" (C. Vann Woodward, The Search for Southern Identity, 22). According to him "Americans can find in environment no confirmation of their identity, try as they might." And again: "Americans are disconnected. They are exposed to all place and all time. No place nor group nor movement can say to them: we are waiting for you; it is right for you to be here." Cf. Also "We don't seem anchored to place [...] Our loyalties are to abstractions and constitutions, not to birthplace or homestead or inherited associations." (C. Vann Woodward) The insignificance of place, locality, and community for T. Wilder contrasts strikingly with the experience of E. Welty who claims that: "Like a good many other regional writers, I am myself touched off by place. The place where I am and the place I know […] are what set me to writing my stories." 
To her, "place opens a door in the mind," and she speaks of "the blessing of being located-contained." Consequently, "place, environment, relations, repetitions are the breath of their [the Southern States'] being." The Southern novel has always presupposed a strong identification with a place, a participation in its life, a sense of intense involvement in a fixed, defined society (involvement with a limited, bounded universe, South, 24) Place is also linked to memory; it plays another important rôle as archives (or record) of the history of the community : one of the essential motifs of Southern fiction is the exploration of the link between place and memory and truth. Here a quotation from E. Welty is in order : The truth in fiction depends for its life on place. Location is at the crossroads of circumstances, the proving ground of "What happened ? Who's here ? Who's coming ?" and that is the heart's field (E. Welty, 118). Place : it is a picture of what man has done and imagined, it is his visible past result (Welty, 129) Is it the fact that place has a more lasting identity than we have and we unswervingly tend to attach ourselves to identity ? (119) --A sense of Time The Southern novelist evinces a peculiar sensitivity to time as a complex element in narrative art (Three Modes of Sn Fiction); he/she shows a deep-rooted sense of the interplay of past and present. Southern fiction is in the words of Allen Tate "a literature conscious of the past in the present" (Ibid., 37) and "Southern novelists are gifted with a kind of historical perspective enabling them to observe the South and its people in time". Cf. W. Faulkner: "To me no man is himself, he is the sum of his past" (171). Concerning the importance of the past and of remembrance, two other quotations from Allen Tate are in order: After the war the South again knew the world… but with us, entering the world once more meant not the obliteration of the past but a heightened consciousnes of it (South, 36) The Southerners keep reminding us that we are not altogether free agents in the here and now, and that the past is part master" (South, 57) --A "cancerous religiosity"* *"The South with its cancerous religiosity" (W. Styron, Lie Down in Darkness) Cf. OC's statement: "I think it is safe to say that while the South is hardly Christ-centered, it is most certainly Christ-haunted" (M&M, 44). Existence of the Bible-Belt : an area of the USA, chiefly in the South, noted for religious fundamentalism. -A sense of evil, a certain obsession with the problem of guilt (cf. Lilian Smith's opinion: "Guilt was then and is today the biggest crop raised in Dixie") and moral responsibility bound up, of course, with the race issue, the Civil War, etc. : There is a special guilt in us, a seeking for something hadand lost. It is a consciousness of guilt not fully knowable, or communicable. Southerners are the more lonely and spiritually estranged, I think, because we have lived so long in an artificial social system that we insisted was natural and right and just -when all along we knew it wasn't (McGill) -Another distinctive feature: the tradition of the folktale and story-telling which is almost as old as the South itself ; I won't expand on this feature and limit myself to a few quotes: I think there's a tradition of story-telling and story-listening in the South that has a good deal to do with our turning to writing as a natural means of pressing whatever it is we've got bubbling around inside us. (S. Foote) The South is a story-telling section. 
The Southerner knows he can do more justice to reality by telling a story than he can by discussing problems or proposing abstraction. We live in a complex region and you have to tell stories if you want to be anyway truthfuil about it, (F. O'Connor) Storytelling achieved its ultimate height just before the agricultural empire was broken down and the South became industrialized. That's where storytelling actually flowered (E. Caldwell) In the world of Southern fiction people, places and things seem to be surrounded by a halo of memories and legends waiting to get told. People like to tell stories and this custom paves the way for would-be novelists. Hence too, the importance of Voice: not an exclusively Southern feature but most Southern novels are remarkable for the spoken or speech quality of their prose/style : For us prose fiction has always been close to the way people talkmore Homeric than Virgilian. It presumes a speaker rather than a writer. It's that vernacular tone that is heard most often in contemporary Southern fiction. No wonder then all these factors should result in the fact that: "The Southerner has a great sense of the complexities of human existence" (H. Crews) -He is endowed with a sense of distinctiveness and prideful difference. That sense stems from the conviction that the South is section apart from the rest of the United States. The History of the section shows that such a conviction is well-founded for it comprises many elements that seem to be atypical in American history at large, cf. C. Vann Woodward's opinion : In that most optimistic of centuries in the most optimistic part of the world [i.e. the USA at large], the South remained basically pessimistic in its social outlook and its moral philosophy. The experience of evil and the experience of tragedy are parts of the Southern heritage that are as difficult to reconcile with the American legend of innocence and social felicity as the experience of poverty and defeat are to reconcile with the legends of abundance and success (The Burden of Southern History, Baton Rouge, LSU, 1974, 21.) There are still numerous features that might be put forward to account for the distinctiveness or differentness of the South and Southern literature, but this is just a tentative approach. All these points would require qualification but they will do as general guidelines (cf. the bibliography if you wish to go into more detail). LECTURE SYMBOLIQUE, ALLEGORIQUE ET PARABOLIQUE Dans Le Livre à venir, le philosophe M. Blanchot déclare que « la lecture symbolique est probablement la pire façon de lire un texte littéraire » (125). On peut souscrire à cet anathème si l'on a du symbole une conception réductrice qui en fait une simple clé, une traduction, alors qu'en réalité c'est un travail (Bellemin-Noël, 66) et qu'en outre, comme nous le verrons, « la symbolique se confond avec la démarche de la culture humaine tout entière ». Qu'entendons-nous par là ? Tout simplement que, selon la belle formule de G. Durand, « L'anthropologie ne commence véritablement que lorsqu'on postule la profondeurs dans les "objets" des sciences de l'homme » (Figures Comme le précise le philosophe J. Brun : mythiques et visages Les véritables symboles ne sont pas des signes de reconnaissance, ce ne sont pas des messagers de la présence, mais bien des messagers de l'Absence et de la Distance. C'est pourquoi ce sont eux qui viennent à nous et non pas nous qui nous portons vers eux comme vers un but que nous aurions plus ou moins consciemment mis devant nous. 
Les symboles sont les témoins de ce que nous ne sommes pas ; si nous nous mettons à leur écoute, c'est parce qu'ils viennent irriguer nos paroles d'une eau dont nous serons à jamais incapables de faire jaillir la source. (81). Les symboles nous redonnent aussi cet état d'innocence où, comme l'exprime magnifiquement P. Ricoeur : « Nous entrons dans la symbolique lorsque nous avons notre mort derrière nous et notre enfance devant nous » (Le Conflit des herméneutiques). Tout symbole authentique possède trois dimensions concrètes ; il est à la fois : -"cosmique" (c'est-à-dire puise sa figuration dans le monde bien visible qui nous entoure) ; -"onirique" (c'est-à-dire s'enracine dans les souvenirs, les gestes qui émergent dans nos FROM WORDS TO THE WORD (i.e. GOD'S WORD) : FICTION-INTERTEXTUALITY-VARIATIONS ON INITIATION By way of introduction to OC's fictional universe, I'd like to discuss three statements : the first 2 by the author herself : All my stories are about the action of grace 7 on a character who is not very willing to support it (M&M, 25) : hence the reference to initiation We have to have stories in our background. It takes a story to make a story (Ibid., 202 ) : hence the reference to intertextuality and the third from a critic, R. Drake, who pointed out that : "Her range was narrow, and perhaps she had only one story to tell. […] But each time she told it, she told it with renewed imagination and cogency" : hence the reference to variations on the same theme. Those three observations will lead us to focus on the fundamental and interrelated questions or notions -interrelated that is in O'C's workthose of fiction-writing and intertextuality, initiation. I. Function & Aim of fiction according to OC "No prophet is accepted in his own country" (Luke 4 : 24) "Writing fiction is a moral occupation" (H. Crews) "Writing fiction is primarily a missionary activity" (O'Connor) Fiction with a religious purpose ("My subject in fiction is the action of grace in a territory held largely by the devil" M&M, 118) based on the use of parables in the Bible : "Therefore speak I to them in parables: because they seeing see not; and hearing they hear not, neither do they understand" (Matt. 13 : 13). OC's short-stories = variations on two key parables : 1. "Behold, a sower went forth to sow ; And when he sowed, some seeds fell by the way side, and the fowls came and devoured them up : Some fell upon stony places, where they had not much earth : and forthwith they sprung up, because they had no deepness of earth : And when the sun was up, they were scorched ; and because they had no root, they withered away. And some fell among thorns ; and the thorns sprung up, and choked them : But other fell into good ground, and brought forth fruit, some an hundredfold, some sixtyfold, some thirtyfold. Who hath ears to hear, let him hear. And the disciples came, and said unto him, Why speakest thou unto them in parables ? He answered and said unto them, Because it is given unto you to know the mysteries of the kingdom of heaven, but to them it is not given" (Matt. 13 : 3-11) " The kingdom of heaven is likened unto a man which sowed good seed in his field : But while men slept, his enemy came and sowed tares among the wheat, and went his way. But when the blade was sprung up, and brought forth fruit, then appeared the tares also. 7. 
The free and unmerited favour of God, as manifested in the salvation of sinners and the bestowal of blessings So the servants of the householder came and said unto him, Sir, didst not thou sow good seed in thy field ? from whence then hath it tares ? He said unto them, An enemy hath done this. The servants said unto him, Wilt thou then that we go and gather them up ? But he said, Nay ; lest while ye gather up the tares, ye root up also the wheat with them. Let both grow together until the harvest : and in the time of harvest I will say to the reapers, Gather ye together first the tares, and bind them in bundles to burn them: but gather the wheat into my barn." (Matt. 13 : 24-30) II. Initiation "Every hierophany is an attempt to reveal the Mystery of the coming together of God and man". All the narratives by OC are stories of initiation ("The bestowal of grace upon an unwilling or unsuspecting character" : a process, an operation, a pattern corresponding to what is called initiation) → an investigation into the constituents and characteristics of stories of initiation but first, what is initiation ? Origins of the term : initium : starting off on the way ; inire : enter upon, begin. Initiation as an anthropologic term means "the passage from childhood or adolescence to maturity and full membership in adult society" (Marcus, 189), which usually involves some kind of symbolic rite. "The Artificial Nigger" is a good example thereof. Held to be one of the most ancient of rites, an initiation marks the psychological crossing of a threshold into new territories, knowledge and abilities. The major themes of the initiation are suffering, death and rebirth. The initiate undergoes an ordeal that is symbolic of physically dying, and is symbolically reborn as a new person possessing new knowledge. In pagan societies, the initiation marks the entrance of the initiate into a closed and traditionally secret society; opens the door to the learning of ritual secrets, magic, and the development and use of psychic powers; marks a spiritual transformation, in which the initiate begins a journey into Self and toward the Divine Force; and marks the beginning of a new religious faith. Many traditional initiations exist so that the spiritual threshold may be crossed in many alternate ways; and, all are valid: the ritual may be formal or informal; may be old or new; may occur as a spontaneous spiritual awakening, or may even happen at a festival. 2. What is a story of initiation? In general, one can say that there is no single precise and universally applicable definition of stories of initiation in literary theory. There are some attempts to build a concise theory of the initiation-theme in literature : several aspects of initiation can be found in literature. First of all, initiation as a process in literary descriptions denotes the disillusioning process of the discovery of the existence of evil, which is depicted as a confrontation of the innocent protagonist with guilt and atonement and often has the notion of a shocking experience. This confrontation usually includes a progress in the protagonists character or marks a step towards self-under-standing. Thus, this type describes an episode which leads the protagonist to gaining insight and gaining in experience, in which this experience is generally regarded as an important stage towards maturity. The second type differs from the first in focusing on the result of the initiatory experience. 
This includes the loss of original innocence concerning the protagonist and is often compared to the biblical Fall of Men. Furthermore, this approach generally stresses the aspect of duality in the initiation process, which is the aspect of loss of innocence as a hurtful but necessary experience as well as the aspect of profit in gaining identity. The next aspect centers on the story of initiation as describing the process of self-discovery and self-realization, which basically means the process of individuation (the passage to maturity). From that point of view, an initiation story may be said to show its young protagonist experiencing a significant change of knowledge about the world or himself, or a change of character, or of both, and this change must point or lead him towards an adult world. It may or may not contain some form of ritual, but it should give some evidence that the change is at least likely to have permanent effects. The aspect of movement in stories of initiation : it plays an important role in many stories dealing with the initiation theme. Often, the inner process of initiation, the gaining of experience and insight, is depicted as a physical movement, a journey. This symbolic trip of the protagonist additionally supports the three-part structure, which is usually found in initiation stories. The threepart structure of initiation can shortly be described as the three stages of innocenceexperiencematurity. The motive of the journey reflects this structure, as the innocent protagonist leaves home (i.e. the secure place of childhood), is confronted with new situations, places and people on his journey and returns back home as a `new man′ himself, in a more mature state of mind. Also to be taken into account, the aspect of effect in stories of initiation which may be categorized according to their power and effect. Three types of initiation which help to analyze stories dealing with this topic: -First, some initiations lead only to the threshold of maturity and understanding, but do not definitely cross it. Such stories emphasize the shocking effect of experience, and their protagonists tend to be distinctly young. -Second, some initiations take their protagonists across a threshold of maturity and understanding but leave them enmeshed in a struggle for certainty. These initiations sometimes involve self-discovery. -Third, the most decisive initiations carry their protagonists firmly into maturity and understanding, or at least show them decisively embarked toward maturity. These initiations usually center on self-discovery. For convenience'sake, these types may be called tentative, uncompleted, and decisive initiations. As one can see, the change in the protagonist′s state of mind plays an important role for his definition. To analyze the dimension of effect usually also involves a consideration of the aspect of willfulness of the initiatory experience, as voluntary initiation experiences are more likely to have direct, permanent effect on the protagonist, whereas forced initiations may be rejected, or rather suppressed so that the effect may be not clearly distinguishable at first. Crucial, however, is the aspect of permanency of effect ("one may demand evidence of permanent effect on the protagonist before ascribing initiation to a story"), as it may prove difficult to provide evidence of this permanency. 
To sum up: Initiation → involves an ontological (dealing with the nature of being) mutation/metanoia : « Characteristics of initiation in O'C Almost systematically involves a journey or a trip→a journey of enlightenment (cf. "A Good Man is Hard to Find") The Initiate may be : an adolescent, an old man/woman ; an intellectual, etc. Key figure: initiation always involves an agent → the messager, the agent of the change (Negro, preacher, stranger, kids, a plaster figure, etc.) → In most of the stories, a visitor or a visit irrevocably alters the home scene and whatever prevailing view had existed. These visitors take various shapes: a one-armed tramp, three juveline arsonists (visitors/messengers often go in threes), a deranged escaped convict, etc. Place of initiation: river, woods, staircase. Landscape often fulfills the function of an actant; it isn't just a decor but it exerts an influence on what happens, on the character's fate, etc. (cf. role of the moon and the sun/son) Catalyst of the initiatory experience : violence (Only when that moment of ultimate violence is reached, i.e. just before death, are people their best selves UOC, 38)→assumes many forms: a stroke, a fit, a fall, an attack, a bout, a physical assault, etc. Violences triggers off the change: This notion that grace is healing omits the fact that before it heals, it cuts with the sword Christ said he came to bring (109) "The Word of God is a burning Word to burn you clean" (Th. 320) Participation of evil in initiation to the divine: « I suppose the devil teaches most of the lessons that lead to self-knowledge » (Th, 79) A case in point : "A good Man is Hard to Find" Necessary to ponder the paradox of blasphemy as the way to salvation (Th. 343) Paradox The interweaving of the sacred and the profane, the pure and the impure, sanctity and taint/ corruption : Il résulte que la souillure et la sainteté, même dûment identifiées […] représentent, en face du monde de l'usage commun, les deux pôles d'un domaine redoutable. C'est pourquoi un terme unique les désigne si souvent jusque dans les civilisations les plus avancées. Le mot grec  "souillure" signifie aussi "le sacrifice qui efface la souillure". Le terme agios "saint" signifiait en même temps "souillé" à date ancienne, au dire des lexicographes. La distinction est faite plus tard à l'aide de deux mots symétriques agès "pur" et euagès "maudit", dont la composition transparente marque l'ambiguïté du mot originel. Le latin, expiare "expier" s'interprète étymologiquement comme "faire sortir (de soi) l'élément sacré que la souillure contractée avait introduit". (R. Caillois, L'Homme et le sacré, pp. 39-40) Il y a là la révélation d'une intuition fondamentale, masquée par la religion établie, à savoir que sacré et interdit ne font qu'un et que « l'ensemble de la sphère sacrée se compose du pur et de l'impur » (G. Bataille). Cette double valence se retrouve également dans la sexualité. En effet si, en bonne théologie chrétienne, le spirituel s'oppose au charnel, il est des cas où la chair peut représenter une des voies d'accès au divin : Dieu le père, l'impénétrable, l'inconnaissable, nous le portons dans la chair, dans la femme. Elle est la porte par laquelle nous entrons et nous sortons. En elle, nous retournons au Père, mais comme ceux qui assistèrent aveugles et inconscients à la transfiguration (D. H. Lawrence) Outer vs. 
inner dimensions The outward trip also is an inner journey, a descent into oneself → processus de l'introrsum ascendere des mystiques médiévaux : « la montée spirituelle passe par une "enstase", un voyage intérieur qui s'ouvre sur un espace élargi, au terme de la rencontre de ce qui est en nous et de ce qui est hors de nous » (J. Thomas, 85) → « l'homme s'élève en lui-même, en partant de l'extérieur, qui est ténèbres, vers l'intérieur, qui est l'univers des lumières, et de l'intérieur vers le Créateur » (Rûmi, 99) III. Intertextuality We have to have stories in our background. It takes a story to make a story (Ibid., 202 ) By way of transition→ point out/up the kinship between the two notions of initiate and narrator since both have to do with knowledge: the "initiate" means "he who knows", so does the term "narrator" which comes from the Latin narus : he who knows. Narrator bears a certain relationship to the notions of secret/sacred/mystery/mysticism. A writer doesn't start from scratch → intertextuality Textuality or textness: the quality or use of language characteristic of written works as opposed to spoken usage. What is a text ? The term goes back to the root teks, meaning to weave/fabricate. Text means a fabric, a material: not just a gathering of signs on a page. R. Barthes was the originator of this textual and textile conjunction. We know that a text is not a succession of words releasing a single meaning (the message of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and sometimes clash: "Every text takes shape as a mosaic of citations, every text is the absorption and transformation of other texts" (J. Kristeva). The text is a tissue of quotations (cf. a tissue of lies)… (N. 122). A literary work can only be read in connection with or against other texts, which provide a grid through which it is read and structured. Hence, R. Barthes's contention that "The I which approaches the text is itself already a plurality of texts". Intertextuality : refers to the relations obtaining between a given book and other texts which it cites, rewrites, absorbs, prolongs, or generally transforms and in terms of which it is intelligible. The notion was formulated and developed by J. Kristeva who stated for instance that "books imitate, parody other books". A reminder of the essential observation that "learning to write may be a part of learning to read [...] that writing comes out of a superior devotion to reading": this, by the way, was a profession of faith by Eudora Welty in The Eye of the Story. As another Southern novelist, S. Foote, put it: "Reading good writers is one's university". What are the literary works FMD is brought into relation with ? In the case of O'C the fundamental proto-texts are : the Bible, The Pilgrim's Progress, Teilhard de Chardin, other religions (Buddhism/Vedanta), classical mythology, Dante's Divine Comedy. This topic is discussed in the Ellipses volume, so I'll refer you to it. The Pilgrim's Progress A famous novelthe most popular after the Bible in U.K. -, published by John Bunyan, a village tinker and preacher in 1678. Bunyan's book unfolds the universal theme of pilgrimage as a metaphor/image of human life and the human quest for personal salvation. Bunyan describes the road followed by Christian and the mishaps he has to endure to reach the Celestial City: e.g. 
he passes through the Valley of the Shadow of Death, the Enchanted Ground, the Delectable Mountains, and enters the "the Country of Beulah" ( "la Terre épouse") cf. novel : In this land also the contract between the bride and the bridegroom was renewed : Yea here, as the bridegroom rejoiceth over the bride, so did their god rejoice over them. Here they had no want of corn and wine ; for in this place they met with abundance of what they had sought for in all their pilgrimage. (pp. 195-196) Then Christian crosses the River, etc., and on his way he comes across various people assuming allegoric and symbolic functions e.g. Mr. Worldly-Wiseman, Faithful, Saveall, Legality who is: a very judicious man [...] that has skill to help men off with such burdens as thine are from their shoulders and besides, hath skill to cure those that are somewhat crazed in their wits with their burdens. (p. 50). Christian is brought to trial in vanity fair (a trial takes place in Vanity Fair: Faithful, Christian's fellow-traveller is martyred but Christian escapes death. Christian realizes that "there was a way to Hell, even from the gates of heaven, as well as from the City of Destruction" (205). The Bible "It is the nature of American scriptures to be vulgarizations of holy texts from which they take their cues..." (Fiedler, 97). The Bible plays so important a rôle that it may be considered as one of the dramatis personae ; the Book forms a constant counterpoint to OC's narratives. Its influence is to be found in Names: Motes → Matthew 7 : 3-5 : "And why seest thou the mote that is in thy brother's eye, but seest not the beam that is in thy own eye ? Or how sayest thou to thy brother : Let me cast the mote out of thy eye ; and behold a beam is in thy own eye ? thou hypocrite, cast out first the beam out of thy own eye ; and then shalt thou see to cast out the mote out of thy brother's eye". Hazel → Hazael : God has seen → Haze : a reference to a glazed, impaired way of seeing Enoch a character from the Old Testament; was taken up to heaven without dying Asa : king of Syria Thomas : meaning "twin" Parker (Christophoros) moves from the world of the inanimate to the animal kingdom to humans, to religious symbols and deities, and, ultimately to Christ. Obadiah : servant of the Lord→the pride of the eagle // Elihue : God is He→a symbolic transformation of Parker to God Ruth is an ironic inversion of her biblical counterpart Symbols/Images Jesus'removing the demonic spirit from the people to the herd of swine which then ran violently down a steep place into the sea (Mark 5 : 13) The River→ (Luke 8 : 32) Jesus drives the demons out of one man called legion into the herd of swine and then sends the entire herd over the bank to drown in a lake 51 Themes and notions "The True country" in "The Displaced Person"→St Raphael: The prayer asks St. Raphael to guide us to the province of joy so that we may not be ignorant of the concenrs of the true country 80 The Rheims-Douai Version of the Bible→ the Kingdom is not to be obtained but by main force, by using violence upon ourselves, by mortification and penance and resisting our evil incli-nations…(88) P. Teilhard de Chardin P. 
Teilhard de Chardin's concept of the "Omega Point" as that particular nexus where all vital indicators come to a convergence in God becomes in OC's fiction a moment where a character sees or comes to know the world in a way that possesses a touch of ultimate insight.Teilhard's The Phenomenon of Man was "a scientific expression of what the poet attempts to do : penetrate matter until spirit is revealed in it" (110). Teilhard's concept of the Omega Point, a scientific explanation of human evolution as an ascent towards consciousness that would culminate forwards in some sort of supreme consciousness […] To be human is to be continually evolving toward a point that is simultaneously autonomous, actual, irreversible, and transcendent: "Remain true to yourselves, but move ever upward toward greater consciousness and greater love ! At the summit you will find yourselves united with all those who, from every direction, have made the same ascent. For everything that rises must converge" (111) Other denominations Passing references in "The Enduring Chill" to Buddhism & Vedanta. In Buddhism, the Bodhisattva is an elightened being who has arrived at perfect knowledge and acts as a guide for others toward nirvana, the attainment of disinterested wisdom and compassion. Vedanta is an Hindu philosophy that affirms the identity of the individual human soul, atman, with Brahman, the holy power that is the source and sustainer of the universe. Note that in the short-story, it is a cow, a sacred animal to the Hindu, which is the source of Asbury's undulant fever. Classical mythology Peacock → "Io"/"Argus" cf. Who's who in the ancient world→ Short-story : "Greenleaf" IV. The play upon sameness and difference « L'écrivain est un expérimentateur public : il varie ce qu'il recommence ; obstiné et infidèle, il ne connaît qu'un art : celui du thème et des variations. » (R. Barthes, Essais critiques, 10)→ Variations of the same pattern hence the keys to the interpretation of any given narrative will serve for all the others: "All my stories are about the action of grace on a character who is not very willing to support it" (M&M, 25). REALISM + "AN ADDED DIMENSION" Even if the author emphatically stated that "[A] writer is initially set going by literature more than by life" (M&M, 45), life, in the sense of "the texture of existence that surrounds one" is far from being a negligible factor in the process of literary creation ; on the contrary, it plays a fundamental role as witness the following quotation which, in a way, offsets the former : There are two qualities that make fiction. One is the sense of mystery and the other is the sense of manners. You get the manners from the texture of existence that surrounds you. The great advantage of being a Southern writer is that we don't have to go anywhere to look for manners ; bad or good, we've got them in abundance. We in the South live in a society that is rich in contradiction, rich in irony, rich in contrast, and particularly rich in its speech. (MM, 103) It is fundamentally a question of degree : the writer […] is initially inspired less by life than by the work of his predecessors (MM, 208) OC's contention → "The natural world contains the supernatural" (MM, 175) → « This means for the novelist that if he is going to show the supernatural taking place, he has nowhere to do it except on the literal level of natural events » (176). 
In other words : What-is [ce qui est, le réel] is all he has to do with ; the concrete is his medium ; and he will realize eventually that fiction can transcend its limitations only by staying within them (146) The artist penetrates the concrete world in order to find at its depths the image of its source, the image of ultimate reality (157) Reality The basis of fiction, the point of departure of the novelist : The first and most obvious characteristic of fiction is that it deals with reality through what can be seen, heard, smelt, tasted, and touched 91 Writing fiction is [not] an escape from reality. It is a plunge into reality and it's very shocking to the system. If the novelist is not sustained by a hope of money, then he must be sustained by a hope of salvation 78 Realism "All novelists are fundamentally seekers and describers of the real, but the realism of each novelist will depend on his view of the ultimate reaches of reality" (40) "The Southern writer is forced from all sides to make his gaze extend beyond the surface, beyond mere problems, until it touches that realm which is the concern of prophets and poets. When Hawthorne said that he wrote romances, he was was attempting, in effect, to keep for fiction some of its freedom from social determinisms, and to steer it in the direction of poetry". ( 46) "The prophet is a realist of distances, and it is this kind of realism that goes into great novels. It is the realism which does not hesitate to distort appearances in order to show a hidden truth" (179) What is Realism ? (Realism/regionalism, and naturalism) Cf. S. Crane→ "that misunderstood and abused word, realism." What is realism ? It is a special literary manner (actually a set of conventions and devices) aiming at giving: The illusion that it reflects life as it seems to the common reader. [...] The realist, in other words, is deliberately selective in his material and prefers the average, the commonplace, and the everyday over the rarer aspects of the contemporary scene. One cannot read far into OC's fiction without discovering numerous realistic touches e.g. in the depiction of life in the South: way of life, manner of speech (with its numerous elisions and colloquial or even corrupt expressions), turns of phrase, customs, habits, etc. So OC's work evinces the extended and massive specification of detail with which the realist seeks to impose upon the reader an illusion of life. Such an "effet de réalité" is heightened by the consistent use of details or features pertaining to a specific section of the US, which resulted in OC being labelled a regional writer i.e. one whose work is anchored, rooted in the fabric, the actualities or the concrete particulars of life in a specific area or section of the USA i.e. the OC clearly objects to those writers who "feel that the first thing they must do in order to write well is to shake off the clutch of the region […] the writer must wrestle with it [the image of the South], like Jacob with the angel, until he has extracted a blessing" (M&M, 197-198) Naturalism What about Naturalism? It is "a mode of fiction that was developed by a school of writers in accordance with a special philosophical thesis. 
This thesis, a product of post-Darwinian biology in the mid-nineteenth century, held that man belongs entirely in the order of nature and does not have a soul or any other connection with a religious or spiritual world beyond nature ; that man is therefore merely a higher-order animal whose character and fortunes are determined by two kinds of natural forces, heredity and environment." (Abrams). OC rejected naturalism since, from her point of view, naturalism ran counter to one of her most essential tenets: I don't think any genuine novelist is interested in writing about a world of people who are strictly determined. Even if he writes about characters who are mostly unfree, it is the sudden free action, the open possibility, which he knows is the only thing capable of illuminating the picture and giving it life (The Added Dimension, 229) True, OC occasionally makes use of animal imagery but she ignores the last two factors mentioned in the above definition. The only naturalistic element studied by OC is aggressive behaviour and a certain form of primitiveness but her utilization of such material is quite different from Zola's; OC's naturalism is more descriptive than illustrative and conjures up a moral landscape where the preternatural prevails. A consequence of OC's choice of realism → Importance of the senses and sensory experience : "Fiction begins where human knowledge begins-with the senses" (MM, 42) The novelist begins his work where human knowledge begins-with the senses ; he works through the limitations of matter, and unless he is writing fantasy, he has to stay within the concrete possibilities of his culture. He is bound by his particular past and those institutions and traditions that this past has left to his society. The Judaeo-Christian tradition has formed us in the west ; we are bound to it by ties which may often be invisible, but which are there nevertheless. (155) BUT what distinguishes OC from other realists is that for her "The natural world contains the supernatural" (M&M 175). The aim of OC's particular realism is to lead the reader to the perception of a second, superior plane or level of reality: "the supernatural, what I called the added dimension." 2nd characteristic: OC's originality lies in the fact that she held realism to be a matter of seeing, a question of vision. The novelist, she wrote, "must be true to himself and to things as he sees them." → Vision or rather anagogical vision is what throws a bridge over the gap between the natural and the supernatural; it is the link, the connection between the two universes. Anagogical Vision Starting-point→ OC's statement : The kind of vision the fiction writer needs to have, or to develop, in order to increase the meaning of his story is called anagogical vision, and that is the kind of vision that is able to see different levels of reality in one image or one situation. (M&M, 72) Three preliminary observations or reminders Visible←→Invisible/Mystery « Le monde sensible tout entier est, pour ainsi dire, un livre écrit par le doigt de Dieu… Toutes les choses visibles, présentées à nous visiblement pour une instruction symbolique -c'est-à-dire figurée -, sont proposées en tant que déclaration et signification des invisibles 8 . » → God as first and ultimate author.
We find an echo of this worldview and faith in OC'S statement : "What [the writer] sees on the surface will be of interest to him only as he can go through it into an experience of mystery itself" (M&M, 41) Judgment ←→Vision For the novelist, judgment is implicit in the act of seeing. His vision cannot be detached from his moral sense (M&M, 130) In the greatest fiction, the writer's moral sense coincides with his dramatic sense, and I see no way for it to do this unless his moral judgment is part of the very act of seeing (Ibid., 31) The question of anagogical vision is connected with biblical interpretation The epithet anagogical refers to one of the four tradional modes of interpretation of the Holy Bible (Exegesis: critical explanation or interpretation vs. Hermeneutics: interpretation) i.e.: 1. literal; 2. typological; 3. tropological; 4. anagogical. literal: applied to taking the words of a text in their natural and customary meaning ; Prophecy ←→Vision In the novelist's case, prophecy is a matter of seeing near things with their extensions of meaning and thus of seeing far things close up. The prophet is a realist of distances, and it is this kind of realism that you find in the best modern instances of the grotesque (44) "The prophet is a realist of distances, and it is this kind of realism that goes into great novels. It is the realism which does not hesitate to distort appearances in order to show a hidden truth" (179) Vision/Anagogical vision (a few statements by way of illustration) "The novelist must be characterized not by his function but by his vision" (47) "For the writer of fiction, everything has its testing point in the eye, and the eye is an organ that eventually involves the whole personality and as much of the world that can be got into it" (91) "Anything that helps you to see, anything that makes you look. The writer should never be ashamed of staring. There is nothing that doesn't require his attention. […] The writer's business is to contemplate experience, not to be merged in it". (84) Conrad said that his aim as a fiction writer was to render the highest possible justice to the visible universe […] because it suggested and invisible one. « My task which I am trying to achieve is, by the power of the written word, to make you hear, to make you feel-it is, before all, to make you see. That-and no more, and it is everything ». (80) "He's looking for one image that will connect or combine or embody two points ; one is a point in the concrete, and the other is a point not visible to the naked eye, but believed in by him firmly, just as real to him, really, as the one that everybody sees. It's not necessary to point out that the look of this fiction is going to be wild, that it is almost of necessity going to be violent and comic, because of the discrepancies that it seeks to combine" (42) "Now learning to see is the basis for leaning all the arts except music... Fiction writing is very seldom a matter of saying things; it is a matter of showing things (93) [telling vs. showing] "The longer you look at one object, the more of the world you see in it ; and it's well to remember that the serious fiction writer always writes about the whole world, no matter how limited his particular scene" (77) Even when OC seems to be concerned with the relative, the world around us, daily life, and the little disturbances of man, it is always: "A view taken in the light of the absolute" (134). 
The anagogical level is that level in which the reader becomes aware that the surface antics and the bizarre twists of the lunatic fringe are much more deeply intertwined with a mystery that is of eternal consequence (UOC, 173) Indexes Simple objects endowed with symbolical meaning, hence beware of hats, spectacles, wooden objects, stairwell, things or people going in threes, etc. : the woods = a Christ figure, they appear to walk on water; glasses removed→outward physical sight is replaced by inward spiritual clarity; The 3 arsonists have their biblical counterparts in Daniel; Hulga's wooden leg → woodenness: impervious to the action of grace; A car may be a means of salvation, a vehicle for the Spirit: cf. Tom Shiftlet, the preacher in "The Life You Save May Be Your Own": "Lady, a man is divided into two parts, body and spirit... The body, lady, is like a house: it don't go anywhere; but the spirit, lady, is like an automobile, always on the move, always..." Car = pulpit, coffin, means of escape… So be on the look out for all details, however insignificant or trivial they may seem, because they often trigger off a symbolical reading of the text in which they apppear→"Detail has to be controlled by some overall purpose" (M&M, 93): in OC's world the most concrete or material or trivial thing, detail may point to or give access to the most abstract and immaterial dimension, spirituality, the divine. DISTORTION(S): GOTHIC, GROTESQUE, PROSTHETIC GROTESQUE, FREAKS (= AMERICAN GARGOYLES) Point of departure → "The prophet is a realist of distances, and it is this kind of realism that goes into great novels. It is the realism which does not hesitate to distort appearances in order to show a hidden truth" (MM, 179). According to OC, distortion is a key device in literary creation as witness the following quotations from Mystery and Manners : "The problem for such a novelist will be to know how far he can distort without destroying " "His way will much more obviously be the way of distortion" (42) "The truth is not distorted here, but rather, a certain distortion is used to get at the truth" (97) → Why is it so ? 1° Distortion as a strategy : found in Roman ruins and representing motifs in which the human, the animal and the vegetable kingdoms inter-mingled/twined. Later on, the term was carried over into literature, but the term has taken on specific connotations in American fiction; it became popular thanks to S. Anderson's work, Winesburg, Ohio, in which "freakiness" also means an attitude to truth, a crippling appropriation of truth as S. Anderson pointed out : It was the truths that made the people grotesques. The old man had quite an elaborate theory concerning the matter. It was his notion that the moment one of the people took one of the truths to himself, called it his truth, and tried to live his life by it, he became a grotesque and the truth he embraced became a falsehood (S. Anderson, Winesburg, Ohio) The grotesques are characterized by various types of psychic unfulfilment or limitation owing in part to the failure of their environment to provide them with opportunities for a rich variety of experience and in part to their own inability or reluctance to accept or understand the facts of isolation and loneliness. The grotesques have become isolated from others and thus closed off from the full range of human experience; they are also the socially defeated, human fragments... (Cf. W. 
Styron in Lie Down in Darkness: "Didn't that show you that the wages of sin is not death, but isolation?"). Cf. the connection between the gothic and the grotesque in the following excerpt: If, as has been suggested, the tendency of works in the [Gothic] tradition has been not to portray with mimetic fidelity the manners and social surface of everyday life but, rather, to uncover at the heart of reality a sense of mystery, then the grotesque figure becomes the Ulysses of this terra incognita. He is a figure who is in some way distorted from the shape of normality, whether by a physical deformity (Ahab) or by a consuming intellectual (Usher), metaphysical (Pierre), moral (Ethan Brand, the veiled minister), or emotional (Bartleby) passion; and his discovery often takes a violent shape, destructive of himself or of others" (M. Orvell, Invisible Parade: The Fiction of Flannery O'Connor) The grotesque paves the way for the realization of "that disquieting strangeness apt to arise at every turn out of the most intimately familiar, and through which our everyday sense of reality is made to yield to the troubling awareness of the world's otherness" (A. Bleikasten, "Writing on the flesh") "The grotesque is a literature of extreme situation, and indeed mayhem, chaos, and violence seem to predominate in the genre" (G. Muller) Why are there so many freaks, grotesques or handicapped people in OC's fiction ? For one thing, they seem to be a feature of southern country life cf. H. Crews→ The South as the country of nine-fingered people: Nearly everybody I knew had something missing, a finger cut off, a toe split, an ear half-chewed away, an eye clouded with blindness from a glancing fence staple. And if they didn't have something missing, they were carrying scars from barbed wire, or knives, or fishhooks. But the people in the catalogue [the Sears, Roebuck mail-order catalogue] had no such hurts. They were not only whole, had all their arms and legs and toes and eyes on their unscarred bodies, but they were also beautiful. Their legs were straight and their heads were not bald and on their faces were looks of happiness, even joy, looks that I never saw much of in the faces of the people around me. Young as I was, though, I had known for a long time that it was all a lie. I knew that under those fancy clothes there had to be scars, there had to be swellings and boils of one kind or another because there was no other way to live in the world (H. Crews, A Childhood, 54) But this sociological factor is far from being the only reason, as two key statements by OC will show : To be able to recognize a freak, you have to have some conception of the whole man, and in the South the general conception of man is still, in the main, theological. (44) "The freak in modern fiction is usually disturbing to us because he keeps us from forgetting that we share in his state" (MM, 133) Freaks or partial people (as W. Schaffer, a critic, put it → "partial people seeking spiritual completion point up the sorry state of the human condition. A kind of touchstone of our human condition") : We can say we are normal because a psychological, sexual, or even spiritual abnormality can, with a little luck, be safely hidden from the rest of the world (Crews, 105) Freaks were born with their traumas. They've already passed their test in life. They're aristocrats (H.
Crews, 87) The freak, through acceptance, can be viewed not as the deviation, the perversion of humanity, but the ideal (107) We all eventually come to our trauma in life, nobody escapes this. A freak is born with his trauma (113) « Son humanité ne fait pas de doute et pourtant il déroge à l'idée habituelle de l'humain » (a French critic, Th. 114) Grotesque The same holds true of the grotesque i.e. a reminder of human imperfection. OC uses grotesque characters to usher in the mysterious and the unexpected : Their [grotesque characters'] fictional qualities lean away from typical social patterns, toward mystery and the unexpected (40) The Communion of Saints : a communion created upon human imperfection, created from what we make of our grotesque state (228) Prosthetic grotesque (le grotesque prothétique) A variant : prosthesis : an artificial body part such as a leg, an arm, a heart or breast. "The horror of prosthesis (which is more than an object, unassimilable either to other objects or to the body itself )" (Crews, 171). A case in point: Hulga's wooden leg. According to Russian critic, Bakhtine : « Le grotesque s'intéresse à tout ce qui sort, fait saillie, dépasse du corps, tout ce qui cherche à lui échapper » (Crews, 122) A possible conclusion ? « Explorer le grotesque c'est explorer le corps » (Crews 118) → in the same way as meaning seems to be body-centered, grounded in the tactile and the tangible, so is salvation a process involving the body → Transcendence is in physicality (A. Di Renzo). OC's fiction is « Un univers Université de Perpignan-Via Domitia FLANNERY O'CONNOR'S COMPLETE STORIES Three quotations from F. O'Connor to give the tone of this reading of The Complete Stories: Fitzgerald. (Wright Morris in The Territory Ahead (in Reality and Myth, 316) So, one has to take into account what French philosopher, G. Durand calls « les puissances de transfiguration de l'écriture 2 ». Literature may well express reality, but it also creates a form of reality that does not exist beside, outside, or before the text itself, but in and through the text. Literature does not re-create the world; it brings a new world into being. It is « la nécessité du récit qui secrète son paysage [...] l'oeuvre littéraire crée son espace, sa région, son paysage nourricier » (G. Durand, 393). The most extreme inference one can deduce from such a premiss is that Uncle Tom's Cabin, Sartoris and Gone With the Wind, etc., conjured up a referential, fictitious, legendary, and mythical South, a territory of the imagination. F. O'Connor set 3. M. Gresset, préface à P. Carmignani, Les Portes du Delta : Introduction à la fiction sudiste et à l'oeuvre romanesque de Shelby Foote, Perpignan, PUP, 1994, 6-7. Johansen This being said, let's now deal with the question of the setting of OC's fiction viz. the South. If you attended last year's course on Jordan County, you must know by now how difficult it is to answer the simple question "What is the South ?", how difficult it is to merely say how many States comprise the section since their number ranges from 11 to 17 (the 11 States of the Confederacy; the 15 Slave states i.e. where slavery was legal; the 17 States below the Mason-Dixon line*) *Mason-Dixon line = a line of demarcation between Pennsylvania and Maryland deriving its name from those of the two topographers who determined that border, hence the nickname Dixie-(land) for the South.
who recorded Negro folklore in Uncle Remus: His Songs and Sayings (1880); George Washington Cable (1844-1925), a native of New Orleans, was the depictor of the Creole civilization; Thomas Nelson Page (1832-1922); Kate Chopin (1851-1904), a woman writer of French descent who dealt with Louisiana in The Awakening (1899). The 2 2 nd crucial period in the formation of Southern literature, i.e. the two decades between 1925 and 1945 which were called "The Southern Renascence". It was the most extraordinary literary development of 20 th century America, a regional development comparable to the flowering of New England literature one hundred years earlier. The inception of that impressive cultural phenomenon in the South coincided with W. Faulkner's arrival on the literary scene. The South that had the reputation of being an intellectual and literary desert produced in a short span of time an exceptional crop of good writers. Stress the role of female voices in that literary chorus: -Katherine Anne Porter : a very successful short-story writer : Flowering Judas (1930), Pale Horse, Pale Rider (1939). Her only published novel was A Ship of Fools (1962) -Eudora Welty whose fame rests mainly on her collections of short stories also wrote of the Mississippi like Faulkner, but her picture of Southern life is removed from high tragedy ; it is the day-to-day existence of people in small towns and rural areas. Delta Wedding (1946), The Ponder Heart (1954), Losing Battles (l970). -Carson McCullers (1917-1967), another remarkable woman of letters : Lonely Hunter (1940), The Member of the Wedding (1946), The Ballad of the Sad Café (1951). -Flannery O'Connor (1925-1964) who found in her native South, in American fiction of the 19 th century, and her conservative Catholic Christianity the three sources of all her work : Wise Blood (1952), A Good Man Is Hard to Find (1955), The Violent Bear it Away (1960). -Shirley Ann Grau : The Hard Blue Sky (1958), The Keepers of the House (1964), The Condor Passes (1971). -Margaret Mitchell's bestseller, Gone With the Wind (1936), etc. -Elizabeth Spencer TheLight in The Piazza. The Southern tradition is carried on nowadays by an impressive array of writers: the 4 Williams (William Faulkner; William Styron; William Goyen; William Humphrey); Shelby Foote; Walker Percy; Fred Chappell; Truman Capote and an impressive number of new voices (C. Mac-Carthy, Robert Olen Butler, Madison Smart Bell, etc.) de l'oeuvre, 60), profondeur que traduit précisément le symbolisme.Chez les Grecs, le symbole était un morceau de bois ou un osselet qui avait été coupé en deux et dont deux familles amies conservaient chacune une moitié en la transmettant à leurs descendants. Lorsque, plus tard, ceux-ci rapprochaient les deux fractions complémentaires et parvenaient à reconstituer l'unité brisée dont elles étaient issues, ils redécouvraient ainsi une unité perdue mais retrouvée. (J. Brun, L'Homme et le langage, 81). Ainsi :En grec (sumbolon) comme en hébreu (mashal) ou en allemand (Sinnbild), le terme qui signifie symbole implique toujours le rassemblement de deux moitiés : signe et signifié. Le symbole est une représentation qui fait apparaître un sens secret, il est l'épiphanie d'un mystère. (L'Imagination symbolique, 13). figuré, c'est-à-dire n'est qu'un symbole restreint. La langue ne ferait donc que préciser le langage symbolique et ce, jusqu'au sens propre. 
En d'autres termes, la poésie serait première et non la prose utilitaire, le langage-outil, profondément lié à l'apparition parallèle de l'homme-outil...). Mais l'autre moitié du symbole, « cette part d'invisible et d'indicible qui en fait un monde de représentations indirectes, de signes allégoriques à jamais inadéquats, constitue une espèce logique bien à part » . Les deux termes du symbole sont infiniment ouverts. Précisons enfin que le domaine de prédilection du symbolisme, c'est le non-sensible sous toutes ses formes : inconscient, métaphysique, surnaturel et surréel, bref ces choses absentes ou impossibles à percevoir. Dernier point, capital, la fonction symbolique est dans l'homme le lieu de passage, de réunion des contraires : le symbole dans son essence et presque dans son étymologie est "unificateur de paires d'opposés" (Imag. symb. 68). De nombreux spécialistes ont essayé de mettre à jour ce qu'on pourrait appeler le soubassement de la faculté symbolique (du symbolisme imaginaire*) qui habite l'homme et proposé divers systèmes de classification des symboles à partir de critères ou de principes tenus pour déterminants ; G. Bachelard, par exemple, adoptera comme axiomes classificateurs, les quatre éléments -Air, Eau, Feu, Terre -les "hormones de l'imagination" ou catégories motivantes des symboles. G. Dumézil s'appuiera sur des données d'ordre social, à savoir que les systèmes de représentations mythiques dépendent dans les sociétés indo-européennes d'une tripartion fonc-tionnelle : la subdivision en trois castes ou ordres : sacerdotal, guerrier et producteur qui déterminerait tout le système de représentations et motiverait le symbolisme laïc aussi bien que religieux. A. Piganiol a, lui, adopté une bipartition (constellations rituelles pastorales et agricoles) recoupant l'opposition archétypale entre le pâtre Abel et le laboureur Caïn : certaines peuplades pastorales élèvent des autels, rendent un culte au feu mâle, au soleil, à l'oiseau ou au ciel et tendent au Expérience d'ordre existentiel qui comporte généralement une triple révélation : celle du sacré, celle de la mort, et celle de la sexualité » (Thèse sur O'Connor, 14). R. Guénon distingue l'initiation virtuelle qui signifie une entrée ou un commencement dans la voie, au sens du latin initium, de l'initiation effective, qui correspond à suivre la voie, cheminer véritablement dans la voie, ce qui est le fait d'un petit nombre d'adeptes alors que beaucoup restent sur le seuil (Aperçus sur l'initiation, 198) « L'initiation équivaut à la maturation spirituelle de l'individu : l'initié, celui qui a connu les mystères, est celui qui sait » (M. Eliade, 15) Sacré→secernere : mettre à part→parenté des termes muthos et mustêrion qui présentent la même racine mu (bouche fermée), muô : se taire et mueô : initier. - typology: the study of symbolic representation esp. of the origin and meaning of Scripture types (a type: that by which sth is symbolized or figured [symbol, emblem] in Theology a person, object or event of Old Testament history prefiguring some person or thing revealed in the new dispensation ; 8. Hugues de Saint-Victor (Eco, Sém et phil., 162) -tropology: (a speaking by tropes); a moral discourse; a secondary sense or interpretation of Scripture relating to moral. Tropological: an interpretation of Scripture applied to conduct or mo-rals→ sens moral ou psychique anagogical (ana : up in place or time) ← anagoge: spiritual elevation esp. to understand mysteries → Anagogy: a spiritual or mystical interpretation. 
Anagogical: of words: mystical, spiritual, allegorical. In French sometimes called « sens mystique ou pneumatique ». Blood, a novel 1952 A Good Man Is Hard to Find, a collection of short-stories, 1955 The Violent Bear It Away, a novel, 1960 Everything That Rises Must Converge, a collection of short-stories published posthumously in 1965 Mystery and Manners, occasional prose, 1969 The Complete Stories, 1971 The Habit of Being, collected letters, 1979 The Presence of Grace and Other Book Reviews by Flannery O'Connor, 1983 F. O'Connor: Collected Works, 1988. « When you have to assume that your audience does not hold the same beliefs you do, then you have to make your vision apparent by shock-to the hard of hearing you shout, and for the almost-blind you draw large and startling figures » (M&M, 34). OC quotes a very convincing example on p. 162 : "When I write a novel in which the central action is a baptism, I am very well aware that for a majority of my readers, baptism is a meaningless rite, and so in my novel I have to see that this baptism carries enough awe and mystery to jar the reader into some kind of emotional recognition of its significance. To this end I have to bend the whole novel […] Distortion in this case is the instrument" (162) 2° Physical and moral distortion Distortion isn't just a narrative or stylistic device serving a pedagogical purpose, and another remote avatar or embodiment of it is to be found in the striking number of physically abnormal characters i.e freaks, cripples, handicapped persons peopling OC's fiction, which has been described as "an insane world peopled by monsters and submen" (UOC, 15). She accounted for it by statingamong other things-, that : "My own feeling is that writers who see by the light of their Christian faith will have, in these times, the sharpest eye for the grotesque, for the perverse, and for the unacceptable" (33) Rôle of mutilation and physical imperfection In OC's world, a man physically bereft always indicates a corresponding spiritual deficiency. Mutilations and physical imperfections may serve as a clue to or index of a character's function as initiator or initiate; may be a sign of election marking the character as one of the knowing few. Cf. G. Durand: « Le sacrifice oblatif de l'oeil est surdétermination de la vision en voyance ». So, we have our work cut out→dealing with all types of distortions made use of in OC's fiction, which will lead us to an examination of the cognate notions listed above in the title. Before going into details, some general information : Freak (O. E. ?) frician, to dance) sudden change of fortune, capricious notion ; product of sportive fancy→monstrous individual→Southern themes of alienation, degeneracy, mutilation, dehumanization→staples of Southern fiction which resulted in the coining of the label of "The School of Southern Degeneracy". Gothic → the Gothic novel : that type of fiction made plentiful use of all the trappings of the medieval period: gloomy castles, ghosts, mysterious disappearances, and other sensational and supernatural occurrences. Later on, the term denoted a type of fiction which does not resort to the medieval setting but develops a brooding atmosphere of gloom or terror and often deals with aberrant psychological states. OC's work is a convincing illustration of Poe's dictum that "The Gothic is not of Germany but of the soul" Grotesque: Etymologically, grotesque comes from the Greek "kraptos" meaning "hidden, secret". 
In the late 15th century, grotesque referred to those ornamental and decorative elements où le spirituel ne peut être atteint qu'à travers le corps, la matière, le sensoriel » (Thèse sur Crews 31) Bear in mind that God, the Spirit, underwent the mystery of the incarnation (the embodiment of God the Son in human flesh as Jesus Christ); the body houses the soul, the Spirit, and as such partakes in the Resurrection of the Flesh. As to monsters: « Le monstre n'obéit pas à la loi du genre ; il est au sens propre, dégénéré » (D. Hollier). Let's say that the monster is both degenerate and a reminder of what constitutes the "genus" of man, mankind, i.e. our fallen state, our incompleteness and corresponding yearning for wholeness.
86,354
[ "17905" ]
[ "178707", "420086" ]
01762224
en
[ "spi" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01762224/file/LEM3_FATIGUE_2018_MERAGHNI.pdf
Yves Chemisky email: [email protected] Darren J Hartl Fodil Meraghni Three-dimensional constitutive model for structural and functional fatigue of shape memory alloy actuators A three-dimensional constitutive model is developed that describes the behavior of shape memory alloy actuators undergoing a large number of cycles leading to the development of internal damage and eventual catastrophic failure. Physical mechanisms such as transformation strain generation and recovery, transformation-induced plasticity, and fatigue damage associated with martensitic phase transformation occurring during cyclic loading are all considered within a thermodynamically consistent framework. Fatigue damage in particular is described utilizing a continuum theory of damage. The total damage growth rate has been formulated as a function of the current stress state and the rate of martensitic transformation such that the magnitude of recoverable transformation strain and the complete or partial nature of the transformation cycles impact the total cyclic life as per experimental observations. Simulation results from the model developed are compared to uniaxial actuation fatigue tests at different applied stress levels. It is shown that both lifetime and the evolution of irrecoverable strain are accurately predicted by the developed model. Introduction Shape memory alloys (SMAs) are metals that have the ability to generate and recover substantial deformation during a thermomechanical cycle. The physical mechanism that drives the shape recovery in the materials is a martensitic phase transformation that results from thermal and/or mechanical inputs, often without the consequence of significant plastic strain generation during formation and recovery of martensitic variants. This unique ability has led to the development of devices for aerospace and medical applications [START_REF] Hartl | Aerospace applications of shape memory alloys[END_REF][START_REF] Jani | A review of shape memory alloy research, applications and opportunities[END_REF][START_REF] Lester | Review and perspectives: shape memory alloy composite systems[END_REF]. The design of such devices has required the development of constitutive models to predict their thermomechanical behavior. A comprehensive review of SMA constitutive models can be found in works by [START_REF] Birman | Review of Mechanics of Shape Memory Alloy Structures[END_REF], [START_REF] Patoor | Shape Memory Alloys, {P}art {I}: {G}eneral Properties and Modeling of Single Crystals[END_REF], [START_REF] Lagoudas | Shape Memory Alloys -Part {II}: Modeling of polycrystals[END_REF], [START_REF] Paiva | An Overview of Constitutive Models for Shape Memory Alloys[END_REF], and [START_REF] Cisse | A review of constitutive models and modeling techniques for shape memory alloys[END_REF]. Early models describe the behavior of conventional SMAs without considering irrecoverable strains and damage, which is sufficient for the design of devices where operating temperatures, maximum stress levels, and number of actuation cycles are all relatively low. To expand the capabilities of such models, the evolution of transformation induced plasticity was first considered for conventional SMAs by Bo and Lagoudas (1999b) and then [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF]; these models allow calculations of accumulated irrecoverable strains caused by cycling. 
The coupling between phase transformation and plasticity at higher stresses has been considered in the literature for the simulation of shape memory alloy bodies under high loads at low temperatures compared to their melting points [START_REF] Hartl | Constitutive modeling and structural analysis considering simultaneous phase transformation and plastic yield in shape memory alloys[END_REF][START_REF] Zaki | An extension of the ZM model for shape memory alloys accounting for plastic deformation[END_REF][START_REF] Khalil | A constitutive model for Fe-based shape memory alloy considering martensitic transformation and plastic sliding coupling: Application to a finite element structural analysis[END_REF]. A model accounting for the effect of retained martensite (martensite pinned by dislocations) has been developed by [START_REF] Saint-Sulpice | A 3D super-elastic model for shape memory alloys taking into account progressive strain under cyclic loadings[END_REF]. To predict the influence of irrecoverable strains in high-temperature SMAs (HTSMAs) where viscoplastic creep is observed, a one-dimensional model accounting for the coupling between phase transformation and viscoplasticity has been developed by Lagoudas et al. (2009a); a three-dimensional extension of this model was developed and implemented via finite element analyses (FEA) by Hartl et al. (2010b), and the cyclic evolution of irrecoverable strains accounting for combined viscoplastic, retained martensite, and TRIP effects was later implemented by [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. The evolution of the pseudoelastic response for low-cycle fatigue of SMAs has been investigated recently [START_REF] Zhang | Experimental and theoretical investigation of the frequency effect on low cycle fatigue of shape memory alloys[END_REF], where a strain-energy-based fatigue model was proposed and compared with experiments. These past efforts focused on the prediction of thermomechanical responses for only a small number of cycles (e.g., up to response stabilization). However, active material actuators are often subjected to a large number of repeated cycles [START_REF] Van Humbeeck | Non-medical applications of shape memory alloys[END_REF][START_REF] Jani | A review of shape memory alloy research, applications and opportunities[END_REF], which leads to thermally induced fatigue in the case of SMAs [START_REF] Lagoudas | Thermomechanical Transformation Fatigue of {SMA} Actuators[END_REF][START_REF] Bertacchini | Thermomechanical transformation fatigue of TiNiCu SMA actuators under a corrosive environment -Part I: Experimental results[END_REF]. During the lifetime of an SMA actuator, the influence of two different classes of fatigue must be considered: (i) structural fatigue is the phenomenon that leads towards catastrophic failure of components, while (ii) functional fatigue describes permanent geometric changes to the detriment of SMA component performance and is associated with the development of irrecoverable strain [START_REF] Eggeler | Structural and functional fatigue of NiTi shape memory alloys[END_REF]. The prediction of functional fatigue evolution allows for calculation of the changes expected in a given actuator over its lifetime, while the prediction of structural fatigue evolution allows for determination of the actuator lifetime itself.
While the prediction of functional fatigue relies on the simulation of irrecoverable strains upon cycling (i.e., so-called transformation-induced plasticity, TRIP) (Bo and Lagoudas, 1999b;[START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF]), catastrophic structural fatigue is associated with the development of micro-cracks during transformation. Most SMAs are herein taken to be sufficiently similar to hardening metal materials to apply the theoretical modeling of structural fatigue via thermodynamic approaches developed in recent years [START_REF] Khandelwal | Models for Shape Memory Alloy Behavior: An overview of modeling approaches[END_REF]. Continuum damage mechanics (CDM) has been extensively utilized to predict the fatigue lifetime of metallic materials and structures since its development and integration within the framework of thermodynamics of irreversible processes [START_REF] Bhattacharya | Continuum damage mechanics analysis of fatigue crack initiation[END_REF][START_REF] Lemaitre | Engineering damage mechanics: Ductile, creep, fatigue and brittle failures[END_REF][START_REF] Dattoma | Fatigue life prediction under variable loading based on a new non-linear continuum damage mechanics model[END_REF]. The notion of damage itself concerns the progressive degradation of the mechanical properties of materials before the initiation of cracks observable at the macro-scale [START_REF] Simo | Strain-and stress-based continuum damage models-I. Formulation[END_REF]. Contrary to approaches based on fracture mechanics, which explicitly consider the initiation and growth of micro-cracks, voids, and cavities as a discontinuous and discrete phenomenon [START_REF] Baxevanis | Fracture mechanics of shape memory alloys: review and perspectives[END_REF], CDM describes damage using a continuous variable associated with the local density of micro-defects. Based on this damage variable, constitutive equations have been developed to predict the deterioration of material properties [START_REF] Voyiadjis | Advances in damage mechanics : metals and metal matrix composites[END_REF]. CDM enables fatigue life prediction in innovative superalloys [START_REF] Shi | Creep and fatigue lifetime analysis of directionally solidified superalloy and its brazed joints based on continuum damage mechanics at elevated temperature[END_REF] and standard aluminium alloys [START_REF] Hojjati-Talemi | Fretting fatigue crack initiation lifetime predictor tool: Using damage mechanics approach[END_REF] alike. Relevant models can also be implemented within an FEA framework to predict the response of structures with complex shapes [START_REF] Zhang | Finite element implementation of multiaxial continuum damage mechanics for plain and fretting fatigue[END_REF]. Two opposing views exist in the theoretical modeling of continuous damage. If the micro-defects and their associated effects are considered isotropic, a simple scalar variable (i.e., the damage accumulation) is sufficient to describe the impact of damage on material properties.
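As a concrete illustration of this scalar (isotropic) description of damage, the short Python sketch below shows how a single damage variable d degrades the elastic stiffness and defines an effective stress carried by the remaining undamaged material; the numerical values are illustrative only and are not taken from any cited reference.

```python
# Minimal illustration of isotropic continuum damage: a single scalar d
# degrades the elastic stiffness and defines an effective (net-section) stress.
# Values below are illustrative, not calibrated material data.

def degraded_modulus(E0, d):
    """Elastic modulus of the damaged material, E = (1 - d) * E0."""
    return (1.0 - d) * E0

def effective_stress(sigma, d):
    """Effective stress acting on the undamaged ligaments, sigma / (1 - d)."""
    return sigma / (1.0 - d)

E_austenite = 70e9      # Pa, illustrative Young's modulus
applied_stress = 200e6  # Pa, illustrative applied stress

for d in (0.0, 0.1, 0.3):
    print(f"d = {d:.1f}: E = {degraded_modulus(E_austenite, d) / 1e9:.1f} GPa, "
          f"effective stress = {effective_stress(applied_stress, d) / 1e6:.0f} MPa")
```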
However, to comply with experimental findings confirming anisotropic evolution of damage in ductile materials [START_REF] Lemaitre | Anisotropic damage law of evolution[END_REF][START_REF] Bonora | Ductile damage evolution under triaxial state of stress: Theory and experiments[END_REF][START_REF] Luo | Experiments and modeling of anisotropic aluminum extrusions under multi-axial loading Part II: Ductile fracture[END_REF][START_REF] Roth | Effect of strain rate on ductile fracture initiation in advanced high strength steel sheets: Experiments and modeling[END_REF], researchers have also developed anisotropic damage continuum models as proposed by [START_REF] Voyiadjis | A coupled anisotropic damage model for the inelastic response of composite materials[END_REF]; [START_REF] Brünig | An anisotropic ductile damage model based on irreversible thermodynamics[END_REF]; [START_REF] Desrumaux | Generalised Mori-Tanaka Scheme to Model Anisotropic Damage Using Numerical Eshelby Tensor[END_REF]. In this latter case, the distribution of micro-defects adopts preferred orientations throughout the medium. To model this behavior, a tensorial damage variable is typically introduced, (i.e., the damage effect tensor) [START_REF] Lemaitre | Anisotropic damage law of evolution[END_REF][START_REF] Lemaitre | Engineering damage mechanics: Ductile, creep, fatigue and brittle failures[END_REF]. A set of internal variables that are characteristic of various damage mechanisms can also be considered [START_REF] Ladeveze | Damage modelling of the elementary ply for laminated composites[END_REF][START_REF] Mahboob | Mesoscale modelling of tensile response and damage evolution in natural fibre reinforced laminates[END_REF]. CDM models are also categorized based on the mathematical approach utilized. Strictly analytical formalisms belong to the group of deterministic approaches. These utilize robust thermodynamic principles, thermodynamic driving forces, and a critical stress threshold to derive mathematical expressions linking the damage variable with the material properties and other descriptions of state. The appearance of micro-defects below such stress thresholds is not considered possible and every result represents a deterministic prediction of material behavior. Alternatively, probabilistic approaches define probabilities attributed to the appearance of micro-defects. The damage is often thought to occur at points in the material where the local ultimate strength is lower than the average stress. Considering the local ultimate stress as a stochastic variable leads to calculated damage evolution that is likewise probabilistic. Such probability can be introduced into a thermodynamic model that describes the material properties to within margins of error [START_REF] Fedelich | A stochastic theory for the problem of multiple surface crack coalescence[END_REF][START_REF] Rupil | Identification and Probabilistic Modeling of Mesocrack Initiations in 304L Stainless Steel[END_REF]. The probabilistic models have been built mostly to treat fracture in brittle materials, such as ceramics [START_REF] Hild | On the probabilistic-deterministic transition involved in a fragmentation process of brittle materials[END_REF] or cement [START_REF] Grasa | A probabilistic damage model for acrylic cements. 
Application to the life prediction of cemented hip implants[END_REF], which demonstrate statistical scatter in direct relation with damage such as crack initiation and coalescence [START_REF] Meraghni | Implementation of a constitutive micromechanical model for damage analysis in glass mat reinforced composite structures[END_REF]. Probabilistic modeling may be a useful tool in fatigue life analysis of SMA bodies, given the scattering observed in the thermomechanical response of nearly identical test samples demonstrated in experimental works [START_REF] Figueiredo | Low-cycle fatigue life of superelastic NiTi wires[END_REF][START_REF] Nemat-Nasser | Superelastic and cyclic response of NiTi SMA at various strain rates and temperatures[END_REF]. Relevant experiments (Scirè Mammano and Dragoni, 2014) determine the number of cycles to failure in NiTi SMA wires by considering samples submitted to a series of cyclic load fatigue tests at increasing strain rates. It is evident from such works that the fatigue life is to some degree uncertain and the use of stochastic models might increase prediction accuracy overall. Several fatigue failure models for SMAs have been developed based on experimental observations. [START_REF] Tobushi | Low-Cycle Fatigue of TiNi Shape Memory Alloy and Formulation of Fatigue Life[END_REF] have proposed an empirical fatigue life equation, similar to a Coffin-Manson law, that depends on strain amplitude, temperature, and frequency of the cycles. This first model was compared to rotating-bending fatigue tests. A modified Manson-Coffin model was further proposed by [START_REF] Maletta | Fatigue of pseudoelastic NiTi within the stress-induced transformation regime: a modified CoffinManson approach[END_REF][START_REF] Maletta | Fatigue properties of a pseudoelastic NiTi alloy: Strain ratcheting and hysteresis under cyclic tensile loading[END_REF] to predict the fatigue life of NiTi SMAs under the stress-controlled cyclic loading conditions. A third Manson Coffin-like relationship has been proposed by Lagoudas et al. (2009b) to determine the irrecoverable strain accumulation of NiTiCu SMAs as a function of the number of cycles to failure for different stress levels, for both partial and complete transformations. Energy-based fatigue life models for SMAs have also been developed, and in particular consider the dissipated energy. [START_REF] Moumni | Fatigue analysis of shape memory alloys: energy approach[END_REF] proposed an empirical power law to predict the fatigue life of super elastic NiTi SMAs. [START_REF] Kan | An energy-based fatigue failure model for super-elastic NiTi alloys under pure mechanical cyclic loading[END_REF] has modified the previous model, replacing the power-law equation by a logarithmic one. Those models were compared with fatigue tests performed on NiTi alloys under uniaxial stress-controlled cyclic loading [START_REF] Kang | Whole-life transformation ratchetting and fatigue of super-elastic NiTi Alloy under uniaxial stress-controlled cyclic loading[END_REF]. [START_REF] Song | Damage-based life prediction model for uniaxial low-cycle stress fatigue of super-elastic NiTi shape memory alloy microtubes[END_REF] has recently proposed a damage-based fatigue failure model, considering three damage mechanisms, (i.e. micro-crack initiation, micro-crack propagation, and martensite transformation induced damage). 
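To make the flavor of these dissipation-energy-based life laws concrete, the following minimal Python sketch accumulates a damage-like quantity as the ratio of dissipated energy to an assumed critical cumulative dissipation, with the per-cycle dissipation taken as a simple power law of the applied stress. All coefficients are placeholders chosen for illustration and do not correspond to the calibrated constants of the cited models.

```python
import math

# Illustrative energy-based fatigue bookkeeping (placeholder coefficients, not the
# calibrated laws of the cited works): damage per cycle is the cycle dissipation
# divided by an assumed critical cumulative dissipation, and failure is declared
# when the accumulated ratio reaches 1.

W_CRIT = 1000.0     # MJ/m^3, assumed critical cumulative dissipated energy at failure
A, m = 1.0e-5, 2.0  # assumed power-law coefficients for the per-cycle dissipation

def cycle_dissipation(stress_mpa):
    """Assumed per-cycle dissipated energy density (MJ/m^3) at a given stress level."""
    return A * stress_mpa ** m

def cycles_to_failure(stress_mpa):
    """Number of identical cycles needed for the damage ratio to reach 1."""
    w = cycle_dissipation(stress_mpa)
    return math.inf if w == 0.0 else W_CRIT / w

for s in (100.0, 200.0, 300.0):
    print(f"{s:5.0f} MPa -> ~{cycles_to_failure(s):8.0f} cycles to failure")
```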
A global damage variable is defined as the ratio of the accumulated dissipation energy at the current number of cycles (N) to the accumulated dissipation energy obtained at the failure life (N_f). A damage-based fatigue failure model is proposed to predict the fatigue life, which depends on the dissipation energy at the stabilized cycle and the dissipation energy at the N-th cycle. It is shown that the model predicts the fatigue life of super-elastic NiTi SMA micro-tubes subjected to uniaxial stress-controlled load cycles. High-cycle fatigue criteria have been developed recently for SMAs. The investigation of SMA cyclic response under elastic shakedown has led to the definition of a Dang Van-type endurance limit for SMA materials [START_REF] Auricchio | A shakedown analysis of high cycle fatigue of shape memory alloys[END_REF]. A shakedown-based model for high-cycle fatigue of shape memory alloys has been developed by [START_REF] Gu | Shakedown based model for high-cycle fatigue of shape memory alloys[END_REF]. Non-proportional multiaxial fatigue of pseudoelastic SMAs has been recently investigated by [START_REF] Song | Non-proportional multiaxial fatigue of super-elastic NiTi shape memory alloy micro-tubes: Damage evolution law and life prediction model[END_REF], which has led to the definition of a multiaxial fatigue model. Although past developments allow determination of the fatigue life of shape memory alloy devices for uniaxial, homogeneous cyclic loadings, the present work focuses on the difficult problem of coupling damage evolution to phase transformation, irrecoverable transformation-induced plastic strain, and general three-dimensional thermomechanical states. To permit the introduction of damage into a previously well-defined and widely accepted class of model for SMA phase transformation (Lagoudas et al. (2012)), probabilistic and anisotropic approaches are avoided. Rather, a deterministic and isotropic model for continuum damage mechanics is proposed, which is compatible with the existing models of thermomechanical response of SMA actuators, including the consideration of generated plastic strain herein [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. Such a model, even once its assumptions are considered, provides the most comprehensive tool for calculating fatigue in SMA actuators to date. The organization of this work is as follows. The motivation of the proposed model, including the need for numerical simulations of cyclic loading in SMA bodies, has been provided in Section 1. Observations motivating specific forms of the evolution equations for damage and irrecoverable strains are overviewed in Section 2. The thermodynamical model is developed in Section 3, with the functional form of the various evolution equations related to the physical mechanisms considered being clearly presented. After some comments on model calibration in Section 4, numerical simulations and their comparison with experimental demonstrations of structural and functional fatigue in SMA bodies are presented in Section 5. Final conclusions are provided in Section 6. Motivating Observations from Previous Studies Studies of SMA actuation fatigue are not as numerous as those focusing on nearly isothermal (i.e., superelastic) response. This is due to both the relative importance of generally isothermal medical devices and the difficulty of applying high numbers of thermal cycles to SMA actuators.
From those SMA actuation fatigue databases that are available in the literature, the experimental studies of actuation fatigue and post-mortem analyses that were carried out on Ni60Ti40 (wt. %) [START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF] and on NiTiHf [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF] have been selected for consideration herein. In those past studies, a widespread distribution of cracks was found to be present in the SMA components at failure, as shown in Fig. 1. This indicates that a progressive damage mechanism was activated during the lifetime of the SMA body. CDM appears to be particularly adapted for modeling such fatigue damage given the continuous and progressive evolution of the multiple defects observed. NiTiHf alloys have received increased attention in recent years owing to their high potential for use in high-temperature actuators (e.g., those having transformation temperatures typically above 100 °C). They can operate at high stress levels without the development of significant plastic strains [START_REF] Karaca | NiTiHf-based shape memory alloys[END_REF]. From the analysis of NiTiHf actuator fatigue tests, we see that the number of cycles to failure increases with decreasing cyclic actuation work. Actuation work is defined as the scalar product of the constant applied stress and the transformation strain recovered each cycle (see Fig. 2a); this metric is made explicit in the short numerical sketch given below. Further, the amount of irrecoverable strain generated seems to be positively correlated with the number of cycles to failure (see Fig. 2b). The development of such strains may be important in predicting the lifetime of actuators formed from a number of SMA materials at some stress levels, specifically at higher stress levels (e.g., 300-600 MPa in [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF]). The study of fatigue in Ni60Ti40 actuator components also shows that the number of cycles to failure increases with decreasing cyclic actuation work (see Fig. 2c). However, the experimental results suggest that in this material loaded to lower stresses (e.g., 100-250 MPa in [START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF]), failure may not be correlated with the accumulation of plastic strain, though such correlation is regularly considered in ductile metals. This is shown for the lower stressed Ni60Ti40 samples (Fig. 2d). The consistent and clear negative correlation between cyclic actuation work and fatigue life in both cases motivates the choice of a thermodynamical model to describe the evolution of damage in shape memory alloys. The fact that generated TRIP might also be correlated to failure at higher stresses only motivates consideration of a stress threshold effect for the coupling between damage and TRIP strains, and this will be addressed for the first time herein. Constitutive Modeling Framework for Phase Transformation and Damage Physical mechanisms associated with cyclic martensitic phase transformation such as transformation strain generation and recovery, transformation-induced plastic strain generation, and fatigue damage accumulation are all taken into account within the thermodynamically consistent constitutive model presented in this section.
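Before detailing the constitutive equations, the cyclic actuation-work metric invoked in the observations above can be made concrete. The following sketch evaluates it for a uniaxial, constant-stress actuation cycle; the stress levels and recovered strains are illustrative values, not data from the cited experiments.

```python
# Cyclic actuation work for a uniaxial constant-stress actuation cycle:
# the applied stress times the strain recovered on heating. Numbers are
# illustrative, not data from the cited experiments.

def actuation_work(stress_mpa, recovered_strain):
    """Specific actuation work per cycle in MJ/m^3 (1 MPa * 1 strain = 1 MJ/m^3)."""
    return stress_mpa * recovered_strain

cycles = [
    {"stress_mpa": 100.0, "recovered_strain": 0.015},
    {"stress_mpa": 200.0, "recovered_strain": 0.022},
    {"stress_mpa": 300.0, "recovered_strain": 0.025},
]

for c in cycles:
    w = actuation_work(c["stress_mpa"], c["recovered_strain"])
    print(f"{c['stress_mpa']:5.0f} MPa, {c['recovered_strain']:.3f} strain "
          f"-> {w:.1f} MJ/m^3 of actuation work per cycle")
```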
Fatigue damage is described utilizing a scalar variable following an isotropic continuum theory of damage. The damage growth rate has been formulated as a function of both the stress state and the magnitude of the recoverable transformation strain such that cyclic actuation work is directly and clearly considered. Transformation-induced plasticity is also considered as per the experimental observations described in the previous section, and its generation depends on the stress state, the magnitude of the transformation strain, and a term that couples plastic strain with damage for stress levels above an assumed material-dependent threshold. The model is based on the framework of [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF], considering further improvements proposed by [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF] for thermodynamical models describing phase transformation that drives multiple complex phenomena. The proposed model focuses on the generation and recovery of transformation strains that occur as a result of martensitic transformation (forward and reverse); martensitic reorientation is not considered given its relative unimportance in SMA cyclic actuator applications, which are the primary motivating application for this work. Concerning damage evolution, it is herein assumed, based on observations, that microscopic crack initiation and propagation are not explicitly linked to the appearance of large plastic strains [START_REF] Bertacchini | Thermomechanical transformation fatigue of TiNiCu SMA actuators under a corrosive environment -Part I: Experimental results[END_REF][START_REF] Calhoun | Actuation fatigue life prediction of shape memory alloys under the constant-stress loading condition[END_REF], but rather that the process of martensitic phase transformation may be more important. In fact, it has been shown that the localized nucleation of martensite around crack tips during forward transformation can decrease the fracture toughness and induce localized propagation of cracks, even under moderate stresses [START_REF] Baxevanis | Fracture mechanics of shape memory alloys: review and perspectives[END_REF]. The evolution of damage must therefore be coupled with the martensitic transformation mechanisms directly. The framework thus adopted follows closely the work of [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF] and [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF] for the development of TRIP strain and [START_REF] Lemaitre | Mechanics of solid materials[END_REF] for the coupling of a physical mechanism such as plasticity or phase transformation with damage. To summarize, the following internal state variables associated with multiple inelastic strain mechanisms are tracked during both forward and reverse transformations: • The inelastic transformation strain ε^t, which considers all inelastic strains associated with different physical phenomena occurring during transformation (i.e., it is composed of contributions from crystallographic transformation, plasticity, and damage).
Such a transformation strain is decomposed into two contributions, ε F and ε R , representing the inelastic strain induced by forward transformation and by reverse transformation, respectively. The inelastic transformation strain is further split into a part that is recoverable (denoted tt) and a portion that is not (the TRIP strain, denoted tp), to obtain four total contributions: ε tt-F , ε tp-F , ε tt-R and ε tp-R , such that: ε t = ε F + ε R = ε tt-F + ε tp-F + ε tt-R + ε tp-R . (1) • The scalar total martensitic volume fractions induced by forward transformation (into martensite) and by reverse transformation (into austenite) (ξ F , ξ R ), • The scalar transformation hardening energies induced by forward transformation and by reverse transformation (g F , g R ), • The scalar accumulated transformation-induced plastic strains accompanying forward transformation and reverse transformation (p F , p R ), • The scalar plastic hardening energies induced by forward transformation and reverse transformation (g tp-F , g tp-R ), • The scalar (i.e., isotropic) damage accumulation induced during forward transformation and reverse transformation (d F , d R ). Considering the point-wise model as describing a representative volume element of volume V (Bo and Lagoudas, 1999a) and acknowledging that both forward and reverse transformations can occur simultaneously at various points within such a finite volume, the following two rate variables are introduced: (i) ξF represents the fractional rate of change of the martensitic volume V M induced by forward transformation [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF]: ξF = V F M / V . (2) Similarly, (ii) ξR represents the rate of change of the martensitic volume fraction (MVF) induced by reverse transformation: ξR = - V R M / V . ( 3 ) The rate of the total martensitic volume fraction ξ is then: ξ = ξF - ξR , (4) which leads to the definition of the total volume fraction of martensite: ξ = ∫ t 0 ξF dτ - ∫ t 0 ξR dτ = ξ F - ξ R . ( 5 ) Note that ξ F and ξ R always take positive values, which simplifies the thermodynamic definition of the model. The physical limitation related to the definition of the total volume fraction is expressed as: 0 ≤ ξ ≤ 1 ⇔ ξ R ≤ ξ F ≤ 1 + ξ R . (6) Similarly, the rates of the various strain measures, of the total accumulated plastic strain p, of the total transformation hardening energy g t , of the total plastic hardening energy g tp , and of the total damage d are taken to be the sums of the contributions from both forward and reverse transformations: εt-F = εtt-F + εtp-F εt-R = εtt-R + εtp-R εtt = εtt-F + εtt-R εtp = εtp-F + εtp-R ξ = ξF - ξR ġt = ġF + ġR ṗ = ṗF + ṗR ġtp = ġtp-F + ġtp-R ḋ = ḋF + ḋR . (7) In this way, two sets of internal variables, respectively related to forward transformation (into martensite) and to reverse transformation (into austenite), are defined: ζ F = {ξ F , ε tt-F , ε tp-F , g F , p F , g tp-F , d F }, ζ R = {ξ R , ε tt-R , ε tp-R , g R , p R , g tp-R , d R }, ζ = {ζ F , ζ R } (8) To rigorously derive a three-dimensional model for damage accumulation in SMA materials that explicitly couples actuation work to material degradation, the thermodynamics of irreversible processes is utilized. The fundamentals are presented in Annex A.
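For readers implementing the model, the bookkeeping implied by Eqs. (1)-(8) can be sketched as follows. This is a minimal one-dimensional illustration with invented names, not an excerpt of any existing implementation; tensorial strains are reduced to scalars for brevity.

```python
from dataclasses import dataclass

@dataclass
class SMAState:
    """Scalar bookkeeping of the forward/reverse internal variables (Eqs. 1-8)."""
    xi_F: float = 0.0      # martensite fraction produced by forward transformation
    xi_R: float = 0.0      # martensite fraction removed by reverse transformation
    eps_tt_F: float = 0.0  # recoverable transformation strain (forward)
    eps_tt_R: float = 0.0  # recoverable transformation strain (reverse)
    eps_tp_F: float = 0.0  # TRIP strain (forward)
    eps_tp_R: float = 0.0  # TRIP strain (reverse)
    d_F: float = 0.0       # damage accumulated during forward transformation
    d_R: float = 0.0       # damage accumulated during reverse transformation

    @property
    def xi(self) -> float:        # total martensite volume fraction, Eq. (5)
        return self.xi_F - self.xi_R

    @property
    def damage(self) -> float:    # total damage, Eq. (7)
        return self.d_F + self.d_R

    @property
    def eps_t(self) -> float:     # total transformation strain, Eq. (1)
        return self.eps_tt_F + self.eps_tp_F + self.eps_tt_R + self.eps_tp_R

    def apply_forward_increment(self, d_xi, Lam_tt, f_tp, f_td):
        """Integrate the forward evolution equations (Eq. 14) over a small increment d_xi."""
        assert d_xi >= 0.0
        self.xi_F += d_xi
        self.eps_tt_F += Lam_tt * d_xi
        self.eps_tp_F += f_tp * d_xi      # direction tensor omitted in this 1D sketch
        self.d_F += f_td * d_xi
        # physical limitation 0 <= xi <= 1, Eq. (6)
        assert self.xi_R <= self.xi_F <= 1.0 + self.xi_R
```

A reverse-transformation counterpart would update xi_R, eps_tt_R, eps_tp_R and d_R in the same way.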
Thermodynamic derivation of the proposed model
The total Gibbs free energy G is additively decomposed into a thermoelastic contribution G A from regions of the RVE in the austenitic phase, a thermoelastic contribution G M from regions of the RVE in the martensitic phase, and a mixing term G mix that accounts for non-thermoelastic processes. Given the state variables chosen for the description of the thermomechanical mechanisms, the Gibbs energy for the overall SMA material is written: G = (1 - ξ)G A (σ, θ, d) + ξG M (σ, θ, d) + G mix (σ, ε t , g t , g tp ), (9) The part of the Gibbs free energy related to the martensitic transformation only is taken from the model of [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF], given the conventional dependence of the elastic response on damage [START_REF] Lemaitre | Engineering damage mechanics: Ductile, creep, fatigue and brittle failures[END_REF], such that G β (σ, θ, d) = - σ : S β : σ / (2(1 - d)) - σ : α(θ - θ 0 ) + c β [(θ - θ 0 ) - θ ln(θ/θ 0 )] - η β 0 θ + E β 0 , (10) for β = A, M . The energy of phase mixing is given as: G mix (σ, ε t , g t , g tp ) = σ : ε t + g t + g tp . (11) In the expressions above, S is the compliance tensor (4th order), α is the thermal expansion tensor (2nd order), c 0 is a material parameter that approximates the specific heat capacity (the additional terms arising from thermo-inelastic coupling being small [START_REF] Rosakis | A thermodynamic internal variable model for the partition of plastic work into heat and stored energy in metals[END_REF]), η 0 is the initial entropy, E 0 is the initial internal energy, and θ 0 is the initial or reference temperature. Details about the selection of the thermoelastic contribution of the phases, especially considering the term related to heat capacity, are given in [START_REF] Chatzigeorgiou | Thermomechanical behavior of dissipative composite materials[END_REF]. It is assumed [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF][START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF][START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF] that the thermoelastic parameters (including the specific heat) that enter the expression of the Gibbs free energy for each phase can be regrouped into phase-dependent parameters as experiments warrant (i.e., S(ξ), α(ξ), c(ξ), η 0 (ξ) and E 0 (ξ)), where a linear rule of mixtures is assumed. For example, S(ξ) is linearly phase-dependent and is written as S(ξ) = S A - ξ(S A - S M ) = S A - ξ∆S, (12) where S A and S M denote the compliance tensors of austenite and martensite, respectively, and the operator ∆ denotes the difference in any material constant as measured in the pure martensite and pure austenite phases. Conventionally, standard isotropic forms are assumed sufficient for S and α in polycrystals.
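The rule of mixtures of Eq. (12) and the damage-degraded elastic response implied by the 1/(1-d) factor in Eq. (10) can be illustrated with the short sketch below. It assumes Voigt (6x6) notation and isotropic phases; the moduli used are the NiTi values listed later in Table 3, while the damage level, the martensite fraction and the applied stress are arbitrary placeholders.

```python
import numpy as np

def isotropic_compliance(E: float, nu: float) -> np.ndarray:
    """6x6 Voigt compliance of an isotropic phase (engineering shear strains)."""
    S = np.zeros((6, 6))
    S[:3, :3] = -nu / E
    np.fill_diagonal(S[:3, :3], 1.0 / E)
    np.fill_diagonal(S[3:, 3:], 2.0 * (1.0 + nu) / E)
    return S

def mixture_compliance(S_A: np.ndarray, S_M: np.ndarray, xi: float) -> np.ndarray:
    """Linear rule of mixtures for the compliance tensor, Eq. (12)."""
    return S_A - xi * (S_A - S_M)

def elastic_strain(S: np.ndarray, sigma: np.ndarray, d: float) -> np.ndarray:
    """Thermoelastic strain with the damage factor 1/(1-d) of Eq. (10)."""
    return (S @ sigma) / (1.0 - d)

S_A = isotropic_compliance(E=47e3, nu=0.3)   # austenite, MPa^-1 (Table 3 values)
S_M = isotropic_compliance(E=24e3, nu=0.3)   # martensite, MPa^-1 (Table 3 values)
sigma = np.array([200.0, 0, 0, 0, 0, 0])     # uniaxial stress, MPa (placeholder)
eps = elastic_strain(mixture_compliance(S_A, S_M, xi=0.5), sigma, d=0.05)
print(eps[0])  # axial elastic strain at half transformation with 5% damage
```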
Recalling that the transformation strain ε t includes all deformations associated with martensitic transformation, recoverable and irrecoverable, the following thermodynamical quantities are expressed, recalling the method proposed by [START_REF] Germain | Continuum thermodynamics[END_REF] and invoking ( 8), (A.15), and ( 10): ε = - ∂G ∂σ = 1 1 -d S : σ + α (θ -θ 0 ) + ε t , η = - ∂G ∂θ = α : σ + c 0 ln θ θ 0 + η 0 , γ loc = - ∂G ∂ε t : εt - ∂G ∂g t ġt - ∂G ∂ξ ξ - ∂G ∂d ḋ - ∂G ∂p ṗ - ∂G ∂g tp ġtp , = σ : εt -ġt - ∂G ∂ξ ξ - ∂G ∂d ḋ -g tp , r = -c 0 θ -θα : σ + γ loc , . (13) To proceed with the definition of the evolution equations associated with the various physical mechanisms, we consider that the evolution of all inelastic strains is completely related to the underlying process of phase transformation, as assumed for the TRIP effect elsewhere [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF][START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF] and captured herein in (A.6). The following specific evolution equations are then considered, where all the rate quantities are linked with the rate of change of the martensite volume fraction: εtt-F = Λ tt-F ξF , ġF = f t-F ξF , ṗF = f tp-F ξF , εtp-F = Λ tp-F ṗF = Λ tp-F f tp-F ξF , ġtp-F = H tp-F ṗF = f tp-F ξF , ḋF = f td-F ξF , εtd-F = Λ td-F ḋ = Λ td-F f td-F ξF , (14) and εtt-R = Λ tt-R ξR , ġR = f t-R ξR , ṗR = f tp-R ξR , εtp-R = Λ tp-R ṗR = Λ tp-R f tp-R ξR , ġtp-R = H tp-R ṗR = f tp-R f tp-R ξR , ḋR = f td-R ξR , εtd-R = Λ td-R ḋ = Λ td-R f td-R ξR . ( 15 ) blueIn the above equations, Λ tt-F represents the evolution tensor (that is, the direction of the strain rate) for the recoverable part of the transformation strain during forward transformation, while Λ tp-F and Λ td-F represent the irrecoverable part related to plasticity and damage, respectively (during forward transformation). During reverse transformation, the three evolution tensors are denoted as Λ tt-R , Λ tp-R and Λ td-R . The functional forms • Forward transformation (set A F ): A ξ F = A ξ = 1 2 σ : ∆S : σ + σ : ∆α (θ -θ 0 ) -ρ∆c (θ -θ 0 ) -θln θ θ 0 + ρ∆s 0 θ -ρ∆E 0 , A ε F = A ε tt-F = A ε tp-F = A ε td-F = A ε t = σ, A g F = -1 A p F = A p = 0 A g tp-F = -1 A d F = A d = 1 2(1 -d) 2 σ : S : σ, (16) • Reverse transformation (set A R ): A ξ R = -A ξ = - 1 2 σ : ∆S : σ -σ : ∆α (θ -θ 0 ) + ρ∆c (θ -θ 0 ) -θln θ θ 0 -ρ∆s 0 θ + ρ∆E 0 , A ε R = A ε tt-R = A ε tp-R = A ε td-R = A ε t = σ, A g t-R = 1 A p R = A p = 0 A g tp-R = -1 A d R = A d = 1 2(1 -d) 2 σ : S : σ. (17) Transformation limits Since phase transformation, TRIP, and damage are assumed to be rate-independent phenomena, a threshold for the activation of such mechanisms that depends primarily on thermodynamic forces should be defined, [START_REF] Edelen | On the Characterization of Fluxes in Nonlinear Irreversible Thermodynamics[END_REF]. Specifically, the evolution of all internal variables ζ should respect the following, where S is a domain in the space of the components of A having boundary ∂S: ζ = 0 → A ∈ S + ∂S, ζ = 0 → A ∈ ∂S. 
( 18 ) Following the methodology introduced by [START_REF] Germain | Cours de Mécanique des Miliex Continus: Tome I-Théorie Générale[END_REF] and referred as generalized standard materials by [START_REF] Halphen | Sur les matériaux standards généralisés[END_REF], if ∂S is a surface with a continuous tangent plane and if Φ (A) is a function continuously differentiable with respect to A, zero on ∂S, and negative in S, then one can write: A ∈ S , ζ = 0 A ∈ ∂S , ζ = λgradΦ ⇔ ζα = λ ∂Φ ∂A α , λ ≥ 0. ( 19 ) Further, if state variables are included as parameters and the domain S remains convex, the second law of thermodynamics is satisfied and maximum dissipation principle as well [START_REF] Halphen | Sur les matériaux standards généralisés[END_REF]. Note that the processes of forward and reverse transformations are considered independently, in the sense that dissipation related to the rate of the internal variables defined for forward transformation ζ F and ζ R (cf. ( 16) and ( 17)) should be independently non-negative, i.e.: A F : ζF ≥ 0 ; A R : ζF ≥ 0. ( 20 ) The two criteria for forward and reverse transformations are based on the ones proposed elsewhere [START_REF] Qidwai | On the Thermodynamics and Transformation Surfaces of Polycrystalline {N}i{T}i Shape Memory Alloy Material[END_REF]; [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]; Chatziathanasiou et al. (2015)): Φ F = ΦF + A ξ F -f t-F (ξ) + H tp p -Y t-F , Φ R = -ΦR + A ξ R + f t-R (ξ) -H tp p + Y t-R . ( 21 ) Those two functions are null on the surfaces ∂S F and ∂S R of the convex domains S F and S R , respectively, if the important functions ΦF (σ) and ΦR (σ) are convex. Considering (19) and assuming that λ is a positive multiplier, the rate of the internal variables are given as: ξF = λ ∂Φ F ∂A ξ F = λ, εtt-F = λ ∂Φ F ∂A ε tt-F = ξF ∂Φ F ∂σ , ξR = λ ∂Φ R ∂A ξ R = λ, εtt-R = λ ∂Φ R ∂A ε tt-R = ξR ∂Φ R ∂σ . (22) Comparing ( 14) and ( 15) we see: Λ tt-F = ∂Φ F ∂σ , Λ tt-R = ∂Φ R ∂σ . ( 23 ) 3.3. Choice of Functional Forms Fully recoverable martensitic transformation The transformation functions ΦF and ΦR are the particular terms in the transformation criteria that consider the shape of the bounding surfaces in the six-dimensional stress hyperspace; here a modified Prager function is chosen that accounts for tension-compression asymmetry but not anisotropy [START_REF] Bouvet | A phenomenological model for pseudoelasticity of shape memory alloys under multiaxial proportional and nonproportional loadings[END_REF][START_REF] Grolleau | Assessment of tension-compression asymmetry of NiTi using circular bulge testing of thin plates[END_REF]. The following formulation closely follows [START_REF] Patoor | Micromechanical Modelling of Superelasticity in Shape Memory Alloys[END_REF], [START_REF] Peultier | A simplified micromechanical constitutive law adapted to the design of shape memory applications by finite element methods[END_REF], and [START_REF] Chemisky | Constitutive model for shape memory alloys including phase transformation, martensitic reorientation and twins accommodation[END_REF]. It predicts that the initiation of SMA forward transformation depends on the stress tensor invariants and asymmetry-related parameters. Specifically, ΦF (σ) =     3J 2 (σ) 1 + b J 3 (σ) J 3/2 2 (σ) 1 n -k σ     H cur (σ). ( 24 ) The terms J 2 (σ) and J 3 (σ) denote the second and third invariants of the deviatoric part σ . 
These are given as: J 2 (σ) = 1 2 σ ij σ ij , and J 3 (σ) = 1 3 σ ij σ jk σ ki , (25) using summation notation for repeated indices. Constants b and n are associated with the ratio between stress magnitudes needed to induce forward transformation under tension and compression loading. Convexity is ensured under specific conditions detailed in [START_REF] Chatziathanasiou | Phase Transformation of Anisotropic Shape Memory Alloys: Theory and Validation in Superelasticity[END_REF]. The evolution of the maximum transformation strain H cur is represented by the following decaying exponential function (Hartl et al., 2010a): H cur (σ) =    H min ; σ ≤ σ crit , H min + (H sat -H min )(1 -e -k(σ-σ crit ) ) ; σ > σ crit . (26) Here σ denotes the Mises stress and H min corresponds to the minimal observable transformation strain magnitude generated during full transformation under tensile loading (or the two way shape memory strain magnitude). The parameter H sat describes the maximum possible recoverable full transformation strain generated under uniaxial tensile loading. Additionally, σ crit denotes the critical Mises equivalent stress below which H cur = H min and the parameter k controls the rate at which H cur exponentially evolves from H min to H sat . The threshold for forward transformation introduced in (3.2) is not constant and is given as [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]: Y t-F = Y crit-F + Dσ : Λ ε F (27) The variables D and Y crit-F are model constants associated with the differing influences of stress on transformation temperatures for forward and reverse transformation. They are calculated from knowledge of other material constants [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]. During forward transformation, the transformation strain is oriented in the direction of the applied stress, which motivates the selected J 2 -J 3 form of the direction tensor Λ t-F . During reverse transformation, it is assumed that the direction of transformation strain recovery is instead governed by the average orientation of the martensite. This is represented in an average sense by the value of the macroscopic transformation strain ε tt = ε tt-F + ε tt-R as normalized by the martensite volume fraction ξ. Specifically, we assume Λ t-R = ε tt-F ξ F + ε tt-R ξ R . ( 28 ) Given the assumed associativity for the reverse transformation strain (see ( 23)), the transformation function ΦR for reverse transformation is then expressed as: ΦR = σ ε tt-F ξ F + ε tt-R ξ R . ( 29 ) After ( 27), the threshold for reverse transformation is expressed as: Y t-R = Y crit-R -Dσ : εt , (30) where Y crit-R is another material constant (usually taken equal to Y crit-F ). An evolution equation also links the time rate of changes of the hardening energies ( ġF and ġR ) with those of martensite ( ξF and ξR ), according to ( 14) and ( 15). Then f t-F and f t-R are referred to as the forward and reverse hardening functions, respectively, which define the current transformation hardening behavior. Note that g t , being a contribution to the Gibbs free energy, cannot depend on the time derivative of the martensitic volume fraction but only on the transformation history. 
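To fix ideas on the transformation criterion itself, a sketch of one possible reading of Eqs. (24)-(26) follows. The exact bracketing of Eq. (24) is reconstructed under the assumption that, for b = 0 and n = 2, the bracketed Prager stress reduces to the Mises stress, so that the uniaxial value of the function becomes σH cur (σ) as exploited later in the text; the parameter values are those reported in Table 2.

```python
import numpy as np

def invariants_dev(sigma: np.ndarray):
    """Second and third invariants of the deviatoric stress, Eq. (25)."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    J2 = 0.5 * np.tensordot(s, s)
    J3 = np.trace(s @ s @ s) / 3.0
    return J2, J3

def H_cur(sigma_mises, H_min, H_sat, k, sigma_crit):
    """Stress-dependent maximum transformation strain, Eq. (26)."""
    if sigma_mises <= sigma_crit:
        return H_min
    return H_min + (H_sat - H_min) * (1.0 - np.exp(-k * (sigma_mises - sigma_crit)))

def phi_hat_forward(sigma, b, n, H_min, H_sat, k, sigma_crit):
    """One reading of the modified Prager transformation function of Eq. (24)."""
    J2, J3 = invariants_dev(sigma)
    mises = np.sqrt(3.0 * J2)
    prager = (3.0 * J2 * (1.0 + b * J3 / J2 ** 1.5)) ** (1.0 / n) if J2 > 0 else 0.0
    return prager * H_cur(mises, H_min, H_sat, k, sigma_crit)

sigma = np.diag([300.0, 0.0, 0.0])  # uniaxial tension, MPa
print(phi_hat_forward(sigma, b=0.0, n=2.0, H_min=0.005, H_sat=0.0277,
                      k=0.0172, sigma_crit=120.0))
```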
As per ( 14) and ( 15), the evolution equation associated with g t changes with the transformation direction such that, given the reversibility of martensitic transformation in SMAs, in the absence of other dissipative mechanisms the Gibbs free energy should take on the same value for the same state of the external variables upon completion of any full transformation loop. If all contributions to the Gibbs free energy with the exception of g t are returned to their original values after a full transformation loop, the following condition must be satisfied to fully return G r to its initial state: 1 0 f t-F dξ + 0 1 f t-R dξ = 0. ( 31 ) This necessary condition restricts the choice of hardening function for forward and reverse transformations and constrains the calibration accordingly. The specification of a form for the hardening functions that describe smooth transition from elastic to transformation response is another key contribution of the model proposed by [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]: f t-F (ξ) = 1 2 a 1 (1 + ξ n 1 -(1 -ξ) n 2 ) + a 3 , f t-R (ξ) = 1 2 a 2 (1 + ξ n 3 -(1 -ξ) n 4 ) -a 3 . ( 32 ) Here, n 1 , n 2 , n 3 and n 4 are real exponents in the interval (0, 1] 1 . This form is selected here since it is specifically adapted to the response of polycrystalline SMA systems wherein the transformation hardening can be quite "smooth", especially after the completion of several cycles. Such smoothness is tuned by the adjustment of the parameters {n 1 , n 2 , n 3 , n 4 }. Forms related to the evolution equations associated with damage The damage accumulation functions f td-F ( Φfwd ) and f td-R ( Φrev ) are based on a linear accumulation law [START_REF] Lemaitre | Mechanics of solid materials[END_REF] written in terms of the integer number N of cycles completed such that, ∆d D crit = ∆N N f , ( 33 ) where N f and D crit are the number of cycles and local damage associated with local catastrophic failure, respectively. Note that failure will be defined as the state at which d reaches the critical value D crit . This linear accumulation law can be written to consider continuous evolutions over time: dd D crit = dN N f ⇒ , ḋ D crit = Ṅ N f . ( 34 ) Considering fatigue occurs only as a consequence of transformation cycles (full or partial) and that a full cycle corresponds in the evolution of the martensitic volume fraction from 0 to 1 and back to 0, (34) can be rewritten to consider both forward and reverse transformations as (see( 14),( 15) ) ḋF = ξF D crit 2N f = ξF f td-F ḋR = ξR D crit 2N f = ξR f td-R . ( 35 ) In this way, the damage accumulation functions are defined. blueWhile it is postulated in Section 3 that damage may evolve actively during forward transformation, here we propose a general formulation that considers damage evolution during both forward and reverse transformation. Future experimental studies will be needed to ascertain the relative importance of forward versus reverse transformation as mechanisms for damage evolution. From previous experimental studies, it has been shown that the fatigue life N f of SMA actuators is correlated to the cyclic mechanical work they perform [START_REF] Calhoun | Actuation fatigue life prediction of shape memory alloys under the constant-stress loading condition[END_REF]. 
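Before relating N f to the actuation work, the linear accumulation rule of Eqs. (33)-(35) can be summarized by the following sketch. It assumes full transformation cycles at a constant N f and equal damage contributions from the forward and reverse half-cycles; all numerical values are placeholders.

```python
def damage_increment(d_xi_F: float, d_xi_R: float, N_f: float, D_crit: float) -> float:
    """Damage increment per Eq. (35): each half-cycle of full transformation
    contributes D_crit / (2 N_f), proportionally to the transformed fraction."""
    return (d_xi_F + d_xi_R) * D_crit / (2.0 * N_f)

# A full actuation cycle = forward (0 -> 1) plus reverse (1 -> 0) transformation.
d, D_crit, N_f = 0.0, 0.14, 3000.0   # placeholder values
cycles = 0
while d < D_crit:
    d += damage_increment(1.0, 1.0, N_f, D_crit)
    cycles += 1
print(cycles)  # approximately recovers N_f when N_f is held constant
```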
During isobaric uniaxial fatigue testing (the main response motivating this more general study), the actuation work per unit volume done in each half cycle by a constant uniaxial stress σ, distributed homogeneously over a specimen generating the uniaxial strain ε t , is the product σε t . As an empirical measure, this so-called actuation work neglects the small inelastic permanent strains generated during a single transformation cycle, such that σε t ≈ σε tt-f . It was shown that a power law was sufficient to capture the cycles to failure via N f = (σε tt-f / C d ) -γ d . ( 36 ) Examining ( 24) in such a case of uniaxial loading and assuming small values of b, we see ΦF uniax = σH cur (σ). For the full transformation considered in these motivating studies, H cur (σ) = ε tt-f by definition of ( 26), and thus ΦF uniax = σε tt-f . Motivated by this relationship, we make in this work a generalized equivalence between the power law, shown to be effective in one dimension, and its applicability in three dimensions (pending future multi-axial studies). Finally, we note that in the case of full transformation under proportional loading (e.g., in the uniaxial case), it can be shown that ΦF = ΦR . This allows us to then introduce N f = (ΦF / C d ) -γ d - N 0 f = (ΦR / C d ) -γ d - N 0 f . (37) Here, N 0 f is a parameter linked to the actuation work required for a static failure (N f = 0), while C d and γ d are parameters characteristic of the dependence of the number of cycles to failure on the actuation work. Combining ( 35) and (37), and currently assuming that damage accumulates equally during forward and reverse transformation, the final forms of the damage functions are: f td-F ( Φ) = (D crit / 2) [ (ΦF / C d ) -γ d - N 0 f ] -1 , f td-R ( Φ) = (D crit / 2) [ (ΦR / C d ) -γ d - N 0 f ] -1 . ( 38 ) Such forms, obtained from observations of isobaric uniaxial experiments, substantially define the evolution of damage and are applicable to a wide range of thermomechanical loadings. Obviously, a large experimental effort is required to validate this critical extension from one-dimensional (uniaxial) to three-dimensional conditions, where situations such as non-proportional loading or partial transformation must be considered; in this work only isobaric actuation cycles will be considered in the discussions of experimental validation.
Forms of the evolution equations associated with plasticity
The transformation plasticity magnitude function f tp (ξ) is inspired by past works [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF][START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. Several conclusions are drawn considering also the experimental observations by [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF] from actuation fatigue tests where specimens were thermally cycled under various constant stress levels (see Fig. 3) 2 : • For moderate stress levels (200 MPa; Fig. 3a), after a rapid increase in accumulated plastic strain, a stable regime is observed (after 1000 cycles). The plastic strain accumulates linearly from cycle to cycle during this stable regime up to the point of failure.
Similar response has been observed on Ni60Ti40 alloys [START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF] and NiTi alloys [START_REF] Lagoudas | Shape memory alloys: modeling and engineering applications[END_REF]. • For higher actuation stress levels (400 MPa; Fig. 3b), a transient regime is first observed, as with the moderate stress levels. While an apparent stable regime is observed, one can observe a slight increase in plastic strain accumulation from cycle to cycle prior to failure. • At the highest feasible stress levels (600 MPa; Fig. 3c), the same initial transient regime is observed; after which an apparent stabilized regime is also observed, followed by an important and continuous increase of the plastic strain rate up to failure. blueFrom these experiments the effect of stress amplitude is clear. At high stress levels the rate of change of the irrecoverable strain increases from about the half lifetime of the sample. This behavior is characteristic of a change in the material's response, and is generally explained through stress concentration due to the development of defects [START_REF] Van Humbeeck | Cycling effects. Fatigue and degradation of shape memory alloys[END_REF][START_REF] Hornbogen | Review Thermo-mechanical fatigue of shape memory alloys[END_REF]. The functional form of the irrecoverable strain evolution should therefore account for that effect by considering a coupling with damage above a critical stress threshold, since this coupling is only observed at a high stress. The following evolution law for plastic strains is thus proposed: f tp-F (p) = w tp C tp 0 ΦF C tp γtp C tp 1 p + e -p C tp 2 + σ -σ Y tp σ Y tp αtp λ tp , f tp-R (p) = (1 -w tp )C tp 0 ΦR C tp γtp C tp 1 p + e -p C tp 2 + σ -σ Y tp σ Y tp αtp λ tp , (39) with λ tp = λ tp ( d D crit , 1 - D coa D crit , p 0 ), = λtp ( d, D, p 0 ), (40) λtp ( d, D, p 0 ) =    p 0 d (1 -d) -2 d <= h p 0 (1 -h) -2 + 2 h (1 -h) -3 ( d -h) + h(1 -h) -2 d > h, (41) with h = 1 -D. The function λ tp is a typical level set power law function that depends on the current value of damage, the critical value for damage D crit , and a constant D coa that indicates the change of regime of the evolution of plastic strains. blueMethodology of Model Parameters Identification The entire three-dimensional constitutive model for shape memory alloys experiencing cycling fatigue requires four sets of parameters to be calibrated: • The thermoelastic model parameters, • The parameters associated with phase transformation criteria (e.g., the conventional phase diagram), • The parameters characteristic of damage accumulation, • The parameters characteristic of TRIP accumulation. These are summarized in Table 1. The thermoelastic parameters of martensite and austenite blue(e.g., Young's moduli, coefficients of thermal expansion, Poisson's ratios) are usually calibrated from mechanical and thermal uniaxial loadings, where loads are applied at temperatures outside of transformation regions. The parameters qualifying the phase diagram (M s , M f , A s , A f , C A , C M ) along with those contained in the functional form of the maximum transformation strain H cur are calibrated based on several isobaric thermal cycles prior to the accumulation of substantial damage or TRIP. 
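Among the TRIP-related quantities to be identified below is the damage coupling function λ tp of Eqs. (40)-(41). A sketch of one plausible reading of that function (a power law in the normalized damage, continued linearly beyond the changeover level) is given here; the parameter values used are those later reported in Table 2 and serve only as an illustration.

```python
def lambda_tp(d: float, D_crit: float, D_coa: float, p0: float) -> float:
    """One reading of Eqs. (40)-(41): power-law coupling with damage below the
    changeover level h = D_coa / D_crit, tangent (linear) extension above it."""
    dn = d / D_crit              # normalized damage, d-hat
    h = D_coa / D_crit           # changeover level, h = 1 - D-hat
    if dn <= h:
        return p0 * dn / (1.0 - dn) ** 2
    slope = p0 * ((1.0 - h) ** -2 + 2.0 * h * (1.0 - h) ** -3)
    return p0 * h / (1.0 - h) ** 2 + slope * (dn - h)

# Illustration with the NiTiHf values identified later (Table 2): the coupling
# term grows steeply as the damage approaches its critical value.
for d in (0.0, 0.05, 0.10, 0.13):
    print(d, round(lambda_tp(d, D_crit=0.14, D_coa=0.07, p0=1.86), 3))
```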
The identification of the thermodynamical parameters of the model (ρ∆η 0 , ρ∆E 0 , a 1 , a 2 , a 3 , Y t 0 , D) and of the material parameters for phase transformation is detailed in [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]. Given the complexity of the functional forms, especially for the evolution of damage and TRIP, the parameters are generally identified utilizing an optimization algorithm that minimizes a cost function, defined as the square difference between the experimental measurements and the simulated response, following the methodology found in [START_REF] Meraghni | Parameters identification of fatigue damage model for short glass fiber reinforced polyamide (PA6-GF30) using digital image correlation[END_REF]. The algorithm utilized in this work is a combined gradient-based/genetic optimization scheme, which has been used to successfully determine the transformation characteristics of three-dimensional SMA structures [START_REF] Chemisky | Analysis of the deformation paths and thermomechanical parameter identification of a shape memory alloy using digital image correlation over heterogeneous tests[END_REF]. The suggested identification procedure, used to present the validation cases in the next section, consists of the following sequence: 1. Determination of the parameters of the transformation strain H cur functional form via the optimization algorithm (objective function defined in terms of the transformation strain magnitude with respect to a stress value). 4. The parameters for the evolution of TRIP are evaluated using the evolution of the uniaxial irrecoverable strain ε p with respect to the number of cycles. In the present approach, the parameter D coa has been set to 0.05 (50% of D crit ), since it is clear that a change of regime occurs at the mid-life of the NiTiHf actuators loaded at high stress (see Fig. 3c), attributed to the evolution of damage. Note that 0 ≤ D coa ≤ D crit ≤ 1. The parameter w tp requires some specific thermomechanical loading path for its identification; it usually takes values greater than 0.5 and is constrained to stay within the range 0 ≤ w tp ≤ 1 [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. Since the half-cycles were not available in the database utilized to identify the parameters of the Ni60Ti40 and NiTiHf alloys, this value has been arbitrarily set to 0.6. The remaining parameters C tp 0 , C tp 1 , C tp , γ tp , C tp 2 , σ Y tp , α tp , p 0 have been identified utilizing the optimization algorithm based on the experimental results for the evolution of the irrecoverable plastic strain as a function of the number of cycles. All these parameters must take positive values.
Comparison of Experimental Results
This new model for the description of functional and structural damage has been specifically formulated to capture the combined effect of phase transformation, transformation-induced plasticity and fatigue damage of polycrystalline SMAs subjected to general three-dimensional thermomechanical loading, and has been implemented in the 'smartplus' library [START_REF] Chemisky | smartplus : Smart Material Algoritjms and Research Tools[END_REF].
While the capabilities of such a modeling approach to capture the effects of phase transformation have already been demonstrated by [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF] and [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF], here we specifically consider the evolution of damage and TRIP strains. The set of experiments utilized to validate the proposed model considers specimens loaded uniaxially, in the austenitic condition, to different constant stress levels and then subjected to thermally-induced transformation cycles up to failure. The parameters are taken from [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF], where NiTiHf actuators were tested at three relatively high constant stress levels (i.e., 200, 400 and 600 MPa, cf. Fig. 3) across a temperature range from approximately 300 K to 500 K. A fourth stress level of 300 MPa is used for validation. Since the full characterization of the elastic response was not addressed in the source work, standard values for NiTiHf alloys are applied. An average of the transformation strain magnitudes generated over full cycles at each stress level is used to define the average experimental value shown in Fig. 4 and Fig. 5.
Figure 4: Dependence of maximum transformation strain magnitude on applied stress for the considered NiTiHf alloy [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF]. Data are fitted using the functional form H cur (26) based on the average recovered transformation strain over all cycles at each stress level considered.
The parameters that define the evolution equation for the damage internal variable have been identified based on the number of cycles to failure of the SMA actuators thermally cycled at different stress levels and are displayed in Table 2. The comparison between the fatigue database for the various stress levels and the model simulation is presented in Figure 5, where the actuation energy density in this one-dimensional (uniaxial) setting is equivalent to Φ = ΦF = ΦR = σH cur (σ). Note that the stress levels of 200, 400 and 600 MPa have been used for the calibration of the damage model, while data for the stress level of 300 MPa (2 tests) are used to validate predictions. The parameters related to the evolution of TRIP strains have been identified based on the evolution of the residual strains as measured at high temperature (i.e., in the austenitic condition). The parameter identification algorithm used is a hybrid genetic/gradient-based method developed by [START_REF] Chemisky | Analysis of the deformation paths and thermomechanical parameter identification of a shape memory alloy using digital image correlation over heterogeneous tests[END_REF] and applied here to the least-squares difference between the experimental and numerical irrecoverable strains for the three stress levels tested (i.e., 200, 400 and 600 MPa). It is noted, according to the comparison presented in Figure 7, that both functional fatigue (i.e., TRIP) and structural fatigue (i.e., total life) are accurately captured by the model, since both the number of cycles to failure and the level of irrecoverable strain are correctly described for the three stress levels tested.
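As an illustration of how the damage parameters C d , γ d and N 0 f of Eq. (37) can be recovered from a set of (Φ, N f ) pairs such as those of Figure 5, the following sketch performs a simple bounded least-squares fit. The data points are invented placeholders (roughly consistent in shape with a power law) and scipy's curve_fit merely stands in for the hybrid genetic/gradient-based scheme actually used; the fitted numbers are not the identified values of Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def cycles_to_failure(phi, C_d, gamma_d, N_f0):
    """Power-law life model of Eq. (37): N_f = (phi / C_d)^(-gamma_d) - N_f0."""
    return (phi / C_d) ** (-gamma_d) - N_f0

# Hypothetical calibration data: actuation energy density (MPa) vs. observed life.
phi_data = np.array([10.0, 12.0, 14.0, 16.0])
N_f_data = np.array([5300.0, 3200.0, 1670.0, 550.0])

popt, _ = curve_fit(cycles_to_failure, phi_data, N_f_data,
                    p0=(8.0e4, 1.0, 5.0e3),
                    bounds=([1.0e3, 0.1, 0.0], [1.0e7, 5.0, 1.0e5]))
C_d, gamma_d, N_f0 = popt
print(f"C_d = {C_d:.1f} MPa, gamma_d = {gamma_d:.3f}, N_f0 = {N_f0:.1f} cycles")
```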
All three stages of plastic strain evolution with cycles are represented, and the rapid accumulation of TRIP strain magnitude towards the end of the lifetime of the actuator is clearly visible in the simulated results. The typical behavior of an actuator is represented here. 3 Note in particular that the upward shift in transformation temperatures with increasing cycle count 3 The transformation temperatures, not provided in [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF], are calibrated here using common values. is captured. The second experiment (Fig. 8) used to validate the model focuses only on functional fatigue, whereby (i.e., TRIP) an SMA actuator is subjected to 80 thermal transformation cycles under constant load corresponding to a uniaxial stress level of 200 MPa [START_REF] Lagoudas | Shape memory alloys: modeling and engineering applications[END_REF], bluewhere the derived properties are given in Table 3. The actuator is at an early stage of its expected lifetime, so only the first and second stage of the evolution of plastic strain are captured, see Fig. 8a). Note that the evolution of the transformation strain with temperature is well captured, and that the shift of the transformation temperatures between the 1 st and the 80 th cycle is again accurately described by the proposed model (see Fig. 8b). The evolution of the non-zero total strain components with respect to temperature is shown in Fig. 9. While the evolution equation for irrecoverable strain is based on the Mises equivalent stress, it is seen that the irrecoverable strain as well as transformation strain follows the direction of the imposed stress. Also, the importance of shear stress component versus uniaxial stress component is highlighted here, regarding the amount of irrecoverable strain.. Figure 1 :Figure 2 : 12 Figure1: Examples of damage (micro-crack) in nickel-rich NiTi material after thermally-induced actuation cycling (2840 cycles at 200MPa). Note the micro-cracks initiating within precipitates, resulting in relative small observable strains but eventually leading to specimen failure[START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF] f tp-F and f td-F relate the magnitude of the hardening energy with the rate of change of the martensite during forward transformation, and with the amount of damage with the rate of change of the martensite during forward transformation, respectively. During reverse transformation, those quantities are denoted as f tp-R and f td-R . the energetic conjugates to the internal variables denoted A = -∂G ∂ζ (cf. (A.15)), it is deducted that the generalized thermodynamic forces related to transformation and the associated evolutions of transformation strain, transformation hardening energy, accumulated transformation-induced plastic strain, and damage are given as: Figure 3 : 3 Figure 3: Transformation and plastic strain evolution in NiTiHf actuators under various isobaric loads (Wheeler et al., 2015) (a-c) and the comparison of these three (d). The strains are measured at high and low temperature, with a zero reference based on the beginning of the first cooling cycle. No necking is observed in any sample. 2. blueDetermination of the other phase transformation parameters with reconstruction of the phase diagram from isobaric thermal cycles.3. 
Determination of the fatigue damage parameters used the optimization algorithm to predict the number of cycles to failure according to the actuation energy. Using uniaxial isobaric loading, this last quantity is obtained experimentally for various stress levels.The parameters C d , γ d , N 0 f can be evaluated with this procedure, ensuring that these parameters are positive values. The parameter D crit has been estimated at 0.1 from the experimental observation of crack density in the observed fatigue samples just prior to failure. First and second stage Third stage -coupling with damage Figure 5 : 5 Figure 5: Number of cycles to failure as a function of the actuation energy Φ for the considered NiTiHf alloy. Results from the isobaric tests performed at 200, 400, and 600 MPa used for calibration; data from 300 MPa tests used for validation. Figure 6 : 6 Figure 6: blueSpectrum of evolution of damage with respect to the number of cycles. Number of failure from the isobaric tests performed at 200, 400, and 600 MPa utilized for calibration in blue dot, 300 MPa validation tests in orange. Figure 7 : 7 Figure7: Comparison between the evolution of irrecoverable strains in NiTiHf actuators under various isobaric loads[START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF] with the model simulations: a), b), and c) show comparisons of the evolution of TRIP strains for the calibration stress levels of 200, 400, and 600 MPa, respectively; d) shows an example of a simulation of the evolution of the response of an actuator for the first, 100 th , 200 th , and the last (309 th ) cycle prior to failure. The blue and red dots correspond to the experimentally measured strains at high and low temperature for the considered cycles, respectively. Figure 8 : 8 Figure 8: Comparison between the evolution of irrecoverable strains of a NiTi actuator under an isobaric load (uniaxial stress level of 200 MPa; Lagoudas (2008)) with the model simulations: a) comparisons of the evolution of TRIP strains; b) comparison of the full strain-temperature response of an actuator for the first and 80 th cycles. Figure 9 : 9 Figure 9: Comparison between the evolution of irrecoverable strains of a NiTi actuator of an actuator for the first and 80 th cycles, considering two multiaxial isobaric loads : 1) continuous grey line : σ 11 = 50 MPa, σ 12 = σ 21 = 100 MPa, all other components of the stress tensor being 0) and 2) dashed grey line (σ 11 = 100 MPa, σ 12 = σ 21 = 50 MPa, all other components of the stress tensor being 0). The evolution of a) comparison of the total uniaxial strain (ε 11 )-temperature response , b) comparison of the total shear strain (ε 12 )-temperature response. and structural fatigue in shape memory alloy actuators, a new phenomenological model has been proposed that considers the coupled accumulation of damage and transformation-induced plasticity and is inspired by recent three-dimensional models for phase transformation based on thermodynamics of irreversible processes. Structural fatigue is described using an evolution equation for damage based on the rate of transformation energy relative to the martensite volume fraction Φ. Such a description succeeds in capturing the number of cycles to failure of the SMA actuators thermally cycled at different stress levels. 
The evolution of irrecoverable strains (i.e., functional fatigue) is described based on the same rate of transformation energy, especially to describe the first (transient) and second (steady-state) stages of transformation-induced plastic strain evolution. To represent the third stage (accelerated accumulation), a power law that depends on the level of accumulated damage is applied to represent the effect of structural fatigue on the development of irrecoverable strains. It is demonstrated that this formulation can accurately describe the accumulation of TRIP strains for the three considered actuators loaded at different stress levels. It is finally shown that the expression of the transformation limits represent the shift in transformation temperatures observed during cycling loading of actuators. These various aspects combine to make this model the most complete description of shape memory alloy fatigue to date. Table 1 : 1 Required material parameters and associated material propertiess , M f , A s , A f , C A , C M Parameter Type Set of Constants Specific Response Thermoelastic properties Young's moduli, Poisson's ratios, Coefficients of therm. expan., etc. Phase diagram Phase transformation H min , H sat , k, σ crit Transformation strain n 1 , n 2 , n 3 , n 4 Smoothness of transformation Damage D crit , C d , γ d , N 0 f Evolution law for damage TRIP w tp , C tp 0 , C tp 1 , C tp , γ tp , C tp 2 σ Y tp , α M tp , p 0 , D coa Table 2 : 2 Identified model parameters for the NiTiHf alloy blueMs , M f , A s , A f 293 K, 273 K, 313 K, 333 K blueC A = C M 7 MPa.K -1 bluen 1 = n 2 = n 3 = n 4 0.2 Model Parameters Identified value blueE A = E M 70000 MPa blueν A = ν M 0.3 blueα A = α M 0 K -1 H min 0.005 H sat 0.0277 k 0.0172 MPa -1 σ crit 120 MPa b 0 n 2 D crit 0.14 D coa 0.07 C d 85689.2 MPa γ d 1.040 N 0 f 7000 cycles w tp 0.6 C tp 0 0.000245 C tp 1 0.000667 C tp 6.144682 MPa γ tp 4.132985 C tp 2 0.006239 σ Y tp , 300 MPa α tp 3.720168 p 0 1.861436 Table 3 : 3 Identified model parameters for the NiTi alloy blueM s , M f , A s , A f 277.15 K, 260.15 K, 275.15 K, 291.15 K blueC A , C M 8.3, 6.7 MPa.K -1 bluen 1 = n 2 = n 3 = n 4 0.1 Model Parameters Identified value blueE A , E M 47000 MPa, 24000 MPa blueν A = ν M 0.3 blueα A = α M 0 K -1 H min 0.05 If all four exponents equal 1, the original model of[START_REF] Boyd | A thermodynamical constitutive model for shape memory materials. Part I. The monolithic shape memory alloy[END_REF] is recovered, see Appendix A of[START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]. It is important to note that all tests considered were not associated with any observed localized necking behavior and that nearly constant applied stress can be assumed. The same result can be obtained by utilizing the methodology of[START_REF] Coleman | Thermodynamics with Internal State Variables[END_REF] for thermodynamics with internal state variables; however, the issues raised by[START_REF] Lubliner | On the thermodynamic foundations of non-linear solid mechanics[END_REF] that limit the case to the elastic response should also be considered. Appendix A. Appendix A: Fundamentals of Thermodynamics of irreversible processes blueConsidering a small strain ε at a considered material point, the strong form of the first law of thermodynamics can be expressed as Ė = σ : ε -divq + ρR, (A.1) where q is the heat flux, R denotes the heat sources per unit mass, and σ is the Cauchy stress. 
Similarly, the second law of thermodynamics is written in the strong form as [START_REF] Chatzigeorgiou | Periodic homogenization for fully coupled thermomechanical modeling of dissipative generalized standard materials[END_REF]: where η = ρς is the entropy per unit volume. Combining equations (A.1) and (A.2) to eliminate extra heat sources yields where γ is the internal entropy production per unit volume. We can also define r as the difference between the rates of the mechanical work and the internal energy of Q as the thermal energy per unit volume provided by external sources, which gives Further, the internal entropy production can be split into two contributions, where γ loc is the local entropy production (or intrinsic dissipation) and γ con is the entropy production due to heat conduction, giving The two laws of thermodynamics can then be simply expressed as Combining equations (A.3), (A.5), and (A.6), one can re-express the first principle of thermodynamics as: When designing a constitutive law, especially with the aim of tracking fatigue damage and permanent deformation in a material, it is very useful to separate the various mechanisms into categories (e.g., elastic or inelastic, reversible or irreversible, dissipative or non-dissipative) following the methodology proposed by [START_REF] Chatzigeorgiou | Thermomechanical behavior of dissipative composite materials[END_REF]. Some of these mechanisms are responsible for permanent changes in material microstructure. To describe all observable phenomena it is required to express E in terms of the proper variables capable of expressing the material state under every possible thermomechanical loading path. Following the approach of [START_REF] Germain | Continuum thermodynamics[END_REF], the internal energy E is taken to be a convex function with regards to its arguments: the strain tensor ε, the entropy η and a set of internal state variables ζ such that The following definitions for the derivatives of the internal energy are postulated: For the purposes of further development, it is convenient to introduce the Gibbs free energy potential G by employing the following partial Legendre transformation [START_REF] Maugin | The thermomechanics of plasticity and fracture[END_REF]): Expressing Ġ in terms of its arguments and using (A.13), the last expression reduces to .15), in conjunction with (A.6), is used to identify proper evolution equations for the internal state variables. Usually, the mechanical and thermal dissipations are assumed to be decoupled and non-negative, i.e. γ loc ≥ 0 and γ con ≥ 0. blue Such a constitutive model is intended to be applied to the scope of Finite Element Analyses (FEA). In most FEA softwares, the variables are updated following a procedure that include three loops. A loading step is typically partitioned in time increments and is denoted by ∆x. The increment during the global FEA solver is denoted ðx. The increment during the Newton-Raphson scheme in the material constitutive law, which is described below, is denoted by the symbol δx. Such steps consist in finding the updated value of the stress tensor and of the internal variables of the model. 
In a backward Euler fully implicit numerical scheme, the value of a given quantity x is updated from the previous time step n to the current n + 1 per Such an implicit relation is usually solved iteratively during the FEA calculations, and the current value is updated from iteration m to iteration m + 1 per blueThe return mapping algorithm is used in the constitutive law algorithm and consists of two parts: i) Initially, it is assumed that no evolution of the internal variables occurs, thus the material behaves linearly. This allows it to consider a thermoelastic prediction of all the fields. In such a prediction, the stress tensor and are estimated, while the internal variables are set to their initial value at the beginning of the time increment; (ii) The stress tensor and the internal variables are corrected such that the solution meets the requirements of the specified constitutive law (the forward transformation, reverse transformation, or both). During the return mapping algorithm, the total current strain and temperature are held constant such that: where k denotes the increment number during the correction loop. The system of Kuhn-Tucker set of inequalities can be summarized as: Note that the size of the system to solve might therefore depends on the activated mechanism(s). We utilized the Fischer-Burmeister Fischer (1992) complementary function to replace the Kuhn-Tucker set of inequalities that typically results from dissipative mechanisms into a set of equations. Such formulation results in a smooth complementary problem [START_REF] Kiefer | Implementation of numerical integration schemes for the simulation of magnetic SMA constitutive response[END_REF], which does not require the information of the number of active sets. This methodology has already been utilized by [START_REF] Schmidt-Baldassari | Numerical concepts for rate-independent single crystal plasticity[END_REF] in the context of rate-independent multi-surface plasticity, by [START_REF] Bartel | A micromechanical model for martensitic phase-transformations in shape-memory alloys based on energy-relaxation[END_REF]; [START_REF] Bartel | Thermodynamic and relaxation-based modeling of the interaction between martensitic phase transformations and plasticity[END_REF] for martensitic phase transformation modeling, and by [START_REF] Kiefer | Implementation of numerical integration schemes for the simulation of magnetic SMA constitutive response[END_REF] for the simulation of the constitutive response of magnetic SMAs. blueAt this point the methodology presented in [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF] is briefly summarized here. The Fischer-Burmeister technique transforms a set of Kuhn-Tucker inequality into an equivalent equation : This equation has two sets of roots: either Φ m ≤ 0; ṡm = 0, which means that the mechanism m is not activated, or Φ m = 0; ṡm ≥ 0 indicates that the mechanism m is activated and a solution for ṡm (and consequently for all internal variables V m ) is searched. Next, the elastic prediction -inelastic correction method is utilized to solve the unconstrained system of equations using a Newton-Raphson scheme. 
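A minimal scalar illustration of the Fischer-Burmeister reformulation and of the elastic-prediction/inelastic-correction idea is given below. The linear dependence of the transformation function on the increment is a toy assumption made only to keep the sketch self-contained; it does not reproduce the actual hardening of the model.

```python
import numpy as np

def fischer_burmeister(a: float, b: float) -> float:
    """FB complementarity function: FB(a, b) = 0  iff  a >= 0, b >= 0 and a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def solve_increment(Phi_trial: float, K: float, tol: float = 1e-10) -> float:
    """Toy return mapping: Phi(d_xi) = Phi_trial - K*d_xi, find d_xi such that
    FB(-Phi(d_xi), d_xi) = 0 via Newton iterations with a finite-difference slope."""
    d_xi = 0.0
    for _ in range(50):
        r = fischer_burmeister(-(Phi_trial - K * d_xi), d_xi)
        if abs(r) < tol:
            break
        h = 1e-8
        drd = (fischer_burmeister(-(Phi_trial - K * (d_xi + h)), d_xi + h) - r) / h
        d_xi -= r / drd
    return d_xi

print(solve_increment(Phi_trial=-5.0, K=100.0))  # criterion not met: increment stays 0
print(solve_increment(Phi_trial=20.0, K=100.0))  # active mechanism: d_xi close to 0.2
```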
The inelastic transformation strain is recalled here: During a time increment n, at the m-th iteration of the solver and the k-th iteration of the constitutive law algorithm, the transformation strain thus writes: To avoid lengthy expressions in the sequel, the iteration numbers will be omitted. Any quantity x denotes the x (n+1)(m+1)(k) , the increment ∆x denotes the ∆x (n+1)(m+1)(k) , the increment δx denotes the δx (n+1)(m+1)(k) and the increment ðx denotes the ðx (n+1)(m+1) . The convex cutting plane (CCP) [START_REF] Simo | Computational Inelasticity[END_REF]; [START_REF] Qidwai | On the Thermodynamics and Transformation Surfaces of Polycrystalline {N}i{T}i Shape Memory Alloy Material[END_REF] is utilized to approximate the evolution of the inelastic strain as: blueThe comparison and the efficiency evaluation of the convex cutting plane and the closest point projection has been discussed in detail in [START_REF] Qidwai | On the Thermodynamics and Transformation Surfaces of Polycrystalline {N}i{T}i Shape Memory Alloy Material[END_REF], and it has been shown that the convex cutting plane algorithm is more efficient in most cases, even if it may require more steps to converge when strong non-proportional loadings are considered. The total current strain and temperature are held constant in displacement driven FEA. Assuming an additive decomposition of strains and the previously induced constitutive relations between elastic strain and stress, and thermal strain and temperature provides: Since the variations of elastic compliance tensor and the thermal expansion tensor are dependent on the volume fraction of forward or reverse martensitic transformation: .11) it is therefore possible to define a total stress-influence evolution tensor, such as: blueThus, with the help of (A.8), (A.10) and (A.11), (A.9) is now written as: Recall that the transformations (forward and reverse) depend on the stress and the internal variables through the definition of thermodynamical forces. Applying the chain rule to these criterion yields: .14) with: blue The numerical resolution consists of solving the following system of complementary Fischer-Burmeister functions using a Newton-Raphson scheme: .16) with: (A.17) In the above equations, B is a matrix containing the partial derivatives of Φ, such that B lj = j -∂Φ l ∂σ κ j + K lj . The iterative loops stop when the convergence criteria on all the complementary functions has been fulfilled. Appendix A.1. Determination of the thermomechanical quantities and tangent moduli blueSince we require the computation of the solver iteration increments ∆ε (n+1)(m+1) = ∆ε (n+1)(m) + ðε (n+1)(m) the mechanical tangent modulus D ε is required: To compute such quantities, the criterion is utilized and uses ( A.14): .20) Considering now only the subset of activated mechanisms, i.e. the ones that satisfy ðΦ l = 0, (A.21) the following holds true (the superscript l shall now refer to any activated mechanism, and only those): The set of non-linear equations that can be rearranged in a matrix-like format: A.23) where ξ = ξ F , ξ R . The components of the reduced sensitivity tensor B that correspond to the active load mechanism variables, with respect to the strain and temperature, are: Note that second-order tensors and scalar quantities can be defined for the influence of strain and temperature, respectively, on a unique lead mechanism s j : ðξ j = P j ε ðε + P j θ ðθ. 
Highlights This work presents new developments in the thermomechanical constitutive modeling of structural and functional fatigue of shape memory alloys (SMAs). It captures the evolution of irrecoverable strain that develops during cyclic actuation of SMAs. It describes the evolution of the structural fatigue through the evolution of an internal variable representative of damage. Final failure is predicted when such variables reaches a critical value. The full numerical implementation of the model in an efficient scheme is described. Experimental results associated with various thermomechanical paths are compared to the analysis predictions, including fatigue structural lifetime prediction and evolution of the response during cyclic actuation. The analysis of three-dimensional loadings paths are considered
82,357
[ "863727" ]
[ "178323", "301080", "178323" ]
01762254
en
[ "info" ]
2024/03/05 22:32:13
2012
https://hal.science/hal-01762254/file/PEROTTO%20-%20EUMAS%202011%20%28pre-print%20version%29.pdf
Studzinski Filipo Perotto Recognizing internal states of other agents to anticipate and coordinate interactions Keywords: Factored Partially Observable Markov Decision Process (FPOMDP), Constructivist Learning Mechanisms, Anticipatory Learning, Model-Based RL In multi-agent systems, anticipating the behavior of other agents constitutes a difficult problem. In this paper we present the case where a cognitive agent is inserted into an unknown environment composed of different kinds of other objects and agents; our cognitive agent needs to incrementally learn a model of the environment dynamics, doing it only from its interaction experience; the learned model can be then used to define a policy of actions. It is relatively easy to do so when the agent interacts with static objects, with simple mobile objects, or with trivial reactive agents; however, when the agent deals with other complex agents that may change its behaviors according to some non directly observable internal states (like emotional or intentional states), the construction of a model becomes significantly harder. The complete system can be described as a Factored and Partially Observable Markov Decision Process (FPOMDP); our agent implements the Constructivist Anticipatory Learning Mechanism (CALM) algorithm, and the experiment (called meph) shows that the induction of non-observable variables enable the agent to learn a deterministic model of most of the universe events, allowing it to anticipate other agents actions and to adapt to them, even if some interactions appear as non-deterministic in a first sight. Introduction Trying to escape from AI classic (and simple) maze problems toward more sophisticated (and therefore more complex and realistic) agent-based universes, we are led to consider some complicating conditions: (a) the situatedness of the agent, which is immersed into an unknown universe, interacting with it through limited sensors and effectors, without any holistic perspective of the complete environment state, and (b) without any a priori model of the world dynamics, which forces it to incrementally discover the effect of its actions on the system in an on-line experimental way; to make matters worse, the universe where the agent is immersed can be populated by different kinds of objects and entities, including (c) other complex agents, and in this case, the task of learning a predictive model becomes considerably harder. We are especially concerned with the problem of discovering the existence of other agents' internal variables, which can be very useful to understand their behavior. Our cognitive agent needs to incrementally learn a model of its environment dynamics, and the interaction with other agents represents an important part of it. It is relatively easy to construct a model when the agent interacts with static objects, with simple mobile objects, or with trivial reactive agents; however, when dealing with other complex agents which may change its behaviors according to some non directly observable internal properties (like emotional or intentional states), the construction of a model becomes significantly harder. The difficulty increases because the reaction of each agent can appears to our agent as a non deterministic behavior, regarding the information provided by the perceptive elements of the situation. We can anticipate at least two points of interest addressed by this paper: the first one is about concept creation, the second one is about agent inter-subjectivity. 
In the philosophical and psychological research community concerned with cognitive issues, the challenge of understanding the capability to develop new abstract concepts have always been a central point in most theories about how the human being can deal and adapt itself to a so complex and dynamical environment as the real world [START_REF] Murphy | The big book of concepts[END_REF], [START_REF] Piaget | [END_REF]. In contrast to the kind of approach usually adopted in AI, which easily slip into the strategy of treating exceptions and lack of information by using probabilistic methods, many cognitive scientists insist that the human mind strategy looks more like accommodating the disturbing events observed in the reality by improving his/her model with new levels of abstraction, new representation elements, and new concepts. Moreover, the intricate problem of dealing with other complex agents has also been studied by cognitive science for some time, from psychology to neuroscience. A classical approach to it is the famous "ToM" assumption (Astington, et al. 1988), [START_REF] Bateson | Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology[END_REF], [START_REF] Dennett | The Intentional Stance[END_REF] which claims that the human being have developed the capability to attribute mental states to the others, in order to represent their beliefs, desires and intentions, and so being able to understand their behavior. In this paper, we use the Constructivist Anticipatory Learning Mechanism (CALM), defined in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], to solve the "meph problem", where a cognitive agent is inserted into an environment constituted of other objects and also of some other agents, that are non-cognitive in the sense that they do not learn anything, but that are similar to our agent in terms of structure and possible behaviors. CALM is able to build a descriptive model of the system where the agent is immersed, inducting, from the experience, the structure of a factored and partially observable Markov decision process (FPOMDP). Some positive results have been achieved due to the use of 4 integrated strategies [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], (Perotto et al. 
2007), [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF]Alvares, 2007): (a) the mechanism takes advantage of the situated condition presented by the agent, constructing a description of the system regularities relatively to its own point of view, which allows to set a good behavior policy without the necessity of "mapping" the entire environment; (b) the learning process is anchored on the construction of an anticipatory model of the world, what could be more efficient and more powerful than traditional "model free" reinforcement learning methods, that directly learn a policy; (c) the mechanism uses some heuristics designed to well structured universes, where conditional dependencies between variables exist in a limited scale, and where most of the phenomena can be described in a deterministic way, even if the system as a whole is not, representing what we call a partially deterministic environment; this characteristic seems to be widely common in real world problems; (d) the mechanism is prepared to discover the existence of hidden or non-observable properties of the universe, which cannot be directly perceived by the agent sensors, but that can explain some observed phenomena. This last characteristic is fundamental to solve the problem presented in this article because it enables our agent to discover the existence of internal states in other agents, which is necessary to understand their behavior and then to anticipate it. Further discussion about situatedness can be found in [START_REF] Wilson | How to Situate Cognition: Letting Nature Take its Course[END_REF][START_REF] Wilson | How to Situate Cognition: Letting Nature Take its Course[END_REF], [START_REF] Beer | A dynamical systems perspective on agent-environment interactions[END_REF], and [START_REF] Suchman | Plans and Situated Actions[END_REF]. Thus, the basic idea concerning this paper is to describe the algorithm CALM, proposed in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], presenting its features, and placing it into the Markov Decision Process (MDP) framework panorama. The discussion is supported, on one side, by these introductory philosophical conjectures, and on the other side, by the meph experiment, which creates a multi-agent scenario, where our agent needs to induce the existence of internal variables to the other agents. In this way, the paper presents some positive results in both theoretical and practical aspects. Following the paper, section 2 overviews the MDP framework, section 3 describes the CALM learning mechanism, section 4 introduces the experiment and shows the acquired results, and section 5 concludes the paper, arguing that the discover and induction of hidden properties of the system can be a promising strategy to model other agents internal states. Markov Decision Process Framework Markov Decision Process (MDP) and its extensions constitute a quite popular framework, largely used for modeling decision-making and planning problems. An MDP is typically represented as a discrete stochastic state machine; at each time cycle the machine is in some state s; the agent interacts with the process by choosing some action a to carry out; then, the machine changes into a new state s', and gives the agent a corresponding reward r; a given transition function δ defines the way the machine changes according to s and a. 
Solving an MDP is finding the optimal (or near-optimal) policy of actions in order to maximize the rewards received by the agent over time. When the MDP parameters are completely known, including the reward and the transition functions, it can be mathematically solved by dynamic programming (DP) methods. When these functions are unknown, the MDP can be solved by reinforcement learning (RL) methods, designed to learn a policy of actions on-line, i.e. at the same time the agent interacts with the system, by incrementally estimating the utility of state-actions pairs and then by mapping situations to actions [START_REF] Sutton | Reinforcement Learning: an introduction[END_REF]. The Classic MDP Markov Decision Process first appeared (in the form we know) in the late 1950s [START_REF] Bellman | A Markovian Decision Process[END_REF], [START_REF] Howard | Dynamic Programming and Markov Processes[END_REF], reaching a concrete popularity in the Artificial Intelligence (AI) research community from the 1990s [START_REF] Puterman | Markov Decision Processes: discrete stochastic dynamic programming[END_REF]. Currently the MDP framework is widely used in the domains of Automated Control, Decision-Theoretic Planning [START_REF] Blythe | Decision-Theoretic Planning[END_REF], and Reinforcement Learning [START_REF] Feinberg | Handbook of Markov Decision Processes: methods and applications[END_REF]. A "standard MDP" represents a system through the discretization and enumeration of its state space, similar to a state machine in which the transition function can be non-deterministic. The flow of an MDP (the transition between states) depends only on the system current state and on the action taken by the agent at the time. After acting, the agent receives a reward signal, which can be positive or negative if certain particular transitions occur. However, for a wide range of complex (including real world) problems, the complete information about the exact state of the environment is not available. This kind of problem is often represented as a Partially Observable Markov Decision Process (POMDP) [START_REF] Kaelbling | Planning and acting in partially observable stochastic domains[END_REF]. The idea of representing non-observable elements in a MDP is not new [START_REF] Astrom | Optimal Control of Markov Decision Processes with Incomplete State Estimation[END_REF], [START_REF] Smallwood | The optimal control of partially observable Markov decision processes over a finite horizon[END_REF][START_REF] Smallwood | The optimal control of partially observable Markov decision processes over a finite horizon[END_REF], but became popular with the revived interest on the framework, occurred in the 1990s (Christman, 1992), [START_REF] Kaelbling | Acting optimally in partially observable stochastic domains[END_REF][START_REF] Kaelbling | Planning and acting in partially observable stochastic domains[END_REF]. The POMDP provides an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which the system states are observable only indirectly, via a set of imperfect, incomplete or noisy perceptions. In a POMDP, the set of observations is different from the set of states, but related to them by an observation function, i.e. the underlying system state s cannot be directly perceived by the agent, which has access only to an observation o. The POMDP is more powerful than the MDP in terms of modeling (i.e. 
a larger set of problems can be described by a POMDP than by an MDP), but the methods for solving them are computationally even more expensive, and thus applicable in practice only to very simple problems [START_REF] Hauskrecht | Value-function approximations for partially observable Markov decision processes[END_REF], [START_REF] Meuleau | Solving POMDPs by Searching the Space of Finite Policies[END_REF], [START_REF] Shani | Model-Based Online Learning of POMDPs[END_REF]. The main bottleneck about the use of MDPs or POMDPs is that representing complex problems implies that the state space grows-up and quickly becomes intractable. Real-world problems are generally complex, but fortunately, most of them are quite well-structured. Many large MDPs have significant internal structure, and can be modeled compactly if the structure is exploited in the representation. The factorization of states is an approach to exploit this characteristic. In the factored representation, a state is implicitly described by an assignment to some set of state variables. Thus, the complete state space enumeration is avoided, and the system can be described referring directly to its properties. The factorization of states enable to represent the system in a very compact way, even if the corresponding MDP is exponentially large [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF], [START_REF] Shani | Efficient ADD Operations for Point-Based Algorithms[END_REF]. When the structure of the Factored Markov Decision Process (FMDP) [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF] is completely described, some known algorithms can be applied to find good policies in a quite efficient way [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF]. However, the research concerning the discover of the structure of an underlying system from incomplete observation is still incipient [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF], [START_REF] Degris | Factored Markov Decision Processes[END_REF]. Factored and Partially Observable MDP In order to increase the range of representable problems, the classic MDP model can be extended to include factorization of states and partial observation, and it can be so called a Factored Partially Observable Markov Decision Process (FPOMDP). In order to be factored, the description of a given state s in the original model will be decomposed and replaced by a set {x1, x2, ... xn} in the extended model; the action a becomes a set {c1, c2, ... cm}; the reward signal r becomes {r1, r2, ... rk}; and the transition function δ is replaced by a set of transformation functions {T1, T2, ... Tn}. A FPOMDP (Degris;[START_REF] Degris | Factored Markov Decision Processes[END_REF] can be formally defined as a 4-tuple {X, C, R, T}. The finite non-empty set of system properties or variables X = {X1, X2, ... Xn} is divided into two subsets, X = P È H, where the subset P represents the observable properties (those that can be accessed through the agent sensory perception), and the subset H represents the hidden or non-observable properties; each property Xi is associated to a specified domain, which defines the values the property can assume. C = {C1, C2, ... Cm} represents the controllable variables, composing the agent actions, R = {R1, R2, ... Rk} is the set of (factored) reward functions, in the form Ri : Pi  IR, and T = {T1, T2, ... 
Tn} is the set of transformation functions, with Ti : X × C → Xi, defining the system dynamics. Each transformation function can be represented as a Dynamic Bayesian Network (DBN) [START_REF] Dean | A model for reasoning about persistence and causation[END_REF][START_REF] Dean | A model for reasoning about persistence and causation[END_REF], which is an acyclic, oriented, two-layer graph. The first layer nodes represent the environment state at time t, and the second layer nodes represent the next state, at t+1 [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF]. A stationary policy π is a mapping X → C where π(x) defines the action to be taken in a given situation. The agent must learn a policy that optimizes the cumulative rewards received over a potentially infinite time horizon. Typically, the solution π* is the policy that maximizes the expected discounted reward sum, as indicated in the classical Bellman optimality equation (Bellman, 1957), here adapted to our FPOMDP notation: V^π*(x) = R(x) + max_c [ γ Σ_x' P(x' | x, c) V^π*(x') ]. In this paper, we consider the case where the agent does not have an a priori model of the universe where it is situated (i.e. it does not have any idea about the transformation function), and this condition forces it to be endowed with some capacity for learning, in order to be able to adapt itself to the system. Even if there is a large research community studying model-free methods (which directly learn a policy of actions), in this work we adopt a model-based method, through which the agent must learn a descriptive and predictive model of the world, and then define a behavior strategy based on it. Learning a predictive model is often referred to as learning the structure of the problem, which is an important research objective in the MDP framework community [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF], as well as in related approaches like Induction of Decision Trees or Decision Graphs [START_REF] Jensen | Bayesian Networks and Decision Graphs[END_REF][START_REF] Jensen | Bayesian Networks and Decision Graphs[END_REF], Bayesian Networks (BN) [START_REF] Pearl | Causality: models of reasoning and inference[END_REF], [START_REF] Friedman | Being Bayesian about Network Structure: a bayesian approach to structure discovery in bayesian networks[END_REF](Koller, 2003) and Influence Diagrams [START_REF] Howard | Dynamic Programming and Markov Processes[END_REF][START_REF] Howard | Influence Diagrams[END_REF]. In this way, when the agent is immersed in a system represented as an FPOMDP, the complete task for its anticipatory learning mechanism is both to create a model of the transformation function, and to define an optimal (or sufficiently good) policy of actions. The transformation function can be described by a dynamic Bayesian network, i.e. an acyclic, oriented, two-layer graph, where the first layer nodes represent the environment situation at time t, and the second layer nodes represent the next situation, at time t+1. A policy π : X → C defines the behavior to be adopted in each given situation (the policy of actions). Several algorithms create stochastic policies, and in this case the action to take is defined by a probability.
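To make the optimality criterion above concrete, the sketch below performs the corresponding Bellman backup by plain value iteration on a small, flat MDP whose transition and reward functions are assumed to be known; it only illustrates the equation, not the CALM mechanism itself, and the array layout and function name are our own choices.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        """Compute V* and a greedy policy for a flat MDP.
        P: array of shape (A, S, S), with P[a, s, s'] = transition probability.
        R: array of shape (S,), the reward attached to each state."""
        V = np.zeros(P.shape[1])
        while True:
            # Bellman optimality backup: V(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) V(s')
            Q = R[None, :] + gamma * (P @ V)       # shape (A, S)
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)     # value function and greedy policy
            V = V_new

In the factored and partially observable setting discussed here, such a backup can only be applied once the structure of the process, including its hidden variables, has been learned, which is precisely the role of the mechanism described in the next section.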
[START_REF] Degris | Factored Markov Decision Processes[END_REF] present a good overview of the use of this representation in artificial intelligence, referring several related algorithms designed to learn and solve factored FMDPs and FPOMDPs, including both the algorithms designed to calculate the policy given the model [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF], (Boutilier;[START_REF] Boutilier | Computing optimal policies for partially observable decision processes using compact representations[END_REF], [START_REF] Hansen | Dynamic programming for POMDPs using a factored state representation[END_REF][START_REF] Hansen | Dynamic programming for POMDPs using a factored state representation[END_REF], (Poupart;[START_REF] Poupart | VDCBPI: an approximate scalable algorithm for large scale POMDPs[END_REF], [START_REF] Hoey | SPUDD: Stochastic Planning Using Decision Diagrams[END_REF], [START_REF] St-Aubin | APRICODD: Approximate policy construction using decision diagrams[END_REF], [START_REF] Guestrin | Solving Factored POMDPs with Linear Value Functions[END_REF], [START_REF] Sim | Symbolic Heuristic Search Value Iteration for Factored POMDPs[END_REF], and [START_REF] Shani | Efficient ADD Operations for Point-Based Algorithms[END_REF] and the algorithms designed to discover the structure of the system [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF], (Degris;[START_REF] Degris | Factored Markov Decision Processes[END_REF], [START_REF] Strehl | Efficient Structure Learning in Factored-State MDPs[END_REF], and [START_REF] Jonsson | A Causal Approach to Hierarchical Decomposition of Factored MDPs[END_REF][START_REF] Jonsson | A Causal Approach to Hierarchical Decomposition of Factored MDPs[END_REF]. Constructivist Anticipatory Learning Mechanism The constructivist anticipatory learning mechanism (CALM), detailedly described in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], is a mechanism developed to enable an agent to learn the structure of an unknown environment where it is situated, trough observation and experimentation, creating an anticipatory model of the world, which will be represented as an FPOMDP. CALM operates the learning process in an active and incremental way, where the agent needs to choose between alternative actions, and learn the world model as well as the policy at the same time it actuates. There is no separated previous training time; the agent has an unique uninterrupted interactive experience into the system, quite similarly to real life problems. In other words, it must performing and learning at the same time. The problem can be divided into two tasks: first, building a world model, i.e. to induce a structure which represents the dynamics of the system (composed by agent-environment interactions). Second, to establish a behavioral policy, i.e. to define the actions to do at each possible different state of the system, in order to increase the estimated rewards received over time. The task becomes harder because the environment is only partially observable, from the point of view of the agent, constituting an FPOMDP. In this case, the agent has perceptive information from a subset of sensory variables, but the system dynamics depends also on another subset of hidden variables. 
To be able to create the world model, the agent needs, beyond discover the regularities of the phenomena, also discover the existence of non-observable variables that are important to understand the system evolution. In other words, learning a model of the world is more than describing the environment dynamics (the rules that can explain and anticipate the observed transformations), it is also discovering the existence of hidden properties (once they influence the evolution of the observable ones), and finally find a way to deduce the values of these hidden properties. If the agent can successfully discover and describe the hidden properties of the FPOMDP which it is dealing with, then the world becomes treatable as a FMDP, and there are some known algorithms able to efficiently calculate the optimal (or near-optimal) policy. The algorithm to calculate the policy of actions used by CALM is similar to the one presented by [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF]. On the other hand, the main challenge is to discover the structure of the problem based on the on-line observation, and CALM do it using representations and strategies inspired on (Drescher, 1993). Knowledge Representation CALM tries to reconstruct, by experience, each system transformation function Ti, representing it by an anticipation tree, which in turn is composed by schemas. Each schema represent some perceived regularity occurring in the environment, i.e. some regular event checked by the agent during its interaction with the world. A schema is composed by three vectors:  = (context  action → expectation). The context vector has each of their elements linked with a sensor. The action vector is linked with the effectors. The expectation represents the value expected for some specific sensor in the next time. In a specific schema, the context vector represents the set of equivalent situations where the schema is applicable. The action vector represents a set of similar actions that the agent can carry out in the environment. The expectation vector represents the expected result after executing the given action in the given context. Each element vector can assume any value in a discrete interval defined by the respective sensor or effector. In addition, the context vector can incorporate some "synthetic elements" not linked to any sensor but representing abstract or non-sensory properties which the existence is induced by the mechanism. Some elements in these vectors can undertake an "undefined value". For example, an element linked with a binary sensor must have one of three values: true, false or undefined (represented, respectively, by '1', '0' and '#'). In both the context and action vectors, '#' represents something ignored, not relevant to make the anticipations. There is compatibility between a schema and a certain situation when the schema's context vector has all defined elements equal to those of the agent's perception. In the expectation vector, '#' means that the element is not deterministically predictable. The undefined value generalizes the schema because it allows to ignore some properties to represent a set of situations. Another symbols can be used to represent some special situations, in a way to reduce the number of schemas; it is the case of the symbol '=', used to indicate that the value of the expected element does not change in the specified context. 
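As a rough illustration of this representation, the fragment below encodes a schema as three tuples over the alphabet described above and checks compatibility against the current perception; the class and method names are our own, and this is a simplified reading of the structure, not CALM's actual code.

    from dataclasses import dataclass

    WILDCARD = '#'   # undefined: irrelevant in context/action, unpredictable in expectation
    SAME = '='       # expectation symbol: the element keeps its current value

    @dataclass
    class Schema:
        context: tuple      # one slot per (synthetic or sensory) element, e.g. ('1', '#', '0')
        action: tuple       # one slot per effector, e.g. ('#',) or ('1',)
        expectation: tuple  # one slot per predicted element, e.g. ('1', '=', '#')

        def matches(self, perception):
            """Compatibility: every defined context element equals the perceived value."""
            return all(c == WILDCARD or c == p for c, p in zip(self.context, perception))

        def predict(self, perception):
            """Resolve the expectation against the current perception ('=' keeps the old value)."""
            return tuple(p if e == SAME else e for e, p in zip(self.expectation, perception))

For example, Schema(('1', '#'), ('0',), ('=', '1')) is excited by any situation whose first element is '1', and predicts that, after action '0', the first element will stay unchanged and the second will become '1'.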
The use of undefined values makes possible the construction of an anticipation tree. Each node in that tree is a schema, and relations of generalization and specialization guide its topology (quite similar to decision trees or discrimination trees). The root node represents the most generalized situation, which has the context and action vectors completely undefined. Adding one level in the tree is to specialize one generalized element, creating a branch where the undefined value is replaced by the different possible defined values. This specialization occurs either in the context vector or in the action vector. In this way, CALM divides the state space according to the different expectations of changing, grouping contexts and actions with its respective transformations. The tree evolves during the agent's life, and it is used by the agent, even if until under construction, to take its decisions, and in consequence, to define its behavior. The structure of the schemas and an example of their organization as an anticipation tree are presented in Figure 1. ### # ### ### 0 #0# ### 1 111 0## 0 000 1## 0 10# context action expectation Figure 1: the anticipation tree; each node is a schema composed of three vectors: context, action and expectation; the leaf nodes are decider schemas. The context in which the agent is at a given moment (perceived through its sensors) is applied in the tree, exciting all the schemas that have a compatible context vector. This process defines a set of excited schemas, each one suggesting a different action to do in the given situation. CALM will choose one to activate, performing the defined action through the agent's effectors. The algorithm always chooses the compatible schema that has the most specific context, called decider schema, which is the leaf of a differentiated branch. This decision is taken based on the calculated utility of each possible choice. There are two kinds of utility: the first one estimates the discounted sum of rewards in the future following the policy, the second one measures the exploration benefits. The utility value used to take the decision depends on the circumstantial agent strategy (exploiting or exploring). The mechanism has also a kind of generalized episodic memory, which represents (in a compact form) the specific and real situations experimented in the past, preserving the necessary information to correctly constructs the tree. The implementation of a feasible generalized episodic memory is not evident; it can be very expensive to remember episodes. However, with some strong but well chosen restrictions (like limiting dependency representation) it can be computationally viable. Anticipation Tree Construction Methods The learning process happens through the refinement of the set of schemas. The agent becomes more adapted to its environment as a consequence of that. After each experienced situation, CALM checks if the result (context perceived at the instant following the action) is in conformity to the expectation of the activated schema. If the anticipation fails, the error between the result and the expectation serves as parameter to correct the model. In the schematic tree topology, the context and action vectors are taken together. This concatenated vector identifies the node in the tree, which grows up using a top-down strategy. The context and action vectors are gradually specialized by differentiation, adding, each time, a new relevant feature to identify the category of the situation. 
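The selection step described above (choosing, among the excited decider schemas, the most specific one according to the current exploit/explore utility) can be sketched as follows, reusing the Schema class from the earlier fragment; the tree is flattened to a list of leaves for brevity, and the two utility attributes are assumed to be maintained elsewhere by the mechanism.

    def specificity(schema):
        """Number of defined (non-'#') elements in the context and action vectors."""
        return sum(x != WILDCARD for x in schema.context + schema.action)

    def choose_decider(leaf_schemas, perception, exploring=False):
        """Keep the excited deciders with the most specific context, then pick the one whose
        utility is best for the current strategy (exploration or exploitation)."""
        excited = [s for s in leaf_schemas if s.matches(perception)]
        if not excited:
            return None
        top = max(specificity(s) for s in excited)
        candidates = [s for s in excited if specificity(s) == top]
        utility = 'exploration_utility' if exploring else 'reward_utility'
        return max(candidates, key=lambda s: getattr(s, utility, 0.0))

The action actually performed is then the one defined (or suggested) by the action vector of the chosen decider.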
In general, there is a shorter path in starting with an empty vector and searching for the probably few relevant features than in starting with a full vector and having to waste effort eliminating a lot of useless elements. Selecting a good set of relevant features to represent some given concept is a well-known problem in AI, and the solution is not easy, even with approximate approaches. To do it, CALM adopts a forward greedy selection [START_REF] Blum | Selection of relevant features and examples in machine learning[END_REF], using the data registered in the generalized episodic memory. The expectation vector can be seen as a label on each decider schema, and it represents the predicted anticipation when the decider is activated. The evolution of expectations in the tree uses a bottom-up strategy. Initially all different expectations are considered as different classes, and they are gradually generalized and integrated with others. The agent has two alternatives when the expectation fails. To make its knowledge compatible with the experience, the first alternative is to try to divide the scope of the schema, creating new schemas with more specialized contexts. Sometimes this is not possible, and then it reduces the schema expectation. Three basic methods compose the CALM learning function, namely: differentiation, adjustment, and integration. Differentiation is a necessary mechanism because a schema responsible for a too general context can hardly make precise anticipations. If a general schema does not work well, the mechanism divides it into new schemas, differentiating them by some element of the context or action vector. In fact, the differentiation method takes an unstable decider schema and changes it into a two-level sub-tree. The parent schema in this sub-tree preserves the context of the original schema. The children, which are the new decider schemas, have context vectors slightly more specialized than their parent. They attribute a value to some undefined element, dividing the scope of the original schema. Each of these new deciders engages itself in a part of the domain. In this way, the previously correct knowledge remains preserved, distributed over the new schemas, and the discordant situation is isolated and treated only in its specific context. Differentiation is the method responsible for making the anticipation tree grow. Each level of the tree represents the introduction of some new constraint. The algorithm needs to choose the differentiator element, which can come from either the context vector or the action vector. This differentiator needs to separate the situation responsible for the disequilibrium from the others, and the algorithm chooses it by calculating the information gain, considering a limited (parametrized) range of interdependencies between variables. Figure 2 illustrates the differentiation method. When some schema fails and it is not possible to differentiate it in any way, CALM executes the adjustment method. This method reduces the expectations of an unstable decider schema in order to make it reliable again. The algorithm simply compares the activated schema's expectation with the real result perceived by the agent after the application of the schema, setting the incompatible expectation elements to the undefined value ('#'). The adjustment method changes the schema expectation (and consequently the anticipation predicted by the schema). Successive adjustments can reveal some unnecessary differentiations.
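A minimal sketch of the two correction operations just described is given below, reusing the Schema class and WILDCARD symbol from the earlier fragment; the information-gain scoring and the generalized episodic memory are omitted, and the '=' symbol is ignored, so this is an approximation of the method rather than a faithful implementation.

    def adjust(schema, observed):
        """Adjustment: undefine every expectation element contradicted by the observed result."""
        schema.expectation = tuple(
            e if e == WILDCARD or e == o else WILDCARD
            for e, o in zip(schema.expectation, observed)
        )

    def differentiate(schema, element_index, values=('0', '1')):
        """Differentiation: turn an unstable decider into a two-level sub-tree, creating one
        child per possible value of the chosen differentiator element of the context vector
        (the same scheme applies to an element of the action vector)."""
        children = [
            Schema(
                context=schema.context[:element_index] + (v,) + schema.context[element_index + 1:],
                action=schema.action,
                expectation=schema.expectation,
            )
            for v in values
        ]
        schema.children = children   # the parent keeps the original, more general context
        return children

In the full mechanism, element_index would be chosen by the information-gain computation mentioned above, and integration would later merge children whose expectations converge to the same values.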
Figure 3 illustrates the adjustment method. In this way, the schema expectation can change (and consequently the class of the situation represented by the schema), and the tree maintenance mechanism needs to be able to reorganize the tree when this change occurs. Therefore, successive adjustments in the expectations of various schemas can reveal unnecessary differentiations. When CALM finds a group of schemas with similar expectations covering different contexts, the integration method comes into action, trying to join these schemas by searching for some unnecessary common differentiator element and eliminating it. The method operates as shown in figure 4. Dealing with the Unobservable When CALM reduces the expectation of a given schema by adjustment, it supposes that there is no deterministic regularity following the represented situation with respect to these incoherent elements, and that the related transformation is unpredictable. However, sometimes a prediction error could be explained by considering the existence of some abstract or hidden property in the environment, which could be useful to differentiate an ambiguous situation, but which is not directly perceived by the agent sensors. So, before adjusting, CALM supposes the existence of a non-sensory property in the environment, which will be represented as a synthetic element. When a new synthetic element is created, it is included as a new term in the context and expectation vectors of the schemas. Synthetic elements suppose the existence of something beyond the sensory perception, which can be useful to explain non-equilibrated situations. They have the function of amplifying the differentiation possibilities. In this way, when dealing with partially observable environments, CALM has two additional challenges: (a) inferring the existence of unobservable properties, which it will represent by synthetic elements, and (b) including these new elements in its predictive model. A good strategy for this task is to look at the historical information. In the case where the POMDP is completely deterministic, it is possible to find enough small pieces of history to distinguish and identify all the underlying states [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF][START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF], and we suppose that the situation is similar when the POMDP is non-deterministic but well structured. CALM introduces a method called abstract differentiation. When a schema fails in its prediction, and when it is not possible to differentiate it by the current set of considered properties, a new Boolean synthetic element is created, enlarging the context and expectation vectors. Immediately, this element is used to differentiate the incoherent situation from the others. The method attributes arbitrary values to this element in each differentiated schema. These values represent the presence or absence of some non-observable condition, necessary to determine the correct prediction in the given situation. The method is illustrated in figure 5, where the new elements are represented by card suits. Once a synthetic element is created, it can be used in subsequent differentiations. A new synthetic element will be created only if the existing ones are already saturated.
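The fragment below sketches abstract differentiation on top of the previous helpers: when no observable element separates the conflicting cases, every schema is widened with one extra Boolean slot (the synthetic element) and the failing schema is split on it. The widening convention (prepending the slot at position 0) and the function names are our own assumptions.

    def add_synthetic_element(all_schemas):
        """Prepend one synthetic (non-sensory) slot to every schema, undefined by default."""
        for s in all_schemas:
            s.context = (WILDCARD,) + s.context
            s.expectation = (WILDCARD,) + s.expectation

    def abstract_differentiation(failing_schema, all_schemas):
        """Split the failing schema on a fresh synthetic element: one child assumes that the
        hidden condition is present ('1'), the other that it is absent ('0')."""
        add_synthetic_element(all_schemas)
        return differentiate(failing_schema, element_index=0, values=('0', '1'))

A saturation test (are all existing synthetic elements already used as differentiators?) and the cap on their number described below would be added around this call in a fuller version.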
To avoid the problem of creating infinitely many new synthetic elements, CALM can do so only up to a determined limit, after which it considers that the problematic anticipation is not deterministically predictable, undefining the expectation in the related schemas by adjustment. Figure 6 explains didactically the idea behind synthetic element creation. The synthetic element is not associated with any sensory perception. Consequently, its value cannot be observed. This fact can place the agent in ambiguous situations, where it does not know whether some relevant but non-observable condition (represented by this element) is present or absent. Initially, the value of a synthetic element is verified a posteriori (i.e. after the execution of the action in an ambiguous situation). Once the action is executed and the following result is verified, the agent can rewind and deduce what situation was really faced at the past instant (disambiguated). Discovering the value of a synthetic element after the circumstance where this information was needed can seem useless, but in fact this delayed deduction feeds another method called abstract anticipation. If the non-observable property represented by this synthetic element has regular dynamics, then the mechanism can propagate the deduced value back to the schema activated at the immediately previous instant. The deduced synthetic element value will be included as a new anticipation in the previously activated schema. Figure 7 shows how this new element can be included in the predictive model. For example, at time t1 CALM activates the schema 1 = (#0 + c → #1), where the context and expectation are composed of two elements (the first one synthetic and the second one perceptive), and one action. Suppose that the next situation '#1' is ambiguous, because it excites both schemas 2 = (♣1 + c → #0) and 3 = (♦1 + c → #1). At this time, the mechanism cannot know the synthetic element value, which is crucial to determine what the real situation is. Suppose that, anyway, the mechanism decides to execute the action 'c' at time t2, and it is followed by the sensory perception '0' at t3. Now, at t3, the agent can deduce that the situation really dealt with at t2 was '♣1', and it can include this information in the schema activated at t1, in the form 1 = (#0 + c → ♣1). Experiment The CALM mechanism has already been used to successfully solve problems such as flip, which is also used by [START_REF] Singh | Learning Predictive State Representations[END_REF] and [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF][START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF], and wepp, which is an interesting situated RL problem. CALM is able to solve both of them by creating new synthetic elements to represent underlying states of the problem [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF]Álvares, 2007), (Perotto et al., 2007). In this paper, we introduce an experiment that we call meph (an acronym for the actions that the agent can perform: move, eat, play, hit).
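The delayed deduction in the example above can be sketched as follows, again reusing the earlier helpers; slot 0 of each vector is taken to be the synthetic element, the function name is ours, and ties (when several candidates match the observed result) are simply resolved by taking the first one, which the real mechanism would handle more carefully.

    def abstract_anticipation(previous_schema, candidates, observed_perception):
        """After acting in an ambiguous situation, find which excited schema actually explains
        the observed perceptive result, read the synthetic value required by its context, and
        write that value back into the expectation of the schema activated one step earlier."""
        for schema in candidates:
            perceptive_expectation = schema.expectation[1:]
            if all(e == WILDCARD or e == o
                   for e, o in zip(perceptive_expectation, observed_perception)):
                deduced = schema.context[0]                  # e.g. '♣' in the example above
                exp = list(previous_schema.expectation)
                exp[0] = deduced                             # new anticipation of the hidden value
                previous_schema.expectation = tuple(exp)
                return deduced
        return None

With the schemas of the example, observing '0' after action 'c' selects schema 2, whose context requires '♣', and that value is then written into the expectation of the schema activated at t1.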
In the meph experiment, the agent is inserted into a bi-dimensional environment (a "grid") where it should learn how to interact with the other agents and objects that can be found during its life. Our agent needs to create a policy of actions so as to optimize its feelings (internal rewards). When it is hungry, it feels good by eating some food (an object that can sometimes be found in the environment, among other non-eatable objects like stones). Agents and objects can be differentiated by observable features, meaning that our agent can sensorially distinguish what kind of "thing" it is interacting with. However, food and stones have the same appearance, and so, to discover whether an object is food or stone, the agent needs to experiment; that is, it needs to explicitly probe the environment in order to obtain more information about the situation, for example by hitting the object to hear what sound it makes. Figure 8 shows a random configuration of the environment. When our agent is excited, it finds pleasure in playing with another agent. However, the other agents also have internal emotional states; when another agent is angry, any attempt to play with it results in an aggression, which causes a disagreeable, painful sensation to our agent. Playing is enjoyable and safe only if both agents are excited, or at least if the other agent is not angry (and therefore not aggressive), but these emotional states are internal to each agent and cannot be directly perceived. With such a configuration, our agent will need to create new synthetic elements to be able to distinguish what is food and what is stone, and to distinguish which agents are aggressive and which are peaceable. At each time step, our agent can execute one of 4 actions: move, eat, play or hit. Moving is a good strategy to escape from aggressive agents, but it is also the action that changes the context, allowing the agent to search for different contexts. The agent does not precisely control the movement it makes; the action of moving causes a random rotation followed by a change of position to an adjacent cell. Eating is the good choice when the agent is hungry and in front of some food at the same time, an action that ceases the bad sensation caused by hunger. Playing is the good action to carry out when the agent is excited and in frontal contact with another non-aggressive agent. Hitting is the action that serves to interact with other objects without commitment; by doing it, the agent is able to resolve some ambiguous situations; for example, hitting a stone has no effect, while hitting food produces a funny sound. The same goes for other agents, which react with a noisy sound when hit, but only if they are already angry. The agent has two limited external perceptions, both focused on the cell directly in front of it: the sense of vision allows it to see whether the place in front of it contains an object, an agent, or nothing; the sense of hearing allows it to hear the sounds coming from there. The agent's body has 5 internal properties, corresponding to 5 equivalent internal perceptions: pain, anger, hunger, excitation, and pleasure. Pleasure always occurs when the agent plays with another agent, independently of the other agent's internal state (which is quite selfish). However, as we know, our agent can get punched if the other agent is angry, and in this case pain takes place. When our agent feels pain and hunger at the same time, it becomes angry too.
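To make explicit the dynamics that CALM has to discover, the fragment below simulates one step of the protagonist's internal state as we read the description above; the probabilities used for the reappearance of hunger and excitation are arbitrary guesses, and the whole function is an illustrative reconstruction, not the authors' simulator.

    import random

    def step_internal_state(state, action, front):
        """One step of the protagonist's feelings in meph (simplified).
        state: dict of booleans 'pain', 'anger', 'hunger', 'excitation', 'pleasure'
        action: one of 'move', 'eat', 'play', 'hit'
        front: dict describing the cell ahead, e.g. {'kind': 'agent', 'angry': True}"""
        s = dict(state, pleasure=False, pain=False)
        if action == 'eat' and front.get('kind') == 'food' and s['hunger']:
            s['hunger'] = False                      # eating food ceases hunger
        elif action == 'play' and front.get('kind') == 'agent':
            s['pleasure'] = True                     # playing is always pleasant for our agent
            s['pain'] = bool(front.get('angry'))     # ...but an angry partner punches back
        if not s['hunger']:
            s['hunger'] = random.random() < 0.1      # hunger may reappear (assumed probability)
        if s['pain'] and s['hunger']:
            s['anger'] = True                        # pain together with hunger turns anger on
        elif not s['pain'] and not s['hunger']:
            s['anger'] = False                       # neither pain nor hunger turns anger off
        if not s['anger'] and not s['hunger']:
            s['excitation'] = random.random() < 0.5  # excitation appears non-deterministically
        else:
            s['excitation'] = False
        return s

These are exactly the kinds of regularities that reappear, in schema form, in the anticipation trees of Figures 12 to 16.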
Initially the agent does not know anything about the environment or about its own sensations. It does not distinguish the situations, and it also does not know what consequences its actions imply. The problem becomes interesting because playing can provoke both positive and negative rewards, and the same holds for eating, which is an interesting behavior only in certain situations; it depends on the context where the action is executed, which is not fully observable by the sensors. This is the model that CALM needs to learn by itself before establishing the behavior policy. Figure 9 shows the involved variables, which compose the schemas' identifying vectors. Figures 10 to 17 show the anticipation trees created by the mechanism after stabilization. Figure 9: the vectors that compose the context of a schema; synthetic properties, perceptive properties, and controllable properties (actions). Figure 10: anticipation tree for hearing; the only action that provokes sound is hitting, and only if the thing hit is food or an angry agent; when the agent hits (H) an object or agent marked by the synthetic element (♣), which differentiates it from its confounding pair, it hears a sound at the next instant; if the action is anything other (*) than hitting, no sound is produced. Figure 11: anticipation tree for vision; when the agent hits (H) another agent (A), it verifies the permanence of this other agent in its visual field, which means that hitting an agent makes it stay in the same place; however, no other action (*) executed in front of an agent (A) can prevent it from leaving, and so the prediction is undefined (#); the same holds for moving (M), which causes a change of situation and makes the visual field unpredictable for the next instant; in all other cases the vision stays unchanged. Figure 12: anticipation tree for pain; the agent knows that playing (P) with an aggressive (♣) agent (A) causes pain; otherwise, no pain is caused. Figure 13: anticipation tree for pleasure; playing with a peaceable (♦) agent (A) is pleasant, and it is the only known way to reach this feeling. Figure 14: anticipation tree for excitation; when the agent feels neither anger (0) nor hunger (0), it can (eventually) become excited; this happens in a non-deterministic way, and for this reason the prediction is undefined in this case (#), which can be understood as a possibility; otherwise (**) excitation will certainly be absent. Figure 15: anticipation tree for hunger; eating (E) food (O♣) ceases hunger; otherwise, if the agent is already hungry, it will remain so, but if it is not yet hungry, it can become hungry. Figure 16: anticipation tree for anger; if there is neither pain nor hunger, anger turns off; if there are both pain and hunger, anger turns on; otherwise, anger does not change its state.
Figure 17: hidden element anticipation tree; this element allows identifying whether an object is food or stone, and whether an agent is angry or not; the CALM abstract anticipation method allows modeling the dynamics of this variable, even if it is not directly observable through sensory perception; the perception of the noise (1) is the result that enables the discovery of the value of this hidden property; the visual perception of an object (O), or the fact of hitting (H) another agent (A), also allows the agent to know that the hidden element does not change. Figure 18 shows the evolution of the mean reward, comparing the CALM solution with a random agent and with two classic Q-Learning [START_REF] Watkins | Q-learning[END_REF]Dayan, 1992) implementations: the first one non-situated (the agent has the vision of the entire environment as a flat state space), and the second one with (situated) inputs equivalent to those of CALM. The non-situated implementation of the Q-Learning algorithm (Classic Q) takes much more time to start drawing a convergence curve than the others, and in fact the expected solution will never be reached; this is due to the fact that Q-Learning tries to construct directly a mapping from states to actions (a policy), but the state space is taken as the combination of the state variables; in this implementation (because it is not situated) each cell in the environment composes a different variable, and the problem quickly becomes large; for the same reason, the agent becomes vulnerable to the growth of the board (the grid dimensions directly affect the complexity of the problem). The CALM solution converges much earlier than Q-Learning, even in its situated version, and CALM also finds a better solution; this is due to the fact that CALM quickly constructs a model to predict the environment dynamics, including the non-observable properties in the model, and so it is able to define a good policy sooner. The "pause" in the convergence that can be seen in the graph indicates two moments: first, the solution found before correctly modeling the hidden properties as synthetic elements, and then the solution found after having done so. On the other hand, Q-Learning stays attached to a probabilistic model, and in this case, without the information about the internal states of the other agents, trying to play with them becomes unsafe, and the Q-Learning solution prefers not to do it. Conclusions The CALM mechanism, presented in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], can provide autonomous adaptive capability to an agent, because it is able to incrementally construct knowledge representing the deterministic regularities observed during its interaction with the system, even in partially deterministic and partially observable environments. CALM can deal with incomplete observation through the induction and prediction of hidden properties, represented by synthetic elements; thus it is able to go beyond the limits of sensory perception, constructing more abstract terms to represent the system, and describing its dynamics at more complex levels.
CALM can be very efficient at constructing a model in non-deterministic environments if they are well structured; in other words, if most of the transformations are in fact deterministic relative to the underlying partially observable properties, and if the interdependence between variables is limited to a small range. Several problems found in the real world present these characteristics. The proposed experiment (meph) can be taken as a useful problem that, even if simple, challenges the agent to solve some intricate issues such as the interaction with other complex agents. The next step in this direction is to insert several cognitive agents like CALM into the same scenario; that is, agents that change their own internal models and policies of action and, in this way, present non-stationary behavior. Again, the task of one agent modeling the other agents under this kind of condition is even harder. Finally, we believe that the same strategy can be adapted to several kinds of classification tasks, where a previous database of samples is available. In this case, the algorithm learns to classify new instances based on a model created from a training set of instances that have been properly labeled with the correct classes. This task is similar to several real-world problems currently solved with the aid of computers, such as e-mail filtering, diagnostic systems, recommendation systems, decision support systems, and so on. Figure 2. Differentiation method; (a) experimented situation and action; (b) activated schema; (c) real observed result; (d) sub-tree generated by differentiation. Figure 3. Adjustment method; (a) experimented situation and action; (b) activated schema; (c) real observed result; (d) schema expectation reduction after adjustment. Figure 4. Integration method; (a) sub-tree after some adjustments; (b) an integrated schema substitutes the sub-tree. Figure 5. Synthetic element creation method; (d) incremented context and expectation vectors, and differentiation using the synthetic element. Figure 6: discovering the existence of non-observable properties; in (a) a real experienced sequence; in (b) what CALM does not do (the attribution of a probability); in (c) the creation of a synthetic element in order to explain the observed difference. Figure 7: predicting the dynamics of a non-observable property; in (a) a real experienced sequence; in (b) the pieces of knowledge that can explain the logic behind the observed transformations, including the change of the synthetic property. Figure 8: the simulation environment for the meph experiment, where the triangles represent the agents (looking and hearing forward), and the rounded squares represent the objects (food or stones). Figure 18: the evolution of mean reward in a typical execution of the meph problem, considering four different agent solutions: CALM, situated Q-Learning, Random, and Classic Q-Learning; the scenario is a 25x25 grid, where 10% of the cells are stones and 5% are food, in the presence of 10 other agents.
54,341
[ "18738", "18738" ]
[ "396069" ]
01762255
en
[ "info" ]
2024/03/05 22:32:13
2012
https://hal.science/hal-01762255/file/PEROTTO%20-%20ICAART%202012%20%28pre-print%20version%29.pdf
Filipo Studzinski Perotto email: [email protected] TOWARD SOPHISTICATED AGENT-BASED UNIVERSES Statements to introduce some realistic features into classic AI/RL problems Keywords: Agency Theory, Factored Partially Observable Markov Decision Process (FPOMDP), Constructivist Learning Mechanisms, Anticipatory Learning, Model-Based Reinforcement Learning In this paper we analyze some common simplifications present in the traditional AI / RL problems. We argue that only facing particular conditions, often avoided in the classic statements, will allow the overcoming of the actual limits of the science, and the achievement of new advances in respect to realistic scenarios. This paper does not propose any paradigmatic revolution, but it presents a compilation of several different elements proposed more or less separately in recent AI research, unifying them by some theoretical reflections, experiments and computational solutions. Broadly, we are talking about scenarios where AI needs to deal with true situatedness agency, providing some kind of anticipatory learning mechanism to the agent in order to allow it to adapt itself to the environment. INTRODUCTION Every scientific discipline starts by addressing specific cases or simplified problems, and by introducing basic models, necessary to initiate the process of understanding into a new domain of knowledge; these basic models eventually evolve to a more complete theory, and little by little, the research attains important scientific achievements and applied solutions. Artificial Intelligence (AI) is a quite recent discipline, and this fact can be easily noticed by regarding its history in the course of the years. If in the 1950s and 1960s AI was the stage for optimistic discourses about the realization of intelligence in machine, the 1970s and 1980s reveal an evident reality: true AI is a feat very hard to accomplish. This movement led AI to plunge into a more pragmatic and less dreamy period, when visionary ideas have been replaced by a (necessary) search for concrete outcomes. Not by chance, several interesting results have been achieved in these recent years, and it is changing the skepticism by a (yet timid) revival of the general AI field. If on one hand the AI discourse mood has changed like a sin wave, on the other hand the academic practice of AI shows a progressive increment of complexity with respect to the standard problems. When the solutions designed to some established problem become stable, known, and accepted, new problems and new models are proposed in order to push forward the frontier of the science, moving AI from toy problems to more realistic scenarios. Make a problem more realistic is not just increasing the number of variables involved (even if limiting the number of considered characteristics is one of the most recurrent simplifications). 
When trying to escape from AI classic maze problems toward more sophisticated (and therefore more complex) agent-based universes, we are led to consider several complicating conditions, like (a) the situatedness of the agent, which is immersed into an unknown universe, interacting with it through limited sensors and effectors, without any holistic perspective of the complete environment state, and (b) without any a priori model of the world dynamics, which forces it to incrementally discover the effect of its actions on the system in an on-line experimental way; to make matters worse, the universe where the agent is immersed can be populated by different kinds of objects and entities, including (c) other complex agents, which can have their own internal models, and in this case the task of learning a predictive model becomes considerably harder. In this paper, we use the Constructivist Anticipatory Learning Mechanism (CALM), defined in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], to support our assumption. In other words, we shows that the strategies used by this method can represent a changing of directions in relation to classic and yet dominant ways. CALM is able to build a descriptive model of the system where the agent is immersed, inducting, from the experience, the structure of a factored and partially observable Markov decision process (FPOMDP). Some positive results [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], (Perotto et al. 2007), (Perotto;Alvares, 2007), (Perotto, 2011), have been achieved due to the use of 4 integrated strategies: (a) the mechanism takes advantage of the situated condition presented by the agent, constructing a description of the system regularities relatively to its own point of view, which allows to set a good behavior policy without the necessity of "mapping" the entire environment; (b) the learning process is anchored on the construction of an anticipatory model of the world, which could be more efficient and more powerful than traditional "model free" reinforcement learning methods, that directly learn a policy; (c) the mechanism uses some heuristics designed to well structured universes, where conditional dependencies between variables exist in a limited scale, and where most of the phenomena can be described in a deterministic way, even if the system as whole is not (a partially deterministic environment); which seems to be widely common in real world problems; (d) the mechanism is prepared to discover the existence of hidden or non-observable properties of the universe, which enables it to explain a larger portion of the observed phenomena. Following the paper, section 2 overviews the MDP framework and the RL tradition, section 3 describes the CALM learning mechanism, section 4 shows some experiments and acquired results, and section 5 concludes the paper. MDP+RL FRAMEWORK The typical RL problem is inspired on the classic rat maze experiment; in this behaviorist test, a rat is placed in a kind of labyrinth, and it needs to find a piece of cheese (the reward) that is placed somewhere far from it, sometimes avoiding electric traps along the way (the punishment). The rat is forced to run the maze several times, and the experimental results show that it gradually discovers how to solve it. 
The computational version of this experiment corresponds to an artificial agent placed on a two-dimensional grid, moving over it and eventually receiving positive or negative reward signals. Exactly as in the rat maze, the agent must learn to coordinate its actions by trial and error, in order to avoid the negative rewards and quickly reach the positive ones. This computational experiment is formally represented by a geographical MDP, where each position in the grid corresponds to a state of the process; the process starts in the initial state, equivalent to the agent's starting position in the maze, and it evolves until the agent reaches some final reward state; then the process is reset and a new episode takes place; the episodes are repeated, and the algorithm is expected to learn a policy that maximizes the estimated discounted cumulative reward the agent will receive in subsequent episodes. These classic RL maze configurations present at least two positive points when compared to realistic scenarios: the agent needs to learn actively and on-line, meaning that there is no separate training period prior to the time of its life; the agent must perform and improve its behavior at the same time, without supervision, by trial and error. However, this kind of experiment cannot be taken as a general scheme for learning: on the one hand, the simplifications adopted (in order to eliminate some uncomfortable elements) cannot be ignored when dealing with more complex or realistic problems; on the other hand, important features are lacking in the classic RL maze, which makes it difficult to compare it to other natural learning situations. Some of these simplifications and missing features are listed below: Non-Situativity: in the classic RL maze configuration, the agent is not really situated in the environment; in fact, the little object moving on the screen (which is generally called the agent) is dissociated from the "agent as the learner"; the information available to the algorithm comes from above, from an external point of view, in which this moving agent appears as a controllable object of the environment, among the others. In contrast, realistic scenarios impose the agent's sensory function as an imprecise, local, and incomplete window onto the underlying real situation. Geographic Discrete Flat Representation: in classic mazes, the corresponding MDP is created by associating each grid cell with a process state; so the problem stays confined to the same two dimensions of the grid space, and the system states represent nothing more than the agent's geographic positions. In contrast, realistic problems introduce several new and different dimensions to the problem. The basic MDP model itself is conceived to represent a system by exhaustive enumeration of states (a flat representation), and it is not appropriate for representing multi-dimensional structured problems; the size of the state space grows exponentially with the number of considered attributes (the curse of dimensionality), which makes the use of this formalism viable only for simple or small scenarios. Disembodiment: in the classic configuration, the agent does not present any internal property; it is like a loose mind living directly in the environment; as a consequence, it can only be extrinsically motivated, i.e. the agent acts in order to attain (or to avoid) certain positions in the space, given from the outside.
In natural scenarios, the agent has a "body" playing the role of an intermediary between the mind and the external world; the body also represents an "internal environment", and the goals the agent needs to reach are given from this embodied perspective (in relation to the dynamics of some internal properties). Complete Observation: the basic MDP frames the agent as an omniscient entity; the learning algorithm observes the system in its totality, it knows all the possible states, it can precisely perceive what state the system is in at every moment, and it knows the effect of its actions on the system; in general it is the only source of perturbation in the world dynamics. These conditions are far from common in real-world problems. Episodic Life and Behaviorist Solution: in the classic statement, the system presents initial and final states, and the agent lives by episodes; when it reaches a final state, the system restarts. This is generally not the case in real-life problems, where agents live a unique, continuous, uninterrupted experience. Also, solving an MDP is often synonymous with finding an optimal (or near-optimal) policy, and accordingly most of the algorithms proposed in the literature are model-free. However, in complex environments, the only way to define a good policy is to "understand" what is going on, creating an explicative or predictive model of the world, which can then be used to establish the policy. The Basic MDP The Markov Decision Process (MDP) and its extensions constitute a quite popular framework, widely used for modeling decision-making and planning problems [START_REF] Feinberg | Handbook of Markov Decision Processes: methods and applications[END_REF]. An MDP is typically represented as a discrete stochastic state machine; at each time cycle the machine is in some state s; the agent interacts with the process by choosing some action a to carry out; then the machine changes into a new state s' and gives the agent a corresponding reward r; a given transition function δ defines the way the machine changes according to s and a. The flow of an MDP (the transition between states) depends only on the current system state and on the action taken by the agent at that time. After acting, the agent receives a reward signal, which can be positive or negative if certain particular transitions occur. Solving an MDP means finding the optimal (or near-optimal) policy of actions in order to maximize the rewards received by the agent over time. When the MDP parameters are completely known, including the reward and the transition functions, it can be mathematically solved by dynamic programming (DP) methods. When these functions are unknown, the MDP can be solved by reinforcement learning (RL) methods, designed to learn a policy of actions on-line, i.e. while the agent interacts with the system, by incrementally estimating the utility of state-action pairs and then mapping situations to actions [START_REF] Sutton | Reinforcement Learning: an introduction[END_REF]. However, for a wide range of complex (including real-world) problems, complete information about the exact state of the environment is not available. This kind of problem is often represented as a Partially Observable MDP (POMDP) [START_REF] Kaelbling | Planning and acting in partially observable stochastic domains[END_REF].
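Before turning to partial observability, the tabular, model-free approach mentioned above can be made concrete with a minimal sketch (Python; the episodic environment interface, with reset(), step() and a finite action list, is an illustrative assumption, not something defined in this paper):

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: directly estimates the utility of state-action
    pairs, without ever building a model of the transition function."""
    Q = defaultdict(float)                              # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        s, done = env.reset(), False                    # new episode, as in the maze setting
        while not done:
            if random.random() < epsilon:               # epsilon-greedy exploration
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)                   # one interaction with the system
            best_next = 0.0 if done else max(Q[(s2, act)] for act in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])   # temporal-difference update
            s = s2
    return Q

Such a learner maps situations to actions directly; the model-based alternative advocated in this paper instead tries to learn the transformation function first.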
The POMDP provides an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which the system states are observable only indirectly, via a set of imperfect, incomplete or noisy perceptions. In a POMDP, the set of observations is different from the set of states, but related to them by an observation function; i.e. the underlying system state s cannot be directly perceived by the agent, which has access only to an observation o. We can represent a larger set of problems using POMDPs rather than MDPs, but the methods for solving them are computationally even more expensive [START_REF] Hauskrecht | Value-function approximations for partially observable Markov decision processes[END_REF]. The main bottleneck in the use of MDPs or POMDPs is that representing complex universes implies an exponential growth of the state space, and the problem quickly becomes intractable. Fortunately, most real-world problems are quite well structured; many large MDPs have significant internal structure and can be modeled compactly; the factorization of states is an approach that exploits this characteristic [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF]. In the factored representation, a state is implicitly described by an assignment to some set of state variables. Thus, the complete enumeration of the state space is avoided, and the system can be described by referring directly to its properties. The factorization of states makes it possible to represent the system in a very compact way, even if the corresponding MDP is exponentially large [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF]. When the structure of the Factored Markov Decision Process (FMDP) is completely described, some known algorithms can be applied to find good policies in a quite efficient way [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF]. However, research concerning the discovery of the structure of an underlying system from incomplete observation is still incipient [START_REF] Degris | Factored Markov Decision Processes[END_REF]. FPOMDP The classic MDP model can be extended to include both factorization of states and partial observation, thus composing a Factored Partially Observable Markov Decision Process (FPOMDP). In order to be factored, the atomic elements of the non-factored representation are decomposed and replaced by a combined set of elements. An FPOMDP (Guestrin et al., 2001), [START_REF] Hansen | Dynamic programming for POMDPs using a factored state representation[END_REF], [START_REF] Poupart | VDCBPI: an approximate scalable algorithm for large scale POMDPs[END_REF], [START_REF] Shani | Model-Based Online Learning of POMDPs[END_REF], [START_REF] Sim | Symbolic Heuristic Search Value Iteration for Factored POMDPs[END_REF], can be formally defined as a 4-tuple {X, C, R, T}. The state space is factored and represented by a finite non-empty set of system properties or variables X = {X1, X2, ...,
Xn}, which is divided into two subsets, X = P ∪ H, where the subset P contains the observable properties (those that can be accessed through the agent's sensory perception) and the subset H contains the hidden or non-observable properties; each property Xi is associated with a specified domain, which defines the values the property can assume; C = {C1, C2, ..., Cm} represents the controllable variables, composing the agent actions; R = {R1, R2, ..., Rk} is a set of (factored) reward functions, of the form Ri : Pi → IR; and T = {T1, T2, ..., Tn} is a set of transformation functions, Ti : X × C → Xi, defining the system dynamics. Each transformation function can be represented by a Dynamic Bayesian Network (DBN), which is an acyclic, oriented, two-layer graph. The first-layer nodes represent the environment state at time t, and the second-layer nodes represent the next state, at t+1 [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF]. A stationary policy π is a mapping X → C where π(x) defines the action to be taken in a given situation. The agent must learn a policy that optimizes the cumulative rewards received over a potentially infinite time horizon. Typically, the solution π* is the policy that maximizes the expected discounted reward sum E[Σt γ^t r_t], where γ is the discount factor and r_t is the reward received at time t. In this paper, we consider the case where the agent does not have an a priori model of the universe where it is situated (i.e. it does not have any idea about the transformation function), and this condition forces it to be endowed with some capacity for learning, in order to be able to adapt itself to the system. Although it is possible to directly learn a policy of actions, in this work we are interested in model-based methods, through which the agent must learn a descriptive and predictive model of the world and then define a behavior strategy based on it. Learning a predictive model is often referred to as learning the structure of the problem. In this way, when the agent is immersed in a system represented as an FPOMDP, the complete task for its anticipatory learning mechanism is both to create a predictive model of the world dynamics (i.e. inducing the underlying transformation function of the system) and to define an optimal (or sufficiently good) policy of actions, in order to establish a behavioral strategy. [START_REF] Degris | Factored Markov Decision Processes[END_REF] present a good overview of the use of this representation in artificial intelligence, reviewing algorithms designed to learn and solve FMDPs and FPOMDPs. ANTICIPATORY LEARNING In the artificial intelligence domain, anticipatory learning mechanisms refer to methods, algorithms, processes, machines, or any particular system that enables an autonomous agent to create an anticipatory model of the world in which it is situated. An anticipatory model of the world (also called a predictive environmental model, or forward model) is an organized set of knowledge that allows inferring the events that are likely to happen. For cognitive sciences in general, the term anticipatory learning mechanism can be applied to humans or animals to describe the way these natural agents learn to anticipate the phenomena experienced in the real world, and to adapt their behavior to them [START_REF] Perotto | Anticipatory Learning Mechanisms[END_REF]. When immersed in a complex universe, an agent (natural or artificial) needs to be able to compose its actions with the other forces and movements of the environment.
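A forward model of this kind, in the factored setting defined above, can be sketched as one learned predictor per state variable; the sketch below (Python) is purely illustrative, and the choice of parent variables for each predictor is assumed to be given rather than learned:

class FactoredForwardModel:
    """One transformation function Ti per variable Xi: each Ti looks only at
    the small subset of variables (and actions) declared as its parents,
    which is what keeps the factored representation compact."""

    def __init__(self, parents_of):
        # parents_of: dict mapping each variable name to the tuple of
        # variable/action names its next value is assumed to depend on
        self.parents_of = parents_of
        self.tables = {var: {} for var in parents_of}   # parent values -> anticipated next value

    def _key(self, state, action, parents):
        merged = {**state, **action}                    # both are dicts of name -> value
        return tuple(merged.get(p) for p in parents)

    def observe(self, state, action, next_state):
        """Record one experienced transition, variable by variable."""
        for var, parents in self.parents_of.items():
            self.tables[var][self._key(state, action, parents)] = next_state[var]

    def predict(self, state, action):
        """Anticipate the next value of every variable (None while unknown)."""
        return {var: self.tables[var].get(self._key(state, action, parents))
                for var, parents in self.parents_of.items()}

This memorizing predictor only captures deterministic regularities over known parents; the mechanism described in the next section additionally has to discover which elements matter and to cope with hidden variables.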
In most cases, the only way for the agent to compose its actions with the ongoing dynamics of the environment is by understanding what is happening, and thus by anticipating what will (most likely) happen next. A predictive model can be very useful as a tool to guide behavior; the agent has a perception of the current state of the world, and it decides what actions to perform according to its expectations about the way the situation will probably change. The necessity of being endowed with an anticipatory learning mechanism is more evident when the agent is fully situated and completely autonomous; that is, when the agent is on its own, interacting with an unknown, dynamic, and complex world through limited sensors and effectors, which give it only a local point of view of the state of the universe and only partial control over it. Realistic scenarios can only be successfully faced by an agent capable of discovering the regularities that govern the universe, understanding the causes and consequences of the phenomena, identifying the forces that influence the observed changes, and mastering the impact of its own actions on the ongoing events. CALM Mechanism The constructivist anticipatory learning mechanism (CALM), detailed in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], is a mechanism developed to enable an agent to learn the structure of the unknown environment where it is situated, through observation and experimentation, creating an anticipatory model of the world. CALM carries out the learning process in an active and incremental way, learning the world model as well as the policy at the same time it acts. The agent has a single uninterrupted interactive experience in the system, over a theoretically infinite time horizon. It needs to perform and to learn at the same time. The environment is only partially observable from the point of view of the agent. So, to be able to create a coherent world model, the agent needs, beyond discovering the regularities of the phenomena, also to discover the existence of non-observable variables that are important for understanding the system evolution. In other words, learning a model of the world goes beyond describing the environment dynamics, i.e. the rules that can explain and anticipate the observed transformations; it also means discovering the existence of hidden properties (since they influence the evolution of the observable ones) and finding a way to deduce the dynamics of these hidden properties. In short, the system as a whole is in fact an FPOMDP, and CALM is designed to discover the existence of non-observable properties, integrating them into its anticipatory model. In this way CALM induces a structure that represents the dynamics of the system in the form of an FMDP (because the hidden variables become known), and there are algorithms able to efficiently calculate the optimal (or near-optimal) policy when the FMDP is given [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF]. CALM tries to reconstruct, from experience, each transformation function Ti, which will be represented by an anticipation tree. Each anticipation tree is composed of pieces of knowledge called schemas, which represent some perceived regularity occurring in the environment by associating context (sensory and abstract), actions and expectations (anticipations). Some elements in these vectors can take an "undefined value".
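A minimal data-structure sketch of such a schema (Python; purely illustrative, not the authors' implementation) could be the following, with '#' standing for the undefined value:

class Schema:
    """One piece of anticipatory knowledge: context + action -> expectation.
    '#' in the context or action means "ignored"; '#' in the expectation
    means "not deterministically predictable"."""

    def __init__(self, context, action, expectation):
        self.context = context          # e.g. "10#"
        self.action = action            # e.g. "#1"
        self.expectation = expectation  # e.g. "1#0"

    def matches(self, perception, action):
        """The schema is excited when every defined element agrees."""
        return all(c in ('#', p) for c, p in zip(self.context, perception)) \
           and all(a in ('#', b) for a, b in zip(self.action, action))

    def anticipation_ok(self, result):
        """Check the observed result against the defined expectations only."""
        return all(e in ('#', r) for e, r in zip(self.expectation, result))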
For example, an element linked to a binary sensor must have one of three values: true, false or undefined (represented, respectively, by '1', '0' and '#'). The learning process happens through the refinement of the set of schemas. After each experienced situation, CALM updates a generalized episodic memory, and then it checks whether the result (the context perceived at the instant following the action) conforms to the expectation of the activated schema. If the anticipation fails, the error between the result and the expectation serves as a parameter to correct the model. The context and action vectors are gradually specialized by differentiation, each time adding a new relevant feature to identify the situation class more precisely. The expectation vector can be seen as a label on each "leaf" schema, and it represents the predicted anticipation when the schema is activated. Initially all different expectations are considered as different classes, and they are gradually generalized and integrated with others. The agent has two alternatives when the expectation fails. In order to make the knowledge compatible with the experience, the first alternative is to try to divide the scope of the schema, creating new schemas with more specialized contexts. Sometimes this is not possible and the only option is to reduce the schema expectation. CALM creates one anticipation tree for each property it judges important to predict. Each tree is supposed to represent the complete dynamics of the property it describes. From this set of anticipation trees, CALM can construct a deliberation tree, which will define the policy of actions. In order to incrementally construct all these trees, CALM implements 5 methods: (a) sensory differentiation, to make the tree grow (by creating new specialized schemas); (b) adjustment, to abandon the prediction of non-deterministic events (and reduce the schema expectations); (c) integration, to control the tree size, pruning and joining redundant schemas; (d) abstract differentiation, to induce the existence of non-observable properties; and (e) abstract anticipation, to discover and integrate these non-observable properties into the dynamics of the model. Sometimes a disequilibrating event can be explained by considering the existence of some abstract or hidden property in the environment, which would be able to differentiate the situation but which is not directly perceived by the agent's sensors. So, before adjusting, CALM supposes the existence of a non-sensory property in the environment, which it will represent as an abstract element. Abstract elements suppose the existence of something beyond the sensory perception, which can be useful to explain non-equilibrated situations. They have the function of amplifying the differentiation possibilities. EXPERIMENTS In (Perotto et al., 2007) the CALM mechanism is used to solve the flip problem, which creates a scenario where the discovery of underlying non-observable states is the key to solving the problem, and CALM is able to do so by creating a new abstract element to represent these states. In [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF] and (Perotto;Álvares, 2007) the CALM mechanism is used to solve the wepp problem, an interesting situated RL problem on a bi-dimensional grid, where the agent must learn how to behave considering the interference of several dimensions of the environment and of its own body.
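(For reference, the flip problem is a tiny two-state environment: actions l and r deterministically set a hidden left or right state, u keeps it unchanged, and the only observation is 1 when the hidden state has just changed and 0 otherwise. A minimal sketch, with illustrative names, is given below.)

class FlipEnvironment:
    """Two hidden states ('L', 'R'); the agent never observes the state itself,
    only whether its last action changed it."""

    def __init__(self):
        self.state = 'L'

    def step(self, action):            # action in {'l', 'r', 'u'}
        previous = self.state
        if action == 'l':
            self.state = 'L'
        elif action == 'r':
            self.state = 'R'
        # 'u' leaves the hidden state unchanged
        return 1 if self.state != previous else 0   # the only observation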
In the wepp problem, the agent initially does not know anything about the world or about its own sensations, and it does not know what consequences its actions imply. Figure 1 shows the evolution of the mean reward, comparing the CALM solution with a classic Q-Learning implementation (where the agent has a view of the entire environment as a flat state space), and with a situated version of the Q-Learning agent. We see exactly two levels of performance improvement. First, the non-situated implementation (Classic Q) takes much more time to start an incomplete convergence, and it is vulnerable to the growth of the board. Second, the CALM solution converges much earlier than Q-Learning, taken in its situated version, due to the fact that CALM quickly constructs a model to predict the environment dynamics and is thus able to define a good policy sooner. CONCLUSIONS Over the last twenty years, several anticipatory learning mechanisms have been proposed in the artificial intelligence literature. Even if some of them are impressive in theoretical terms, having achieved recognition from the academic community, for real-world problems (like robotics) no general learning mechanism has prevailed. Until now, the intelligent artifacts developed in universities and research laboratories are far less wondrous than those imagined by science fiction. However, the continuous progress in the AI field, combined with the progress of informatics itself, is leading us to a renewed increase of interest in the search for more general intelligent mechanisms, able to face the challenge of complex and realistic problems. A change of direction with respect to the traditional ways of stating problems in AI is needed. The CALM mechanism, presented in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], has been used as an example of this, because it provides autonomous adaptive capability to an agent, enabling it to incrementally construct knowledge to represent the regularities observed during its interaction with the system, even in non-deterministic and partially observable environments.
29,146
[ "18738" ]
[ "34499" ]
01762260
en
[ "info" ]
2024/03/05 22:32:13
2007
https://hal.science/hal-01762260/file/Perotto.pdf
Studzinski Filipo Perotto email: [email protected] Jean-Christophe Buisson email: [email protected] Luis Otávio Alvares email: [email protected] Constructivist Anticipatory Learning Mechanism (CALM) - dealing with partially deterministic and partially observable environments This paper presents CALM (Constructivist Anticipatory Learning Mechanism), an agent learning mechanism based on a constructivist approach. It is designed to deal dynamically and interactively with environments which are at the same time partially deterministic and partially observable. We describe the mechanism in detail, explaining how it represents knowledge and how the learning methods operate. We analyze the kinds of environmental regularities that CALM can discover, trying to show that our proposition moves towards the construction of more abstract or high-level representational concepts. Introduction The real world is a very complex environment, and the transition from sensorimotor intelligence to symbolic intelligence is an important aspect in explaining how human beings successfully deal with it [START_REF] Piaget | Construction of Reality in the Child[END_REF]. The problem is the same for a situated artificial agent (like a robot), which needs to incrementally learn the observed regularities by interacting with the world. In complex environments (Goldstein 1999), special 'macroscopic' properties emerge from the functional interactions of 'microscopic' elements, and generally these emergent characteristics are not present in any of the sub-parts that generate them. The salient phenomena in this kind of environment tend to be related to high-level objects and processes [START_REF] Thornton | Indirect sensing through abstractive learning[END_REF], and in this case it is plainly inadequate to represent the world only in primitive sensorimotor terms [START_REF] Drescher | Made-Up Minds: A Constructivist Approach to Artificial Intelligence[END_REF]. An intelligent agent (human or artificial) living in these conditions needs to be able to go beyond the limits of direct sensorial perception, organizing the universe in terms of more abstract concepts. The agent needs to be able to detect high-level regularities in the environment dynamics, but this is not possible if it is confined to a rigid 'representational vocabulary'. The purpose of this paper is to present an agent learning architecture, inspired by a constructivist conception of intelligence [START_REF] Piaget | Construction of Reality in the Child[END_REF], capable of creating a model to describe its universe, using abstract elements to represent unobservable properties. The paper is organized as follows: Sections 2 and 3 describe the agent and the environment conceptions. Sections 4 and 5 present the basic CALM mechanism, detailing respectively how knowledge is represented and how it is learned. Section 6 presents the way to deal with hidden properties, showing how these properties can be discovered and predicted through synthetic elements. Section 7 presents example problems and solutions following the proposed method. Section 8 compares related works, and section 9 concludes the paper, arguing that this is an important step towards a more abstract representation of the world and pointing out some next steps. Agent and Environment The concepts of agent and environment are mutually dependent, and they need to be defined one in relation to the other.
In this work, we adopt the notions of situated agent and properties-based environment. A situated agent is an entity embedded in and part of an environment, which is only partially observable through its sensorial perception, and only partially liable to be transformed by its actions [START_REF] Suchman | Plans and Situated Actions[END_REF]. Due to the fact that sensors are limited in some manner, a situated agent can find itself unable to distinguish between differing states of the world. A situation could be perceived in different forms, and different situations could seem the same. This ambiguity in the perception of states, also referred to as perceptual aliasing, has serious effects on the ability of most learning algorithms to construct consistent knowledge and stable policies [START_REF] Crook | Learning in a State of Confusion: Perceptual Aliasing in Grid World Navigation[END_REF]. An agent is supposed to have motivations, which in some way represent its goals. Classically, the machine learning problem means enabling an agent to autonomously construct policies to maximize its goal-reaching performance. The model-based strategy separates the problem into two parts: (a) construct a world model and, based on it, (b) construct a policy of actions. CALM (Constructivist Anticipatory Learning Mechanism) responds to the task of constructing a world model. It tries to organize the sensorial information in a way that represents the regularities in the interaction of the agent with the environment. There are two common ways to describe an environment: either based on states, or based on properties. A states-based environment can be expressed by a generalized state machine, frequently defined as a POMDP [START_REF] Singh | Learning Predictive State Representations[END_REF] or as an FSA [START_REF] Rivest | Diversity-based inference of finite automata[END_REF]. We define it as Є = {Q, A, O, δ, γ}, where Q is a finite non-empty set of underlying states, A is a set of agent actions, O is a set of agent observations, δ : Q × A → Q is a transition function, which describes how states change according to the agent actions, and γ : Q → O is an observation function, which gives some perceptive information related to the current underlying environment state. A properties-based environment can be expressed by ξ = {F, A, τ}, where F is a finite non-empty set of properties, composed of F(p), the subset of perceptible or observable properties, and F(h), the subset of hidden or unobservable properties, A is a set of agent actions, and τ(i) : F1 × F2 × ... × Fk × A → Fi is a set of transformation functions, one for each property Fi in F, describing the changes in property values according to the agent actions. The environment description based on properties (ξ) has some advantages over the description based on states (Є) in several cases. Firstly, because it promotes a clearer relation between the environment and the agent perception. In general we assume that there is one sensor for each observable property. Secondly, because the state identity can be distributed over the properties. In this way, it is possible to represent 'generalized states' and, consequently, compact descriptions of the transformation functions, generalizing those elements in the function domain that are not significant for describing the transformation.
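To make the contrast concrete, the following sketch (Python) shows a toy properties-based environment with one transformation function per property and a hidden property excluded from the observation; the three properties and all names are illustrative assumptions, not taken from the paper:

# F = {door_open, light_on (hidden), position}; one tau_i per property
def tau_door(props, action):
    return (not props['door_open']) if action == 'toggle_door' else props['door_open']

def tau_light(props, action):            # hidden property: never returned to the agent
    return (not props['light_on']) if action == 'switch' else props['light_on']

def tau_position(props, action):
    return props['position'] + 1 if action == 'forward' and props['door_open'] else props['position']

class PropertiesEnvironment:
    observable = ('door_open', 'position')          # F(p); 'light_on' belongs to F(h)

    def __init__(self):
        self.props = {'door_open': False, 'light_on': False, 'position': 0}

    def step(self, action):
        self.props = {'door_open': tau_door(self.props, action),
                      'light_on': tau_light(self.props, action),
                      'position': tau_position(self.props, action)}
        return {k: self.props[k] for k in self.observable}   # partial observation only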
A discussion that corroborates our assumptions can be read in (Triviño-Rodriguez and Morales-Bueno 2000). The most compact expression of the transition function represents the environment regularities, in the form λ*(i) : F1|ε × F2|ε × ... × Fk|ε × A|ε → Fi. This notation is similar to that used in grammars, and it means that each property j can either be considered in the domain or not (Fj|ε). Types of Environments We adopt the properties-based description (ξ), defining three axes to characterize different types of environments. The axis ∂ represents the determinism of the environment transformations, the axis ω indicates the perceptive accessibility that the agent has to the environment, and the axis ϕ represents the information gain related to the properties. The determinism axis level (∂) is equal to the proportion of deterministic transformations in τ with respect to the total number of transformations. So, in the completely non-deterministic case (∂ = 0), the transformation function (τ) for any property i needs to be represented as F1 × F2 × ... × Fk × A → Π(Fi), where Π(Fi) is a probability distribution. On the other hand, in the completely deterministic case (∂ = 1), every transformation can be represented directly by F1 × F2 × ... × Fk × A → Fi. An environment is said to be partially deterministic if it is situated between these two extremities of the axis (0 < ∂ < 1). When ∂ = 0.5, for example, half of the transformations in τ are deterministic, and the other half are stochastic. It is important to note that a single transition in the function δ of an environment represented by states (Є) is equivalent to k transformations in the function τ of the same environment represented by properties (ξ). So, if only one of the transformations composing the transition is non-deterministic, the whole transition will be non-deterministic. Conversely, a non-deterministic transition can contain some deterministic component transformations. This is another advantage of using the properties representation, when we combine it with a learning method based on the discovery of deterministic regularities. The accessibility axis level (ω) represents the degree of perceptive access to the environment. It is equal to the proportion of observable properties in F with respect to the total number of properties. If ω = 1 then the environment is said to be completely observable, which means that the agent has sensors to observe all the environment properties directly. In this case there is no perceptual confusion, and the agent always knows what its current situation is. If ω < 1, then the environment is said to be partially observable. The smaller ω, the higher the proportion of hidden properties. When ω is close to 0, the agent is no longer able to identify the current situation only in terms of its perception. In other words, partially observable environments present some properties that are determinant for a good world model but cannot be directly perceived by the agent. Such environments can appear to be arbitrarily complex and non-deterministic on the surface, but they can actually be deterministic and predictable with respect to unobservable underlying elements [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF]. There is a dependence relation between these two axes (accessibility and determinism). The more sensors an agent has to perceive complex elements and phenomena, the more deterministic the environment will appear to it.
Finally, the informational axis level (ϕ) is equivalent to the inverse of the average number of generalizable properties used to represent the environment regularities (λ*), divided by the total number of properties in F. The greater ϕ (approaching 1), the more compactly the transformation function (τ) can be expressed in terms of regularities (λ*). In other words, higher levels of ϕ mean that the information about the environment dynamics is concentrated in the properties (i.e. there is just a small subset of highly relevant properties for each prediction), and lower levels of ϕ indicate that the information about the dynamics is diffusely distributed over the whole set of properties, in which case the agent needs to describe each transformation as a function of almost all properties. Learning methods based on the discovery of regularities can be very efficient in environments where the properties are highly informative. The Basic Idea In this section we present the basic CALM mechanism, which is developed to model its interaction with a completely observable but partially deterministic environment (COPDE), where ω = 1, but ∂ < 1 and ϕ < 1. CALM tries to construct a set of schemas to represent the regularities perceived in the environment through its interactions. Each schema represents some regularity checked by the agent during its interaction with the world. It is composed of three vectors: Ξ = (context + action → expectation). The context and expectation vectors have the same length, and each of their elements is linked to one sensor. The action vector is linked to the effectors. In a specific schema, the context vector represents the set of equivalent situations where the schema is applicable. The action vector represents a set of similar actions that the agent can carry out in the environment. The expectation vector represents the expected result after executing the given action in the given context. Each vector element can assume any value in a discrete interval defined by the respective sensor or effector. Some elements in these vectors can take the undefined value. For example, an element linked to a binary sensor must have one of three values: true, false or undefined (represented, respectively, by '1', '0' and '#'). In both the context and action vectors, '#' represents something ignored, not relevant for making the anticipations. But for the expectation vector, '#' means that the element is not deterministically predictable. The undefined value generalizes the schema because it makes it possible to ignore some properties when representing a set of situations. There is compatibility between a schema and a certain situation when all the defined elements of the schema's context vector are equal to those of the agent's perception. Note that compatibility does not compare the undefined elements. For example, a schema with context vector '100#' is able to assimilate the compatible situations '1000' or '1001'. The use of undefined values makes the construction of a schematic tree possible. Each node in that tree is a schema, and relations of generalization and specialization guide its topology (quite similar to decision trees or discrimination trees). The root node represents the most generalized situation, which has the context and action vectors completely undefined. Adding one level to the tree means specializing one generalized element, creating a branch where the undefined value is replaced by different defined values.
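The compatibility test just defined, together with the selection of the most specific compatible schema (the decider, described next), can be sketched as follows (Python, illustrative; tree nodes are assumed to carry a context string and a list of children, and for simplicity the action part of the specialization is omitted):

def is_compatible(context, perception):
    """Every defined element of the context must equal the corresponding
    element of the perception; '#' elements are simply ignored."""
    return all(c == '#' or c == p for c, p in zip(context, perception))

def find_decider(node, perception):
    """Descend the schematic tree, following at each level the branch whose
    newly defined element matches the perception, down to a leaf schema."""
    for child in getattr(node, 'children', []):
        if is_compatible(child.context, perception):
            return find_decider(child, perception)
    return node            # no compatible child: this node acts as the decider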
This specialization, introduced by each new level, occurs either in the context vector or in the action vector. The structure of the schemas and their organization as a tree are presented in Figure 1. The context in which the agent finds itself at a given moment (perceived through its sensors) is applied to the tree, exciting all the schemas that have a compatible context vector. This process defines a set of excited schemas, each one suggesting a different action to perform in the given situation. CALM will choose one to activate, and then the action proposed by the activated schema will be performed through the agent's effectors. The algorithm always chooses the compatible schema that has the most specific context, called the decider schema, which is the leaf of a differentiated branch. Each decider has a kind of episodic memory, which represents (in a generalized form) the specific, real situations experienced in the past during its activations. Learning Methods The learning process happens through the refinement of the set of schemas. The agent becomes more adapted to its environment as a consequence of that. After each experienced situation, CALM checks whether the result (the context perceived at the instant following the action) conforms to the expectation of the activated schema. If the anticipation fails, the error between the result and the expectation serves as a parameter to correct the tree or to adjust the schema. The mechanism combines top-down and bottom-up learning strategies. In the schematic tree topology, the context and action vectors are considered together. This concatenated vector identifies the node in the tree, which grows using the top-down strategy. The agent has just one initial schema. This root schema has a completely general context vector (without any differentiation, e.g. '#####') and a totally specific expectation vector (without any generalization, e.g. '01001'), created at the first experienced situation as a mirror of the result directly observed after the action. The context vector will be gradually specialized by differentiation. In more complex environments, the number of features the agent senses is huge and, in general, only a few of them are relevant to identify the situation class (1 > ϕ >> 0). In this case, a top-down strategy seems to be better, because it is a shorter path to begin with an empty vector and search for these few relevant features to complete it, than to begin with a full vector and have to eliminate a lot of useless elements. Selecting a good set of relevant features to represent some given concept is a well-known problem in AI, and the solution is not easy, even for approximate approaches. As will be seen, we adopt a kind of forward greedy selection [START_REF] Blum | Selection of relevant features and examples in machine learning[END_REF]. The expectation vector can be seen as a label on each decider schema, and it represents the predicted anticipation when the decider is activated. The evolution of expectations uses a bottom-up strategy. Initially all different expectations are considered as different classes, and they are gradually generalized and integrated with others. The agent has two alternatives when the expectation fails. In order to make the knowledge compatible with the experience, the first alternative is to try to divide the scope of the schema, creating new schemas with more specialized contexts. Sometimes this is not possible and the only option is to reduce the schema expectation.
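These two alternatives (specializing the context or reducing the expectation) can be sketched as follows (Python, illustrative names); the three CALM methods that implement and complement them are detailed next.

def adjust(expectation, result):
    """Reduce the expectation: keep only the elements that were confirmed by
    the observed result, and set the failed ones to the undefined value '#'."""
    return ''.join(e if e == r else '#' for e, r in zip(expectation, result))

def differentiate(context, position):
    """Specialize a too-general schema: produce two child contexts that fix
    one previously undefined element to '0' and to '1' respectively
    (in CALM the differentiator position is chosen by information gain)."""
    assert context[position] == '#'
    return (context[:position] + '0' + context[position + 1:],
            context[:position] + '1' + context[position + 1:])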
Three basic methods compose the CALM learning function, namely: differentiation, adjustment and integration. Differentiation is a necessary mechanism because a schema responsible for a context that is too general can hardly make precise anticipations. If a general schema does not work well, the mechanism divides it into new schemas, differentiating them by some element of the context or action vector. In fact, the differentiation method takes an unstable decider schema and changes it into a two-level sub-tree. The parent schema in this sub-tree preserves the context of the original schema. The children, which are the new decider schemas, have context vectors slightly more specialized than their parent. They attribute a value to some undefined element, dividing the scope of the original schema. Each of these new deciders engages itself in a part of the domain. In this way, the previously correct knowledge remains preserved, distributed among the new schemas, and the discordant situation is isolated and treated only in its specific context. Differentiation is the method responsible for making the schematic tree grow. Each level of the tree represents the introduction of some constraint into the context vector. The method is illustrated in Figure 2. The algorithm needs to choose which element will be the differentiator, and it can come from either the context vector or the action vector. This differentiator needs to separate the situation responsible for the disequilibrium from the others, and the algorithm chooses it by calculating the information gain. When some schema fails and it is not possible to differentiate it, CALM executes the adjustment method. This method reduces the expectations of an unstable decider schema in order to make it reliable again. The algorithm simply compares the activated schema's expectation with the real result perceived by the agent after the application of the schema, setting the incompatible expectation elements to the undefined value ('#'). As CALM always creates schemas with totally determined expectations (as a mirror of the result of their first application), the walk performed by the schema is a reduction of expectations, up to the point where only those elements remain that really represent the regular results of the action carried out in that context. The adjustment method, illustrated in Figure 3, changes the schema expectation (and consequently the anticipation predicted by the schema). Successive adjustments can reveal some unnecessary differentiations. After an adjustment, CALM needs to verify the possibility of regrouping some related schemas. It is the integration method that searches for two schemas with equivalent expectations covering different contexts in the same sub-tree, and joins these schemas into a single one, eliminating the differentiation. The method is illustrated in Figure 4. To test this basic CALM method, we have carried out some experiments in simple scenarios showing that the agent converges to the expected behavior, constructing correct knowledge to represent the deterministic regularities of the environment, as well as the regularities of its body sensations, and also the regular influence of its actions over both. We may consider that these results corroborate the mechanism's ability to discover regularities and to use this knowledge to adapt the agent's behavior. The agent has learned about the consequences of its actions in different situations, avoiding emotionally negative situations and pursuing the emotionally positive ones.
A detailed description of these experiments can be found in (Perotto and Alvares 2006). Dealing with the Unobservable In this section we present the extended mechanism, developed to deal with partially observable and partially deterministic environments (CALM-POPDE), where ∂ < 1, ϕ < 1, and also ω < 1. In the basic mechanism (CALM-COPDE), presented in the previous sections, when some schema fails, the first alternative is to differentiate it based on direct sensorimotor (context and action) elements. If it is not possible to do that, then the mechanism reduces the schema expectation, generalizing the incoherent anticipated elements. When CALM reduces the expectation of a given schema, it supposes that there is no deterministic regularity following the represented situation with respect to these incoherent elements, and that the related transformation is unpredictable. However, sometimes the error could be explained by considering the existence of some abstract or hidden property in the environment, which would be able to differentiate the situation but which is not directly perceived by the agent's sensors. In the extended mechanism, we introduce a new method which enables CALM to suppose the existence of a non-sensorial property in the environment, which it will represent as a synthetic element. When a new synthetic element is created, it is included as a new term in the context and expectation vectors of the schemas. Synthetic elements suppose the existence of something beyond the sensorial perception, which can be useful to explain non-equilibrated situations. They have the function of amplifying the differentiation possibilities. In this way, when dealing with partially observable environments, CALM has two additional challenges: a) to infer the existence of unobservable properties, which it will represent by synthetic elements, and b) to include these new elements in its predictive model. A good strategy for this task is to look at historical information. [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF] have proved that it is always possible to find sufficient small pieces of history to distinguish and identify all the underlying states in D-POMDPs. The first additional CALM-POPDE method is called abstract differentiation. When a schema fails in its prediction, and when it is not possible to differentiate it by the current set of considered properties, a new Boolean synthetic element is created, enlarging the context and expectation vectors. Immediately, this element is used to differentiate the incoherent situation from the others. The method attributes arbitrary values to this element in each differentiated schema. These values represent the presence or absence of some unobservable condition, necessary to determine the correct prediction in the given situation. The method is illustrated in Figure 5, where the new elements are represented by card suits. Once a synthetic element is created, it can be used in subsequent differentiations. A new synthetic element will be created only if the existing ones are already saturated. To avoid the problem of creating infinitely many new synthetic elements, CALM can do so only up to a given limit, after which it considers that the problematic anticipation is simply unpredictable, undefining the expectation in the related schemas by adjustment. The synthetic element is not associated with any sensorial perception. Consequently, its value cannot be observed.
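The enlargement of the vectors by a new synthetic element can be sketched as follows (Python, illustrative; schemas are represented here as simple dictionaries of strings):

def add_synthetic_element(schemas):
    """Abstract differentiation, first step: every context and expectation
    vector grows by one position; existing schemas ignore the new element."""
    for s in schemas:
        s['context'] += '#'
        s['expectation'] += '#'

def split_on_synthetic(schema):
    """Second step: the failing schema is replaced by two schemas that differ
    only in the arbitrary value ('0' or '1') assigned to the new, last,
    unobservable element (in CALM each of the two also receives one of the
    two conflicting expectations that triggered the differentiation)."""
    base = schema['context'][:-1]
    return [dict(schema, context=base + v) for v in ('0', '1')]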
The fact that the synthetic element cannot be observed can place the agent in ambiguous situations, where it does not know whether some relevant but unobservable condition (represented by this element) is present or absent. Initially, the value of a synthetic element is verified a posteriori (i.e. after the execution of the action in an ambiguous situation). Once the action is executed and the following result is verified, the agent can rewind and deduce what situation was really faced at the previous instant (disambiguated). Discovering the value of a synthetic element after the circumstance where this information was needed can seem useless, but in fact this delayed deduction feeds the second additional CALM-POPDE method, called abstract anticipation. If the unobservable property represented by this synthetic element has a regular behavior, then the mechanism can "backpropagate" the deduced value to the schema activated at the previous instant. The deduced synthetic element value will be included as a new anticipation in the previously activated schema. For example, at time t1 CALM activates the schema Ξ1 = (#0 + x → #1), where the context and expectation are composed of two elements (the first one synthetic and the second one perceptive), and one action. Suppose that the next situation, '#1', is ambiguous, because it excites both schemas Ξ2 = (♣1 + x → #0) and Ξ3 = (♦1 + x → #1). At this time, the mechanism cannot know the value of the synthetic element, which is crucial to determine what the real situation is. Suppose that the mechanism nevertheless decides to execute the action 'x' at time t2, and that it is followed by the sensorial perception '0' at t3. Now, at t3, the agent can deduce that the situation really dealt with at t2 was '♣1', and it can include this information in the schema activated at t1, in the form Ξ1 = (#0 + x → ♣1). Example Problem and Solution To exemplify the functioning of the proposed method we will use the flip problem, which is also used by [START_REF] Singh | Learning Predictive State Representations[END_REF] and [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF]. They suppose an agent living in a two-state universe. It has 3 actions (l, r, u) and 2 perceptions (0, 1). The agent does not have any direct perception to know what the underlying current state is. It has the perception 1 when the state changes, and the perception 0 otherwise. Action u keeps the state the same, action l causes the deterministic transition to the left state, and action r causes the deterministic transition to the right state. The flip problem is shown as a Mealy machine in Figure 6. CALM-POPDE is able to solve this problem. Firstly, it will try to predict the next observation as a function of its action and current observation. However, CALM quickly discovers that the perceptive observation is not useful to the model, and that there is not sufficient information to make correct anticipations. So, it creates a new synthetic element which will be able to represent the underlying left (♣) and right (♦) states. Figure 7 shows the first steps in the schematic tree construction for the flip problem. We suppose that the first movements do not betray the existence of a hidden property. These movements are: "r1, u0, l1, r1, l1, u0, r1". Figure 8 shows the first abstract differentiation, after the sequence "r0", and also the abstract anticipation, which refers to the immediately preceding sequence ("r1").
Figure 9 shows the abstract anticipation coming from the repetition of "r0". Figure 10 shows a new abstract differentiation and its anticipations, following "l1, l0, l0". Finally, Figure 11 shows the final solution, with the last differentiation resulting from the execution of "u0, l0, r1, u0, r0". In a second problem, we consider a robot which has the mission of buying cans from a drink machine. It has 3 alternative actions: "insert a coin" (i), "press the button" (p), or "go to another machine" (g); it can see the state of an indicator light on the machine, "off" or "on"; and it perceives whether a can is returned (☺) or not. There are 2 hidden properties: "no coin inserted" or "coin inserted", and "machine ok" or "machine out of service". The light turns on just during one time cycle, and only if the agent presses the button without having inserted a coin before; otherwise the light indicator is always off. The goal in this problem is to obtain a determined number of drinks without losing coins in bad machines. This example poses two challenges to the agent. First, the machine does not present any directly perceptible change when the coin is inserted. Since the agent does not have any explicit memory, it apparently faces the same situation both before and after having inserted the coin. However, this action changes the value of an internal property of the drink machine. Precisely, the disequilibrium occurs when the agent presses the button: from an instantaneous point of view, sometimes the drink arrives and the goal is attained, but sometimes only the indicator light turns on. To reach its goal, the agent needs to coordinate a chain of actions (insert the coin and then press the button), and it can do so by using a synthetic element which represents this internal condition of the machine. Second, the agent does not have direct perceptive indications to know whether the machine is working or out of service. The agent needs to interact with the machine to discover its operational condition. This second difficulty is a little different from the first, but it can be solved in the same way. The agent creates a test action, which enables it to discover this hidden property before inserting the coin. It can do that by pressing the button. Table 1 presents the set of decider schemas that CALM learns for the drink machine problem.
We remark that Ξ4 presents the unpredictable transformation that follows the action g (go to another machine), due to the uncertainty about the operational state of the next machine. The test is represented by Ξ1 and Ξ2, which can be simultaneously activated because of the ambiguity of the result that follows the activation of Ξ4, but which anticipate the operational state of the machine. The learned decider schemas are:
Ξ1 = ( # # # + p → )
Ξ2 = ( # # + p → )
Ξ3 = ( # # + p → ☺)
Ξ4 = ( # # # # + g → # )
Ξ5 = ( # # # + i → )
Ξ6 = ( # # # + i → )
Related Works CALM-POPDE is an original mechanism that enables an agent to incrementally create a world model during the course of its interaction. This work is a continuation of our previous work (Perotto [START_REF] Alvares | Incremental Inductive Learning in a Constructivist Agent[END_REF]), extended to deal with partially observable environments. The pioneering work on constructivist AI was presented by [START_REF] Drescher | Made-Up Minds: A Constructivist Approach to Artificial Intelligence[END_REF]. He proposed the first constructivist agent architecture (called the schema mechanism), which learns a world model through an exhaustive statistical analysis of the correlation between all the context elements observed before each action and all the resulting transformations. Drescher also suggested the need to discover hidden properties by creating 'synthetic items'. The schema mechanism represents a strongly coherent model; however, there are no theoretical guarantees of convergence. Another restriction is the computational cost of the kind of operations used in the algorithm. The need for space and time resources increases exponentially with the problem size. Nevertheless, many other researchers have presented alternative models inspired by Drescher, such as [START_REF] Yavuz | PAL: A Model of Cognitive Activity[END_REF], [START_REF] Birk | Schemas and Genetic Programming[END_REF], (Morrison et al. 2001), [START_REF] Chaput | The Constructivist Learning Architecture[END_REF] and [START_REF] Holmes | Schema Learning: Experience-based Construction of Predictive Action Models[END_REF]. Our mechanism (CALM) differs from these previous works because we limit the problem to the discovery of deterministic regularities (even in partially deterministic environments), and in this way we can implement direct induction methods in the agent learning mechanism. This approach presents a low computational cost, and it allows the agent to learn incrementally and to find high-level regularities. We are also inspired by [START_REF] Rivest | Diversity-based inference of finite automata[END_REF] and (Ron 1995), who suggested the notion of state signature as a historical identifier for the DFA states, an idea strongly reinforced recently by [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF], who developed the idea of learning anticipations through the analysis of relevant pieces of history. Discussion and Next Steps The CALM mechanism can provide autonomous adaptive capability to an agent, because it is able to incrementally construct knowledge representing the deterministic regularities observed during its interaction with the environment, even in partially deterministic universes. We have also presented an extension to the basic CALM mechanism that enables it to deal with partially observable environments, detecting high-level regularities.
The strategy is the induction and prediction of unobservable properties, represented by synthetic elements. Synthetic elements enable the agent to go beyond the limit of instantaneous and sensorimotor regularities. In the agent's mind, synthetic elements can represent 3 kinds of "unobservable things": (a) hidden properties in partially observed worlds, or sub-environment identifiers in discrete non-stationary worlds; (b) markers for necessary steps in a sequence of actions, or for different possible agent points of view; and (c) abstract properties, which do not exist as such, but which are powerful and useful tools for the agent, enabling it to organize the universe at higher levels. With these new capabilities, CALM becomes able to go beyond sensorial perception, constructing more abstract terms to represent the universe, and to understand its own reality at more complex levels. CALM can be very efficient at constructing models in environments that are partially but highly deterministic (1 > ∂ >> 0), partially but highly observable (1 > ω >> 0), and whose properties are partially but highly informative (1 > ϕ >> 0). Several problems found in the real world present these characteristics. Currently, we are improving CALM to enable it to form action sequences by chaining schemas. This will allow the creation of composed actions and plans. We are also including methods to search for good policies of actions using the world model constructed by the learning functions. The next research steps comprise: formally demonstrating the mechanism's efficiency and correctness; making comparisons between CALM and related solutions proposed by other researchers; and analyzing the mechanism's performance in more complex problems. Future work can include the extension of CALM to deal with non-deterministic regularities, noisy environments, and continuous domains.

Figure 1. Schematic Tree. Each node is a schema composed of three vectors: context, action and expectation. The leaf nodes are decider schemas.

Figure 2. Differentiation method; (a) experimented situation and action; (b) activated schema; (c) real observed result; (d) sub-tree generated by differentiation.

Figure 3. Adjust method; (a) experimented situation and action; (b) activated schema; (c) real observed result; (d) schema expectation reduction after adjustment.

Figure 4. Integration method; (a) sub-tree after some adjustment; (b) an integrated schema substitutes the sub-tree.

Figure 5. Synthetic element creation method; (d) incremented context and expectation vectors, and differentiation using synthetic element.

Figure 6. The flip problem.

Figure 7. Initial schematic tree for the flip problem. The vector represents synthetic elements (Fh), perceptible elements (Fp) and actions (A). The decider schemas show the expectations.

Figure 11. Final schematic tree to solve the flip problem.

Table 1. Schemas for the drink machine problem.
38,232
[ "18738", "1118575" ]
[ "396069", "392282", "93269", "392282", "448187", "43688" ]
01762458
en
[ "spi" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01762458/file/LEM3_COST_2018_MERAGHNI.pdf
E Tikarrouchine G Chatzigeorgiou F Praud B Piotrowski Y Chemisky F Meraghni email: [email protected] Three-dimensional FE 2 method for the simulation of non-linear, rate-dependent response of composite structures Keywords: Multi-scale finite element computation, FE 2 method, periodic homogenization, composite materials, elastoviscoplastic behavior, ductile damage In this paper, a two-scale Finite Element method (FE 2 ) is presented to predict the non-linear macroscopic response of 3D composite structures with periodic microstructure that exhibit a time-dependent response. The sensitivity to the strain rate requires a homogenization scheme to bridge the scales between the macroscopic boundary conditions applied and the local evaluation of the strain rate. In the present work, the effective response of composite materials whose matrix has a local elasto-viscoplastic behavior with ductile damage is analyzed using periodic homogenization, solving simultaneously finite element problems at the microscopic scale (unit cell) and at the macroscopic scale. This approach can integrate any kind of periodic microstructure with any type of non-linear behavior for the constituents (without the consideration of non-linear geometric effects), allowing the treatment of complex mechanisms that can occur in every phase and at their interface. The numerical implementation of this simulation strategy has been performed with a parallel computational technique in ABAQUS/Standard, with the implementation of a set of dedicated scripts. The homogenization process is performed using a user-defined constitutive law that solves a set of full-field non-linear simulations of a unit cell and performs the necessary homogenization of the mechanical quantities. The effectiveness of the method is demonstrated with three examples of 3D composite structures whose matrix exhibits plastic or viscoplastic behavior with ductile damage. In the first example, the numerical results obtained by this full-field approach are compared with a semi-analytical solution on an elastoplastic multilayer composite structure. The second example investigates the macroscopic response of a complex viscoplastic composite structure with ductile damage and is compared with the mean-field Mori-Tanaka method. Finally, a 3D corner structure made of a periodically aligned short-fibre composite is analysed under a complex loading path. These numerical simulations illustrate the capabilities of the FE 2 strategy in the non-linear regime, when time-dependent constitutive models describe the response of the constituents. Introduction Polymer based composite materials are considered to be a good technological solution for the automotive and aeronautic industries, thanks to their structural durability and their lightness. A major preoccupation of these industries is to predict the response of such structures under in-service loadings. This requires the development of predictive models that are able to capture the impact of the microstructure on the mechanical response, and the proper identification of the mechanical properties of the constituents. To this end, advanced modelling and simulation methods that integrate the effect of the microstructure are an active area of research.
According to the bibliography, several numerical approaches have been proposed for the numerical simulation of the non-linear response of polymer based composite structures including: i) Phenomenological models, which predict the overall response of the composite materials without taking into account the effect of the different constituents observed at the microscopic scale. Several authors have proposed constitutive models that integrate various rheologies and deformation mechanisms, i.e. viscoelasticity [START_REF] Moreau | Analysis of thermoelastic effects accompanying the deformation of pmma and pc polymers[END_REF][START_REF] Akhtar | Thermo-mechanical large deformation response and constitutive modeling of viscoelastic polymers over a wide range of strain rates and temperatures[END_REF], viscoplasticity [START_REF] Duan | A uniform phenomenological constitutive model for glassy and semi crystalline polymers[END_REF][START_REF] Achour | Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers[END_REF][START_REF] Drozdov | Cyclic viscoplasticity of solid polymers: The effects of strain rate and amplitude of deformation[END_REF], coupled viscoelasticity and viscoplasticity [START_REF] Miled | Coupled viscoelastic-viscoplastic modeling of homogeneous and isotropic polymers: Numerical algorithm and analytical solutions[END_REF][START_REF] Miled | Micromechanical modeling of coupled viscoelastic-viscoplastic composites based on an incrementally affine formulation[END_REF], or even both coupled viscoelasticity, viscoplasticity and damage [START_REF] Launay | Cyclic behaviour of short glass fibre reinforced polyamide: Experimental study and constitutive equations[END_REF][START_REF] Launay | Multiaxial fatigue models for short glass fiber reinforced polyamide -part i: Nonlinear anisotropic constitutive behavior for cyclic response[END_REF][START_REF] Krairi | A thermodynamically-based constitutive model for thermoplastic polymers coupling viscoelasticity, viscoplasticity and ductile damage[END_REF][START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF]; ii) Multi-scale methods, that can be classified into two main categories: mean-field and full field approaches. The mean-field approaches are used to describe the behavior of composites for certain categories of microstructures through the Mori-Tanaka Method [START_REF] Mori | Average stress in matrix and average elastic energy of materials with misfitting inclusions[END_REF][START_REF] Doghri | Homogenization of two-phase elasto-plastic composite materials and structures: Study of tangent operators, cyclic plasticity and numerical algorithms[END_REF] or the self-consistent scheme [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF]15,[START_REF] Walpole | On bounds for the overall elastic moduli of inhomogeneous systems-i[END_REF][START_REF] Milton | Variational bounds on the effective moduli of anisotropic composites[END_REF]. 
These methodologies have been developed in order to estimate the overall behavior of the composite using average stress and strain quantities for each material phase [START_REF] Castañeda | The effective mechanical properties of nonlinear isotropic composites[END_REF][START_REF] Meraghni | Micromechanical modelling of matrix degradation in randomly oriented discontinuous-fibre composites[END_REF][START_REF] Meraghni | Implementation of a constitutive micromechanical model for damage analysis in glass mat reinforced composite structures[END_REF]. These methods have been proved to be accurate for the linear cases. However, for non-linear constitutive laws, especially when the matrix phase exhibits a non-linear behavior, the response of these approaches is inaccurate. It is commonly observed in the literature that the response of the composite obtained by mean-field methods appears to be stiffer than the reality especially when the matrix is ductile and the reinforcements are stiffer [START_REF] Gavazzi | On the numerical evaluation of eshelby's tensor and its application to elastoplastic fibrous composites[END_REF][START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF][START_REF] Chaboche | On the capabilities of mean-field approaches for the description of plasticity in metal matrix composites[END_REF]. The numerical simulation of these composite systems has necessitated the development of full-field approaches. To determine the response of a composite structure, accounting for the description of the microstructure, the so-called FE 2 method, appear to be an adequate solution. The major benefit of the FE 2 method is the ability to analyse complex mechanical problems with heterogeneous phases that present a variety of behavior at different scales. This idea was originally introduced by Feyel [START_REF] Frédéric | Multiscale fe2 elastoviscoplastic analysis of composite structures[END_REF], then this method was used and developed by several authors, for example [START_REF] Feyel | Fe2 multiscale approach for modelling the elastoviscoplastic behaviour of long fibre sic/ti composite materials[END_REF][START_REF] Nezamabadi | A multilevel computational strategy for handling microscopic and macroscopic instabilities[END_REF][START_REF] Nezamabadi | A multiscale finite element approach for buckling analysis of elastoplastic long fiber composites[END_REF][START_REF] Asada | Fully implicit formulation of elastoplastic homogenization problem for two-scale analysis[END_REF][START_REF] Tchalla | An abaqus toolbox for multiscale finite element compu-tation[END_REF][START_REF] Schröder | Algorithmic two-scale transition for magneto-electro-mechanically coupled problems: Fe2-scheme: Localization and homogenization[END_REF][START_REF] Papadopoulos | The impact of interfacial properties on the macroscopic performance of carbon nanotube composites. a fe2-based multiscale study[END_REF]. The majority of these works consider two-dimensional structures, which if they provide a good study case for the analysis of the capabilities of the method, is of limited interest for practical use for the prediction of the overall response of heterogeneous materials and composites, since the spatial arrangement of the phases is mostly three-dimensional. In this paper, a two-level FE 2 method, based on the concept of periodic homogenization under the small strain assumption is implemented in a commercial FE code (ABAQUS/Standard). 
The method predicts the 3D non-linear macroscopic behavior of a composite with periodic microstructure by considering that each macroscopic integration point is a material point where the characteristics at the macroscopic scale are represented by its own unit cell, which includes the material and geometrical characteristics of the constituents (fibre, matrix) in the microstructure. Therefore, a multilevel finite element analysis has been developed using an implicit resolution scheme, with the use of a Newton-Raphson algorithm to solve simultaneously the non-linear system of equations on the two scales (macroscopic and microscopic). The main advantage of this methodology is that it can account for any type of non-linear behavior of the constituents (plasticity, viscoelasticity, viscoplasticity and damage), as well as any type of periodic microstructure. The proposed FE 2 approach is implemented through a parallelization technique, leading to a significant reduction of the computational time. The layout of this paper is as follows: in section 2, the theoretical formulation of the homogenization theory is described as well as the principle of scale transition between the local and the global fields. The section also presents the rate dependent constitutive law considered for the matrix phase. In section 3, details of the numerical implementation of the FE 2 method is given for a 3D non-linear problem in ABAQUS/Standard with the parallel implementation. In section 4, the approach is validated by comparing the FE 2 results with semi-analytical method on 3D multilayer composite structure. Afterwards, an example of 3D composite structure exhibiting non-uniform strain fields, in which the microstructure consists of an elastoviscoplastic polymer matrix with ductile damage, reinforced by short glass fibres is presented. The numerical results of the simulation are compared with the Mori-Tanaka method. Finally, the capabilities of this method are shown by simulating the mechanical response of a more complex structure under complex loading path with different strain rate. Theoretical background and Scale transition In this section, the periodic homogenization principle, as well as the transition between the two scales (microscopic and macroscopic) are presented. The principal objective is to determine the macroscopic quantities (stress and tangent modulus) that are obtained through periodic homogenization by accounting for the different mechanisms that exist in the microscopic level, as non-linear plastic/viscoplastic behavior with ductile damage of the matrix. After that, the local constitutive law of each constituent is presented, where a linear elastic law is chosen for the reinforcement and a constitutive model that incorporate elastoviscoplasticity coupled with ductile damage for the matrix. Theoretical background for periodic homogenization The objective of the periodic homogenization theory is to define a fictitious homogenized medium having an equivalent response of the heterogeneous medium that is representative of the microstructure. A periodic medium is characterized by a repeated unit cell in the three spatial directions, which forms an unit cell. The theory of periodic homogenization is valid as long as the separation between the scales exists, i.e. the sizes of the unit cell are much smaller than the macroscopic sizes of the medium (x >> x) (Fig. 1). In this paper, the notation (•) will be used to denote macroscopic quantities. 
The motion of any macroscopic and microscopic material points M(x̄) and M(x̄, x), respectively, is governed by the macroscopic and the microscopic equations (Tab. 1). In Tab. 1, σ, ε, σ̄ and ε̄ represent the microscopic and the macroscopic stress and strain tensors respectively, b v is the body force, and V and V̄ are the volumes of the micro and the macro structure.

Equilibrium (macro, ∀ x̄ ∈ V̄): $\mathrm{div}_{\bar{x}}\,\bar{\sigma} + b_v = 0$ ; (micro, ∀ x̄ ∈ V̄, ∀ x ∈ V): $\mathrm{div}_{x}(\sigma) = 0$
Kinematics (macro): $\bar{\varepsilon} = \tfrac{1}{2}\left(\mathrm{Grad}_{\bar{x}}(\bar{u}) + \mathrm{Grad}_{\bar{x}}^{T}(\bar{u})\right)$ ; (micro): $\varepsilon = \tfrac{1}{2}\left(\mathrm{Grad}_{x}(u) + \mathrm{Grad}_{x}^{T}(u)\right)$
Constitutive law (macro): $\bar{\sigma} = \bar{F}(\bar{x}, \bar{\varepsilon})$ ; (micro): $\sigma = F(\bar{x}, x, \varepsilon)$
Strain energy rate (macro): $\dot{\bar{W}}_{\varepsilon} = \bar{\sigma} : \dot{\bar{\varepsilon}}$ ; (micro): $\dot{W}_{\varepsilon} = \sigma : \dot{\varepsilon}$

Moreover, x, x̄, u and ū are the microscopic and the macroscopic positions and displacement vectors, while F and F̄ are operators that define the micro and macro relationships between the stress and strain. Both F and F̄ are considered non-linear operators in this work. The homogenization theory attempts to define the F̄ operator, which characterizes the macroscopic behavior, from the local behaviors defined by the F operator. In order to make this possible, it is necessary to introduce the concept of scale transition between the macro and the micro scales. According to the average stress and strain theorems, it can be demonstrated that the stress and strain averages within the unit cell are equal to the stresses and strains corresponding to uniform tractions and linear displacements, respectively, applied at its boundaries. These averages represent the macroscopic stress and strain tensors respectively. The relationships between the two scales are given by the following equations:

$$\bar{\sigma} = \langle\sigma\rangle = \frac{1}{V}\int_{V} \sigma\,\mathrm{d}V = \frac{1}{V}\int_{\partial V} \sigma\cdot n \otimes x\,\mathrm{d}S \qquad (1)$$

$$\bar{\varepsilon} = \langle\varepsilon\rangle = \frac{1}{V}\int_{V} \varepsilon\,\mathrm{d}V = \frac{1}{2V}\int_{\partial V} \left(u \otimes n + n \otimes u\right)\mathrm{d}S \qquad (2)$$

where n is the outgoing normal of the unit cell boundary ∂V, ⟨•⟩ is the mean operator and ⊗ the dyadic product. Non-linear scale transition: incremental approach Since the homogenization is based on the separation between the different scales, the connection between these scales (microscopic and macroscopic problems) should be defined in order to be able to predict the overall behavior of the structure. Microscopic problem The periodicity condition implies that the displacement field u of any material point located at x can be described by an affine part, to which a periodic fluctuation u′ is added, as presented in Fig. 2. The periodic fluctuating quantity u′ takes the same value on each pair of opposite parallel sides of the unit cell, and the strain average produced by u′ is null [Eq. 5]. Therefore, the full strain average is indeed equal to the macroscopic strain [Eq. 6].

$$u(\bar{x}, x, t) = \bar{\varepsilon}(\bar{x}, t)\cdot x + u'(\bar{x}, x, t) \qquad (3)$$
$$\varepsilon(u) = \bar{\varepsilon} + \varepsilon(u') \qquad (4)$$
$$\langle\varepsilon(u')\rangle = \frac{1}{V}\int_{V} \varepsilon(u')\,\mathrm{d}V = 0 \qquad (5)$$
$$\langle\varepsilon(u)\rangle = \bar{\varepsilon} + \langle\varepsilon(u')\rangle = \bar{\varepsilon} \qquad (6)$$

The traction vector σ·n is anti-periodic and satisfies the conditions of equilibrium within the unit cell. The micro problem is formulated as follows:

$$\begin{cases} \sigma = F\left(x, \varepsilon(u(x))\right) & \forall x \in V,\\ \mathrm{div}_{x}\left(\sigma(x)\right) = 0 & \forall x \in V,\\ u_i - u_j = \bar{\varepsilon}\cdot\left(x_i - x_j\right) & \forall x \in V \end{cases} \qquad (7)$$

where u i , u j , x i and x j are the displacements and the positions of each pair of opposite parallel material points of the unit cell boundary, respectively, while ε̄ is the macroscopic strain. The relationship between the microscopic stress and the microscopic strain in incremental approaches is provided by the linearised expression [Eq. 8]:

$$\Delta\sigma(x) = C_t(x) : \Delta\varepsilon(x) \qquad \forall x \in V, \qquad (8)$$

where C t is the local tangent operator tensor, defined as the numerical differentiation of the stress with respect to the total strain.
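In a discrete finite element setting, the averaging relations (1)-(2) reduce to a volume-weighted sum over the integration points of the unit cell. The short sketch below assumes that the microscopic fields and the integration-point volumes have already been extracted from the unit cell computation; the array and function names are ours and serve only to illustrate the operation.

```python
import numpy as np

def volume_average(field_gp: np.ndarray, vol_gp: np.ndarray) -> np.ndarray:
    """Discrete counterpart of eqs. (1)-(2): <f> = (1/V) * sum_g f_g * v_g.

    field_gp: (n_gp, 6) array of a tensor field in Voigt notation at the
              integration points of the unit cell.
    vol_gp:   (n_gp,) array of integration-point volumes.
    """
    return (field_gp * vol_gp[:, None]).sum(axis=0) / vol_gp.sum()

# sigma_macro = volume_average(sigma_gp, vol_gp)   # macroscopic stress, eq. (1)
# eps_macro   = volume_average(eps_gp, vol_gp)     # macroscopic strain, eq. (2)
```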
Macroscopic problem The relationship between macroscopic stress and strain cannot be explicitly provided by a stiffness tensor. Nevertheless, for a given macroscopic strain, the macroscopic stress response can be computed using an implicit resolution scheme, where the local behavior is linearized and corrected at each strain increment [Eq. 8]. Then, using the same incremental methodology, the macroscopic behavior can also be linearized in order to predict the next increment. This linearization requires writing the macroscopic constitutive law in the non-linear form of Eq. 9. The equilibrium at the macroscopic level in the absence of body forces is given by Eq. 10.

$$\Delta\bar{\sigma}(\bar{x}) = \bar{C}_t(\bar{x}) : \Delta\bar{\varepsilon}(\bar{x}) \qquad \forall \bar{x} \in \bar{V}, \qquad (9)$$
$$\mathrm{div}_{\bar{x}}\left(\Delta\bar{\sigma}(\bar{x})\right) = 0 \qquad \forall \bar{x} \in \bar{V}, \qquad (10)$$

where Δσ̄(x̄) is the macroscopic stress tensor associated with the point x̄ of the macrostructure at each macroscopic strain increment. The relationship between the macroscopic stress and strain is given in Voigt notation in Eq. 11. The macroscopic tangent operator C̄ t is recovered by computing the macroscopic stress resulting from the six elementary strain states written in Eq. 12 (also in Voigt notation) at each macroscopic strain increment:

$$\begin{pmatrix} \Delta\bar{\sigma}_1 \\ \Delta\bar{\sigma}_2 \\ \Delta\bar{\sigma}_3 \\ \Delta\bar{\sigma}_4 \\ \Delta\bar{\sigma}_5 \\ \Delta\bar{\sigma}_6 \end{pmatrix} = \begin{pmatrix} \bar{C}_{t,11} & \cdots & \bar{C}_{t,16} \\ \vdots & \ddots & \vdots \\ \bar{C}_{t,61} & \cdots & \bar{C}_{t,66} \end{pmatrix} \times \begin{pmatrix} \Delta\bar{\varepsilon}_1 \\ \Delta\bar{\varepsilon}_2 \\ \Delta\bar{\varepsilon}_3 \\ 2\,\Delta\bar{\varepsilon}_4 \\ 2\,\Delta\bar{\varepsilon}_5 \\ 2\,\Delta\bar{\varepsilon}_6 \end{pmatrix} \qquad (11)$$

$$\begin{cases} \Delta\bar{\varepsilon}^{(1)} = (K\;0\;0\;0\;0\;0)^T \\ \Delta\bar{\varepsilon}^{(2)} = (0\;K\;0\;0\;0\;0)^T \\ \Delta\bar{\varepsilon}^{(3)} = (0\;0\;K\;0\;0\;0)^T \\ \Delta\bar{\varepsilon}^{(4)} = (0\;0\;0\;K\;0\;0)^T \\ \Delta\bar{\varepsilon}^{(5)} = (0\;0\;0\;0\;K\;0)^T \\ \Delta\bar{\varepsilon}^{(6)} = (0\;0\;0\;0\;0\;K)^T \end{cases} \qquad (12)$$

Then, the ij component of the tangent operator is given by the i-th component of the stress vector calculated with the j-th elementary strain state, divided by the j-th component of the strain vector of the elementary strain state:

$$\bar{C}_{t,ij} = \frac{\Delta\bar{\sigma}^{(j)}_i}{K}, \qquad i, j = 1, 2, 3, 4, 5, 6. \qquad (13)$$

Usually, K is chosen to be equal to 1. Local elasto-viscoplastic behavior with ductile damage for the matrix The constitutive law of the matrix material is defined through a thermodynamically based phenomenological model for viscoplasticity and ductile damage in semi-crystalline polymers [START_REF] Lemaitre | Mechanics of solid materials[END_REF][START_REF] Krairi | A thermodynamically-based constitutive model for thermoplastic polymers coupling viscoelasticity, viscoplasticity and ductile damage[END_REF][START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF]. These materials exhibit a dissipative behavior that combines solid and fluid properties with some apparent stiffness reduction. The model is described by the rheological scheme given in Fig. 3. It is composed of one single linear spring, subjected to an elastic strain ε e , and a viscoplastic branch, subjected to a viscoplastic strain ε p , which consists of a frictional element, a non-linear spring and a non-linear dash-pot. The linear spring and the viscoplastic branch are positioned in series.
The model is formulated within the thermodynamics framework [START_REF] Lemaitre | Mechanics of solid materials[END_REF][START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF]. The state laws are obtained by differentiation of the Helmholtz potential with respect to the state variables. This potential is formulated as the sum of the stored energies of the spring and the viscoplastic branch. ρψ ε, r, ε p , D = 1 2 ε -ε p : (1 -D) C e : ε -ε p + r 0 R(ξ) dξ (14) The internal state variables ε p , r and D represent the viscoplastic strain, effective equivalent viscoplastic strain variable and the damage variable respectively. C e is the initial fourth order stiffness tensor of the single spring, classically defined for bulk isotropic materials. R is the hardening function, chosen under the form of the power law function, that must be increasing, positive and vanishes at r = 0: R (r) = Kr n , (15) where K and n are the viscoplastic material parameters. According to the second principle of thermodynamics, dissipation is always positive or null (Clausius Duhem inequality). Assuming that the mechanical and thermal dissipations are uncoupled, the rate of the mechanical dissipated energy Φ is positive or zero and is given by the difference between the strain energy rate Ẇε and the stored energy rate ρ ψ (Eq. 16). Φ = Ẇε -ρ ψ = σ : ε -ρ ∂ψ ∂ε : ε + ∂ψ ∂ε p : εp + ∂ψ ∂r : ṙ + ∂ψ ∂D : Ḋ = σ : εp -Rṙ + Y Ḋ ≥ 0. ( 16 ) The viscoplasticity and damage are considered to be coupled phenomena [START_REF] Lemaitre | Coupled elasto-plasticity and damage constitutive equations[END_REF][START_REF] Krairi | A thermodynamically-based constitutive model for thermoplastic polymers coupling viscoelasticity, viscoplasticity and ductile damage[END_REF]. Consequently, the evolution of ε p , r and D are described by the normality of a convex indicative function that satisfies the above inequality: F (σ, R, Y; D) = eq(σ) 1 -D -R -R 0 f (σ, R; D) + S (β + 1) (1 -D) Y S β+1 f D (Y;D) (17) In the last expression, the term f (σ, R; D) denotes the yield criterion function which activates the mechanism (ṙ > 0 if f > 0, else ṙ = 0). The function f is expressed in the effective stress space. f D is an additive term that takes into account the evolution of the damage at the same time as the viscoplasticity. eq(σ) denotes the von Mises equivalent stress, R 0 denotes the yield threshold, while S and β are damage related material parameters. The viscous effect is introduced by considering a relation between the positive part of f and ṙ through a function Q. This function is chosen under the form of the power law: f + = Q (ṙ) , Q (ṙ) = Hṙ m (18) where H and m are the material parameters. The function Q (ṙ) must be increasing, positive and null at ṙ = 0. This type of model allows to capture some well known effects of thermoplastic polymers, namely the rate effect through the creep and relaxation phenomena, as well as the stiffness reduction due to the ductile damage. Tab. 2 summarizes the thermodynamic variables, the evolution laws and the von Mises type viscoplastic criterion of the model. In the table, Dev(σ) denotes the deviatoric part of the stress. 
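To make the coupled viscoplasticity-damage evolution more tangible, the laws above can be integrated in a deliberately simplified setting. The sketch below is a crude one-dimensional, forward-Euler reduction written for illustration only: the variable names, the uniaxial form of the equations and the usual quadratic form of the elastic energy release rate Y are our own choices, and it is not the implicit return-mapping scheme used in the actual implementation.

```python
import numpy as np

def integrate_uniaxial(strains, dt, E, R0, K, n, H, m, S, beta):
    """Forward-Euler, uniaxial sketch of the elasto-viscoplastic-damage model:
    sigma = (1-D) E (eps - eps_p),  f = |sigma|/(1-D) - K r^n - R0,
    r_dot obtained from f+ = H r_dot^m, with eps_p and D driven by r_dot."""
    eps_p, r, D = 0.0, 0.0, 0.0
    out = []
    for eps in strains:
        sigma = (1.0 - D) * E * (eps - eps_p)
        f = abs(sigma) / (1.0 - D) - K * r**n - R0        # yield function
        if f > 0.0:                                       # mechanism active
            r_dot = (f / H)**(1.0 / m)                    # viscous law f+ = H r_dot^m
            Y = 0.5 * E * (eps - eps_p)**2                # elastic energy release rate
            eps_p += np.sign(sigma) * r_dot / (1.0 - D) * dt
            D += (Y / S)**beta * r_dot / (1.0 - D) * dt
            r += r_dot * dt
        out.append((1.0 - D) * E * (eps - eps_p))
    return np.array(out)

# Example with purely illustrative parameter values (not those of Tab. 4):
# strains = np.linspace(0.0, 0.05, 500)
# stress = integrate_uniaxial(strains, dt=0.01, E=2000.0, R0=10.0, K=100.0,
#                             n=0.2, H=50.0, m=0.3, S=6.0, beta=2.0)
```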
Observable state variable: ε ; associated variable: σ = ρ ∂ψ/∂ε = (1 − D) C e : (ε − ε p ).
State variables, associated variables and evolution laws:
r ; R = ρ ∂ψ/∂r = R(r) ; ṙ = −(∂F/∂R) λ̇ = λ̇
ε p ; −σ = ρ ∂ψ/∂ε p ; ε̇ p = (∂F/∂σ) λ̇ = (3/2) (Dev(σ)/eq(σ)) ṙ/(1 − D)
D ; Y = ρ ∂ψ/∂D ; Ḋ = (∂F/∂Y) λ̇ = (Y/S)^β ṙ/(1 − D)
Multiplier: λ̇ = ṙ ; Criterion: f(σ, R; D) = eq(σ)/(1 − D) − R − R 0 ; Active (λ̇ > 0 if f > 0): f + = Q(ṙ)

Multi scale FE computation and numerical implementation To predict the macroscopic behavior of a composite structure, taking into account the effect of the microstructure, a homogenization scheme within the FE 2 framework is an accurate solution. According to Feyel [START_REF] Frédéric | Multiscale fe2 elastoviscoplastic analysis of composite structures[END_REF], this approach considers that the macroscopic problem and the microscopic heterogeneous unit cell are solved simultaneously. On the macroscopic scale, the material is assumed to be a homogenized medium with non-linear behavior. The macroscopic response is calculated by solving an appropriate periodic boundary value problem at the microscopic level within a homogenization scheme. The important macroscopic information (strain) passes to the unit cell through the constraint drivers. The concept of constraint drivers is explained in the next subsection. It is pointed out that the response at the macroscopic scale is obtained by the homogenization process and is frequently called "homogenized". The macroscopic fields and tangent moduli depend on the microscopic response of each unit cell. Since the macroscopic strains are heterogeneous in the structure, the homogenized response varies at every macroscopic point, providing a type of spatial heterogeneity. Unit cell computations for periodic homogenization using the concept of constraint drivers The method of constraint drivers is a numerical technique which makes it possible to apply any state of macroscopic stress, strain or even mixed stress/strain on a periodic finite element unit cell. A more detailed exposition of this concept is given in [START_REF] Li | On the unit cell for micromechanical analysis of fibre-reinforced composites[END_REF][START_REF] Li | General unit cells for micromechanical analyses of unidirectional composites[END_REF][START_REF] Shuguang | Unit cells for micromechanical analyses of particle-reinforced composites[END_REF]. In the finite element framework, a unit cell for periodic media should be associated with a periodic mesh. This means that for each border node, there must be another node at the same relative position on the opposite side of the unit cell. The aim of the constraint drivers in a periodic homogenization approach is to apply a macroscopic strain ε̄ on the unit cell, taking into account the periodic boundary conditions. In practice, a displacement gradient is applied between each pair of opposite parallel border nodes (denoted by the indices i and j). This gradient is directly related to the macroscopic strain tensor ε̄ ij by the following general kinematic relationship:

$$u'_i = u'_j \iff u_i - u_j = \bar{\varepsilon}\cdot\left(x_i - x_j\right) \qquad \forall x \in V \qquad (19)$$

The proposed method introduces the six components of the macroscopic strain tensor as additional degrees of freedom (constraint drivers) that are linked to the mesh of the unit cell using the kinematic equation 19. The displacements of these additional degrees of freedom, noted u cd 11 , u cd 22 , u cd 33 , u cd 12 , u cd 13 and u cd 23 , take the values of the macroscopic strain components ε̄ 11 , ε̄ 22 , ε̄ 33 , 2ε̄ 12 , 2ε̄ 13 and 2ε̄ 23 , respectively, and they permit the direct recovery of the corresponding components of the macroscopic stress tensor (Fig. 4) at the end of the unit cell calculations. The dual forces of the constraint drivers are noted F cd 11 , F cd 22 , F cd 33 , F cd 12 , F cd 13 and F cd 23 , respectively.
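In a scripted setting, the two practical ingredients of this constraint-driver technique are the pairing of opposite boundary nodes of the periodic mesh and the displacement jump imposed between each pair for a given macroscopic strain. The fragment below sketches both in plain numpy, independently of any particular finite element package; it is an illustration with names of our own choosing, not the constraint setup of the reference implementation.

```python
import numpy as np

def pair_periodic_nodes(coords, axis, tol=1e-8):
    """Matches each node on the 'plus' face of the unit cell (max coordinate
    along `axis`) with the node on the 'minus' face that has the same remaining
    coordinates, as required by a periodic mesh."""
    plus = np.where(np.isclose(coords[:, axis], coords[:, axis].max(), atol=tol))[0]
    minus = np.where(np.isclose(coords[:, axis], coords[:, axis].min(), atol=tol))[0]
    pairs = []
    for i in plus:
        rest_i = np.delete(coords[i], axis)
        for j in minus:
            if np.allclose(rest_i, np.delete(coords[j], axis), atol=tol):
                pairs.append((i, j))
                break
    return pairs

def imposed_jump(eps_macro, x_i, x_j):
    """Right-hand side of eq. (19): the displacement difference u_i - u_j
    prescribed between two paired nodes for a given macroscopic strain tensor."""
    return eps_macro @ (x_i - x_j)
```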
Dividing the dual force by the unit cell volume leads to the corresponding macroscopic stresses. u Concept and numerical algorithm of FE 2 method After defining the concept of constraint drivers, the implementation of a two-scale finite element approach is the next step in the computational homogenization framework. The proposed method lies within the general category of multi-scale models. In this method the macroscopic constitutive behavior is calculated directly from the unit cell, providing the geometry and the phenomenological constitutive equations of each constituent. The FE 2 method consists of three main steps according to [START_REF] Frédéric | Multiscale fe2 elastoviscoplastic analysis of composite structures[END_REF]: (1) A geometrical description and a FE model of the unit cell. (2) The local constitutive laws expressing the response of each component of the composite within the unit cell. (3) Scale transition relationships that define the connection between the microscopic and the macroscopic fields (stress and strain). The scale transition is provided by the concept of homogenization theory, using volume averaging of microscopic quantities of the unit cell, which is solved thanks to periodic boundary conditions. The macroscopic fields (stress and strain) are introduced in a unit cell by using the six additional degrees of freedom (constraint drivers), that are linked with the boundaries through the kinematic equations [Eq. 19]. The macroscopic behavior of a 3D composite structure is computed by considering that the material response of each macroscopic integration point is established from the homogenization of a unit cell that is connected to each macroscopic integration point. Each unit cell contains the local constitutive laws of different phases and the geometrical characteristics of the microstructure. The FE 2 approach presented here has been developed using an implicit resolution scheme, with the use of a Newton-Raphson algorithm, that solves the non-linear problems at the two scales. At each macroscopic integration point, the macroscopic stress and the macroscopic tangent operator are computed for the calculated macroscopic strain at each time increment, by solving iteratively a FE problem at the microscopic scale. Concept of transition between scales in FE 2 computations In the framework of FE 2 modelling the global resolution step is performed at each time increment by solving a local equilibrium problem at each macroscopic integration point. At each step, the microscopic problem is solved by applying the macroscopic strain increment to the unit cell through the periodic boundary conditions. The system of equations in the linearized incremental form is given as follows:                      ∆σ (x) = C t (x) : ∆ε (x) ∀x ∈ V, div x (∆σ (x)) = 0 ∀x ∈ V, ∆u i -∆u j = ∆ε . x i -x j ∀x ∈ V (20) By using the developed user subroutine at the microscopic scale which contains the non-linear local behavior of the constituents, the microscopic stress, tangent operator and internal state variables V k are computed at every microscopic point. The macroscopic stress σ is then computed through volume averaging of the microscopic stresses, and the local tangent operators of all microscopic points are utilized to obtain the macroscopic tangent operator C t by solving six elastic-type loading cases with the elementary strain states described in subsection 2.1.2. 
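The construction of the macroscopic tangent operator from the six elementary strain states can be condensed into a short helper. In the sketch below, `solve_unit_cell_linearized` stands for one elastic-type unit-cell solve around the current converged state, returning the corresponding macroscopic stress increment; the function name and interface are ours, chosen only to illustrate eqs. (11)-(13).

```python
import numpy as np

def macroscopic_tangent(solve_unit_cell_linearized, K=1.0):
    """Assembles the 6x6 macroscopic tangent operator column by column.

    For each elementary strain state of eq. (12), the linearized unit-cell
    problem returns a macroscopic stress increment (Voigt, length 6); the
    j-th column of the tangent is that stress divided by K, as in eq. (13)."""
    C_t = np.zeros((6, 6))
    for j in range(6):
        d_eps = np.zeros(6)
        d_eps[j] = K                      # elementary strain state j
        d_sigma = np.asarray(solve_unit_cell_linearized(d_eps))
        C_t[:, j] = d_sigma / K
    return C_t
```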
The internal state variables and the local stress are saved as initial conditions for the next time increment. Once the macroscopic quantities σ̄ and C̄ t are computed, the analysis at the macroscopic level is then performed and the macroscopic strain increment Δε̄ is provided by the Finite Element Analysis Package ABAQUS at every macroscopic point through the global equilibrium resolution. This information is passed to the macroscopic scale by using a user-defined constitutive model (denoted here as Meta-UMAT) that represents the behavior of a macroscopic material point and contains the unit cell equations, and hence the process returns to the local problem. The iterative procedure inside the Meta-UMAT is depicted in Fig. 5. The loop is repeated until numerical convergence is achieved in both the micro and macro-scale numerical problems. After convergence, the analysis proceeds to the next time step. Both the Meta-UMAT and the structural analysis at the macroscopic level define the FE 2 approach. Algorithm of FE 2 and parallel calculation The algorithm of the FE 2 computational strategy for the non-linear case in ABAQUS/Standard is presented in Fig. 6. It consists of an initialisation (apply the PBCs on the unit cell; compute the initial macroscopic tangent modulus C̄ t ), a macro-level step (solve the macro problem; get the macroscopic strain increment Δε̄ n+1 ), a micro-level step (Python script for the micro problem; compute the local fields σ, ε, C t ; compute the macroscopic stress σ̄; compute the macroscopic tangent modulus C̄ t ), a global convergence check, and then the passage to the next increment n = n+1 with the update of all fields. As shown in Figures 5 and 6, the macroscopic problem is solved at each increment in a linearized manner, considering the homogenized tangent modulus C̄ t . The elastic prediction - inelastic correction is performed at the scale of the constituent laws (Micro-UMAT) using the well-known "return mapping algorithm - convex cutting plane" scheme [START_REF] Simo | Computational Inelasticity[END_REF]. The aim of the FE 2 approach is to perform structural numerical simulations, thus the reduction of the computational time is of utmost importance. Since the FE 2 homogenization requires very costly computations, parallel calculation procedures for running the analysis on multiple CPUs are unavoidable. Parallel implementation of the FE 2 code in ABAQUS/Standard It is known that the FE 2 computation is expensive in terms of CPU time, because of the transition between the two scales and the number of degrees of freedom of the microscopic and the macroscopic models. To reduce this computational time, a parallel implementation of the FE 2 procedure is set up in ABAQUS/Standard. All the Finite Element Analyses of the unit cells within a single macroscopic element (one per integration point) are sent to a single computation node (a set of processors) and they are solved iteratively. The computations that correspond to each macroscopic element are solved on different computation nodes. Thus, theoretically, the parallelization can be performed simultaneously on every element. In practice, the parallel computation is limited to the number of available calculation nodes. Note that the computations of every microscopic Finite Element Analysis can also be performed in parallel within the computation node if it possesses several processors (which is often the case), and this parallelization process is governed by the Finite Element Analysis Package ABAQUS.
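The dispatch pattern, one independent unit-cell analysis per macroscopic integration point, can be summarized with a generic process pool. The sketch below uses Python's standard multiprocessing module and a placeholder solve function; it only illustrates the structure of the parallelization, whereas the reference implementation launches each microscopic ABAQUS analysis on a computation node through dedicated scripts.

```python
from multiprocessing import Pool

def solve_unit_cell(task):
    """Placeholder for one unit-cell solve: in the actual workflow this would
    run the microscopic finite element analysis for one macroscopic integration
    point and return its homogenized stress and tangent operator."""
    point_id, d_eps_macro = task
    sigma_macro, C_t_macro = None, None   # results of the micro problem go here
    return point_id, sigma_macro, C_t_macro

def solve_all_points(strain_increments, n_workers=4):
    """Runs the independent unit-cell solves of one macroscopic iteration in
    parallel, one task per macroscopic integration point.
    (When run as a script, call this under an `if __name__ == "__main__":` guard
    on platforms that spawn worker processes.)"""
    tasks = list(strain_increments.items())          # {point_id: d_eps_macro}
    with Pool(processes=n_workers) as pool:
        results = pool.map(solve_unit_cell, tasks)
    return {pid: (sig, ct) for pid, sig, ct in results}
```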
In practice, the Meta-UMAT calls an appropriate python script that solves the local problem (including the computation of the macroscopic tangent modulus) at each macroscopic integration point, with the use of the microscopic UMAT, which contains the local non-linear behavior of the constituents (Fig. 7). Afterwards, the global solver of ABAQUS checks that all calculations on the different processors are completed before proceeding to the resolution of the macroscopic problem, and before passing to the next time increment or the next macroscopic iteration. Implementation of the microscopic problem in ABAQUS With regard to the microscopic problem, as mentioned previously, the Meta-UMAT executes a properly designed python script at each macroscopic integration point of the composite structure with the first macroscopic strain increment given by ABAQUS. The periodic boundary conditions (PBCs) and the macroscopic strain are applied on the unit cell by means of the python script at each time increment, since this last information is given at each integration point from the prediction of the strain increment that should satisfy the global equilibrium. The script also calls the ABAQUS solver to solve the microscopic Finite Element Analyses, which utilize the microscopic user subroutines that contain the local constitutive laws of the constituents, in order to obtain the microscopic response through a return mapping iterative process. Once the local equilibrium is achieved, the local response (σ and C t ) is computed. Then, the macroscopic stress is recovered as the reaction force on the constraint drivers divided by the unit cell volume (section 3.1). The macroscopic tangent modulus is calculated by mapping the local tangent moduli on the unit cell through the six elementary strain states. Through the python script, the macroscopic quantities (σ̄ and C̄ t ) are calculated and transferred to the Meta-UMAT. At this point, the global equilibrium is checked; if convergence is reached, the analysis proceeds to the next time increment n+1. Applications and Capabilities of the FE 2 framework In order to validate the two-scale computational approach within the framework of 3D non-linear composite structures, two test cases have been addressed. The first one is a periodic multilayer composite structure with non-linear, elastoplastic phases. It has been demonstrated that, with the use of an incremental linearized temporal integration approach, there exists a semi-analytical solution for this problem [START_REF] Chatzigeorgiou | Computational micro to macro transitions for shape memory alloy composites using periodic homogenization[END_REF]. This test case is utilized as a validation of the implementation of the FE 2 framework. The second one is the simulation of a three-dimensional composite structure with a two-phase microstructure: a matrix phase that exhibits a coupled elastoviscoplastic response with ductile damage, reinforced by short glass fibres. The results of such a multi-scale simulation are compared with a legacy modelling approach, i.e. the use of an incremental Mori-Tanaka scheme [START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF]. Comparison with semi-analytical homogenization method for elastoplastic multilayer composites The multi-scale structure simulated is presented in Fig.
8 and is composed at the microscopic scale of a periodic stack of two different layers, one with an elastic response (superscript e) and the second one with an elastic-plastic response (superscript p). The volume fraction of the two phases is equal, i.e. c e = c p = 0.5. The macroscopic shape of the structure is a cuboid. For the elastic-plastic phase, the plastic yield criterion is given by: f p (σ, p) = eq(σ) -R(p) -R 0 ≤ 0. ( 21 ) where eq(σ) is the equivalent Von Mises stress and R 0 is the yield threshold. The hardening function R(p) is chosen under the form of a power law [START_REF] Lemaitre | Mechanics of solid materials[END_REF]: R(p) = K.p n (22) where K and n are material parameters. p is the accumulated plastic strain. The material parameters of the two phases are given in Tab. 3. As discussed in the Section 3.1, periodic boundary conditions are applied to the unit cell of the multilayer material. The macroscopic boundary conditions imposed correspond to a pure shear loading and are such that the relationship between the displacement at the boundary is u 0 = ε 0 n, n being the outward normal of the surface and all the components of the tensor ε 0 are zero except ε 12 = ε 21 , see Fig. 9-b. Note that under such conditions, the numerical results of the two Finite Element Analyses should be mesh-independent, since homogeneous fields are considered in all the phases. The results of the two approaches, the FE 2 and the semi-analytical are identical (Fig. 10), which demonstrates the capability of the computational method to predict the response of 3D non-linear multi-scale composite structures. 3D structure (Meuwissen) with short fibre reinforced composite To demonstrate the capabilities of the FE 2 approach to identify the overall behavior of 3D composite structures close to parts that are commonly manufactured, the second test case is performed on structure where heterogeneous strain and stress fields are observed during a tensile load field. the composite material is considered as an elastoviscoplastic polymer matrix with ductile damage, reinforced by aligned glass short fibres arranged in a periodic hexagonal array (Fig. 11). The volume fractions of the matrix and the fibres are V m = 0.925 and V f = 0.075 respectively, while the aspect ratio for the elliptic fibre is (4, 1, 1). The fibres elastic properties are the following: a Young's modulus E f = 72000 MPa and a Poisson's ratio ν f = 0.26. The material properties of the matrix phase are listed in Tab. 4. It should be mentioned that these material parameters are motivated by the work of [START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF], but they do not consider the viscoelastic response, which is taken into account in that article. Thus, the material properties are related to viscoplastic behavior coupled to damage in polymeric media. The structure presented in the Fig. 12-a is clamped at the left side and subjected to the loading path of Fig. 12-b at the right side. The displacement controlled path consists in three loading steps with different velocities (u (1) x = 1 mm.s -1 , u(2) x = 0.2 mm.s -1 , u (3) x = 0.8 mm.s -1 ) followed by an unloading stage at a displacement rate of (u (4) x = 2 mm.s -1 ). The results of the full-field FE 2 method are compared with those obtained by using the incremental mean-field Mori-Tanaka method. 
This method has been widely utilized for the simulation of composites [START_REF] Doghri | Homogenization of two-phase elasto-plastic composite materials and structures: Study of tangent operators, cyclic plasticity and numerical algorithms[END_REF] as well as smart structures [START_REF] Piotrowski | Modeling of niobium precipitates effect on the ni47ti44nb9 shape memory alloy behavior[END_REF]. Such homogenization scheme is however considered valid under specific cases [START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF], and some specific corrections might be required [START_REF] Chaboche | On the capabilities of mean-field approaches for the description of plasticity in metal matrix composites[END_REF][START_REF] Brassart | Homogenization of elasto-plastic composites coupled with a nonlinear finite element analysis of the equivalent inclusion problem[END_REF]. Since the proposed corrections are not unique and depends on the type of composites, the regular incremental method is employed, where the linearized problem is written in term of the anisotropic algorithmic tangent modulus of the non-linear phases [START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF]. The advantage of the Mori-Tanaka scheme relies in its computational efficiency, since it is a semi-analytical method and accounts for the material non-linearities only on an average sense and not at every local microscopic point in the unit cell. The overall load-displacement response computed using FE 2 approach is shown in Fig. 13 and compared to the global response predicted by the mean-field Mori-Tanaka approach. As expected the both approaches predict comparable responses notably in the elastic regime. However, for the viscoplastic regime, the mean-field based simulation does not capture well the strain rate change of the applied loading path and provides stiffer response than the full-field FE 2 . This aspect is known and occurs when one phase exhibits a non-linear behavior. Similar observations have been reported to the literature especially when the matrix phase behaves as viscoelastic-viscoplastic media [START_REF] Miled | Micromechanical modeling of coupled viscoelastic-viscoplastic composites based on an incrementally affine formulation[END_REF][START_REF] Chaboche | On the capabilities of mean-field approaches for the description of plasticity in metal matrix composites[END_REF]. The authors proposed specific numerical formulations to address this limit of mean-field based methods. Fig. 14 demonstrates stress-strain curves at the macroscopic point A (Fig. 15-a). Due to the semi-analytical form of the Mori-Tanaka method, the computations are faster than in the FE 2 method but it requires a smaller time increment. The results indicate that the response of the two approaches describe the changing of the rate loading caused by the viscous behavior of polymer matrix, but it is clearly shown that the Mori-Tanaka response misdescribed this phenomena because it is more rigid with a considerable loss of plasticity as expected, compared to the FE 2 . The results illustrate that the response of the composite is highly influenced by the presence of the matrix, exhibiting both viscoplastic response through relaxation phenomena, as well as stiffness reduction during unloading due to the ductile damage. 
It is worth noticing that the inelastic characteristics of the different phases are mainly taken into account in the microscale and, accordingly, the unit cell is adequately meshed (6857 elements). The authors have performed several analyses at different meshes of the macroscopic structure and have confirmed that the chosen meshing of 100 elements was sufficient for the purposes of the manuscript. At a characteristic critical point of the structure (centre of one notch), the deformed macro-scale structure and the microscopic stress response (component 11) of the unit cell that represent a macroscopic integration point A are shown in Fig. 15. It is clear that at such critical material point, the adopted incremental Mori-Tanaka scheme do not predict the local response with a sufficient accuracy to be able to utilize such results for the computation of damage evolution of fatigue life predictions, which are unavoidable in the case of most load-bearing application of composite structures. Even if the mesh convergence is difficult to reach to obtain exact results, the FE 2 framework could provide a standard of predictability that is much higher than the mean-fields methods when a composite is simulated, where the matrix present a strongly non-linear response. It deserves to be mentioned that the periodic homogenization gives excellent results for 3D structures, and the numerical accuracy depends on that of the FE calculations. However, when addressing plate or shell structures, the periodic homogenization requires proper modifications, as described in [START_REF] Kalamkarov | Analysis, design and optimization of composite structures[END_REF], due to the loss of periodicity in the thickness direction (out of plane). This results in less accurate prediction for the out of plane Poisson ratio. Nevertheless, the out of plane periodicity can be reasonably assumed when the microstructure contains high number of fibers or layers in the thickness direction. Complex 3D structure with corner shape In this section, a second 3D composite structure is simulated in order to illustrate the capability and the flexibility of the approach, when more complex boundary conditions are applied to the macroscopic structure. The modelled structure consists of a 3D part having a corner shape (Fig. 16-a). It is made of a thermoplastic aligned short fibre reinforced composite in which the matrix and reinforcement phases exhibit the same behavior as in Section 4.2. The structure is clamped at the bottom side and subjected to a normal uniform displacement path at left side (Fig. 16-b). The displacement controlled path consists in two loading steps with different displacement rates (u (1) x = 2.1875 mm.s -1 , u(2) x = 0.15625 mm.s -1 ) and an unloading step at a displacement rate of (u (3) x = 0.9375 mm.s -1 ). In Figs. 17 and 18, the whole response of the composite in terms of macroscopic stress vs strain are depicted at two distinct points A and B (Fig. 19-c). The approach is able to reproduce the effect of such microstructure on the overall response of the composite, as on the most stressed point shown in Fig. 19. Indeed, on the clamped part (point A), it is clear that the structure is subjected to a tensile load according to the 22 direction, and a shear load in the direction 12. These results are attributable primarily by the macroscopic boundary conditions. Furthermore, for the point B, a high stress value in the direction of loading was noticed. Fig. 
19-c shows the stress response (component 11) of the macroscopic structure and the resulting microscopic stress in the two unit cells situated at two different macroscopic integration points A and B (Figs. 19-a and 19-b, respectively). The response of the composite is highly affected by the matrix behavior through the relaxation phenomena caused by the change of the loading rate. The apparent stiffness reduction during unloading, caused by the ductile damage in the matrix, is clearly observed. For the parallelization procedure, and with the same number of increments (42 increments), the computation becomes 18 times faster than the non-parallel solution. The actual computational time of the analysis performed on 18 processors was approximately 72 h for a macroscopic structure containing 90 elements of type C3D8 with 6857 microscopic elements of type C3D4. Conclusions and further work This work presents a non-linear three-dimensional two-scale finite element (FE 2 ) framework fully integrated in the Finite Element Analysis Package ABAQUS/Standard, using parallel computation. The main advantage of the method is that it does not require an analytical form for the constitutive law at the macro-scale, while accounting for the microstructural effects and the local behaviors. It can integrate any kind of periodic microstructure with any type of non-linear behavior of the reinforcement (fibres and/or particles) and the matrix (plastic, viscoelastic, viscoplastic and damage). The multi-scale strategy has been tested on three independent numerical examples. In the first example, a 3D multilayer composite structure with elastoplastic phases is simulated and compared with a semi-analytical solution, to validate the numerical implementation. In the second example, a short glass fibre reinforced composite with an elastoviscoplastic-damageable matrix under complex loading is examined through the FE 2 strategy and the results are compared to those obtained by the Mori-Tanaka method. The obtained responses were in agreement with those presented in the literature in similar cases, and highlight the importance of utilizing full-field methods for a generic modelling strategy with high predictability capabilities. In the third example, a complex 3D composite structure with a corner shape is simulated, in which the microstructure is made of an elastoviscoplastic matrix with ductile damage reinforced by short glass fibres. The capability of such an approach to reproduce the effect of the microstructure on the macrostructure response at each macroscopic integration point has been demonstrated. The response of the structure clearly highlights creep and relaxation phenomena, which are characteristic of rate-dependent responses. This viscous behavior and the stiffness reduction observed during unloading are induced by the viscoplastic nature of the polymer matrix. It is worth noticing that for composites where the matrix is a viscoplastic material, the Mori-Tanaka method under proper modifications can provide quite accurate results [START_REF] Mercier | Homogenization of elastic-viscoplastic heterogeneous materials: Self-consistent and mori-tanaka schemes[END_REF][START_REF] Miled | Micromechanical modeling of coupled viscoelastic-viscoplastic composites based on an incrementally affine formulation[END_REF] compared to the full-field based approach. A last advantage of this approach is that it can be extended to predict the overall fully coupled thermomechanical response of 3D composite structures [START_REF] Berthelsen | Computational homogenisation for thermoviscoplasticity: application to thermally sprayed coatings[END_REF][START_REF] Chatzigeorgiou | Computational micro to macro transitions for shape memory alloy composites using periodic homogenization[END_REF] with more complex mechanisms between fibres and matrix, such as interfacial damage.
Such fully-coupled analyses on multiscale structures should be of high interest for industrial applications that are usually computed with commercial finite element analysis packages.

Figure 1: Schematic representation of the computational homogenization.

Figure 2: Definition of the displacement field as the sum of an affine part and a periodic fluctuation.

Figure 3: Rheological scheme of the viscoplastic behavior and ductile damage [11].

Figure 4: Connection of the constraint drivers with the unit cell.

Figure 5: Meta-UMAT for the overall response computation of the composite using the FE 2 approach at time increment n+1.

Figure 6: Flow chart of the two-scale FE 2 algorithm in ABAQUS/Standard for the non-linear case.

Figure 7: Parallelization steps of the FE 2 code.

Figure 8: Multilayer composite structure with the microstructure associated with each macroscopic integration point.

Figure 9: Multilayer composite structure under shear loading path.

Figure 10: Comparison of the numerical result of the FE 2 approach with the semi-analytical solution on the multilayer with elastoplastic phases, in terms of macroscopic stress-strain response.

Figure 11: Composite microstructure. (a) Mesh of the entire unit cell. (b) Short fibres reinforcement.

Figure 12: Tensile and compression test on the 3D Meuwissen test tube. (a) Mesh of the entire 3D composite structure. (b) Applied loading path.

Figure 13: The overall load-displacement response of the structure in the direction 11. Comparison between the FE 2 and the Mori-Tanaka solution.

Figure 14:

Figure 15: FE 2 solution with ABAQUS/Standard (component 11). (a) Macroscopic stress field of the composite structure. (b) Microscopic stress field of the microstructure at point A.

Figure 16: Tensile and compression test on the 3D composite structure with corner. (a) Macroscopic mesh of the 3D composite structure. (b) Applied loading path.

Figure 17: Macroscopic response of the composite at point A in terms of stress-strain in the directions 11, 22, 33 and shear 12.

Figure 18: Macroscopic response of the composite at point B in terms of stress-strain in the directions 11, 22, 33 and shear 12.

Figure 19: FE 2 solution with ABAQUS/Standard (component 11). (a) Microscopic stress field of the microstructure at point A. (b) Microscopic stress field of the microstructure at point B. (c) Macroscopic stress field of the 3D composite structure (component 11).
Table 1: Macroscopic and microscopic scale transition [START_REF] Praud | Modélisation multi-échelle des composites tissés à matrice thermoplastique sous chargements cycliques non proportionnels[END_REF]
Table 2: State and evolution laws (columns: observable state variable, associated variable).
The displacements of the constraint drivers, u cd 11, u cd 22, u cd 33, u cd 12, u cd 13 and u cd 23, take the values of each component of the macroscopic strain tensor ε 11, ε 22, ε 33, 2ε 12, 2ε 13 and 2ε 23, respectively. The dual forces of the constraint drivers are noted F cd 11, F cd 22, F cd 33, F cd 12, F cd 13 and F cd 23.
Table 3: Material parameters for the two phases
  Elastic-plastic phase: E p = 2000 MPa; ν p = 0.3; R 0 = 10 MPa; K = 60.0 MPa; n = 0.15
  Elastic phase: E e = 6000 MPa; ν e = 0.2
Table 4: Material parameters for the polymer matrix
  E m = 1680 MPa; ν m = 0.3; R 0 = 10 MPa; K = 365.0 MPa; n = 0.39; H = 180.0 MPa.s; m = 0.3; S = 6.0 MPa; β = -1.70
55,780
[ "774911", "863727" ]
[ "178323", "242513", "178323", "178323", "178323", "178323", "178323" ]
01762547
en
[ "math", "info" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01762547/file/convergenceNonatomicFullProofs.pdf
Paulin Jacquot email: [email protected] Cheng Wan email: [email protected] † Cheng Routing Game on Parallel Networks: the Convergence of Atomic to Nonatomic We consider an instance of a nonatomic routing game. We assume that the network is parallel, that is, constituted of only two nodes, an origin and a destination. We consider infinitesimal players that have a symmetric network cost, but are heterogeneous through their set of feasible strategies and their individual utilities. We show that if an atomic routing game instance is correctly defined to approximate the nonatomic instance, then an atomic Nash Equilibrium will approximate the nonatomic Wardrop Equilibrium. We give explicit bounds on the distance between the equilibria according to the parameters of the atomic instance. This approximation gives a method to compute the Wardrop equilibrium at an arbitrary precision. Introduction Motivation. Network routing games were first considered by Rosenthal [START_REF] Rosenthal | The network equilibrium problem in integers[END_REF] in their "atomic unsplittable" version, where a finite set of players share a network subject to congestion. Routing games found later on many practical applications not only in transport [START_REF] Marcotte | Equilibria with infinitely many differentiated classes of customers[END_REF][START_REF] Wardrop | Some theoretical aspects of road traffic research[END_REF], but also in communications [START_REF] Orda | Competitive routing in multiuser communication networks[END_REF], distributed computing [START_REF] Altman | Nash equilibria in load balancing in distributed computer systems[END_REF] or energy [START_REF] Atzeni | Demand-side management via distributed energy generation and storage optimization[END_REF]. The different models studied are of three main categories: nonatomic games (where there is a continuum of infinitesimal players), atomic unsplittable games (with a finite number of players, each one choosing a path to her destination), and atomic splittable games (where there is a finite number of players, each one choosing how to split her weight on the set of available paths). The concept of equilibrium is central in game theory, for it corresponds to a "stable" situation, where no player has interest to deviate. With a finite number of players-an atomic unsplittable game-it is captured by the concept of Nash Equilibrium [START_REF] Nash | Equilibrium points in n-person games[END_REF]. With an infinite number of infinitesimal players-the nonatomic case-the problem is different: deviations from a finite number of players have no impact, which led Wardrop to its definition of equilibria for nonatomic games [START_REF] Wardrop | Some theoretical aspects of road traffic research[END_REF]. A typical illustration of the fundamental difference between the nonatomic and atomic splittable routing games is the existence of an exact potential function in the former case, as opposed to the latter [START_REF] Nisan | Algorithmic game theory[END_REF]. However, when one considers the limit game of an atomic splittable game where players become infinitely many, one obtains a nonatomic instance with infinitesimal players, and expects a relationship between the atomic splittable Nash equilibria and the Wardrop equilibrium of the limit nonatomic game. This is the question we address in this paper. Main results. 
We propose a quantitative analysis of the link between a nonatomic routing game and a family of related atomic splittable routing games, in which the number of players grows. A novelty from the existing literature is that, for nonatomic instances, we consider a very general setting where players in the continuum [0, 1] have specific convex strategy-sets, the profile of which being given as a mapping from [0, 1] to R T . In addition to the conventional network (congestion) cost, we consider individual utility function which is also heterogeneous among the continuum of players. For a nonatomic game of this form, we formulate the notion of an atomic splittable approximating sequence, composed of instances of atomic splittable games closer and closer to the nonatomic instance. Our main results state the convergence of Nash equilibria (NE) associated to an approximating sequence to the Wardrop equilibrium of the nonatomic instance. In particular, Thm. 11 gives the convergence of aggregate NE flows to the aggregate WE flow in R T in the case of convex and strictly increasing price (or congestion cost) functions without individual utility; Thm. 14 states the convergence of NE to the Wardrop equilibrium in ((R T ) [0,1] , . 2 ) in the case of player-specific strongly concave utility functions. For each result we provide an upper bound on the convergence rate, given from the atomic splittable instances parameters. An implication of these new results concerns the computation of an equilibrium of a nonatomic instance. Although computing an NE is a hard problem in general [START_REF] Koutsoupias | Worst-case equilibria[END_REF], there exists several algorithms to compute an NE through its formulation with finite-dimensional variational inequalities [START_REF] Facchinei | Finite-dimensional variational inequalities and complementarity problems[END_REF]. For a Wardrop Equilibrium, a similar formulation with infinite-dimensional variational inequalities can be written, but finding a solution is much harder. Related work. Some results have already been given to quantify the relation between Nash and Wardrop equilibria. Haurie and Marcotte [START_REF] Haurie | On the relationship between nashcournot and wardrop equilibria[END_REF] show that in a sequence of atomic splittable games where atomic splittable players replace themselves smaller and smaller equal-size players with constant total weight, the Nash equilibria converge to the Wardrop equilibrium of a nonatomic game. Their proof is based on the convergence of variational inequalities corresponding to the sequence of Nash equilibria, a technique similar to the one used in this paper. Wan [START_REF] Wan | Coalitions in nonatomic network congestion games[END_REF] generalizes this result to composite games where nonatomic players and atomic splittable players coexist, by allowing the atomic players to replace themselves by players with heterogeneous sizes. In [START_REF] Gentile | Nash and wardrop equilibria in aggregative games with coupling constraints[END_REF], the authors consider an aggregative game with linear coupling constraints (generalized Nash Equilibria) and show that the Nash Variational equilibrium can be approximated with the Wardrop Variational equilibrium. However, they consider a Wardrop-type equilibrium for a finite number of players: an atomic player considers that her action has no impact on the aggregated profile. They do not study the relation between atomic and nonatomic equilibria, as done in this paper. 
Finally, Milchtaich [START_REF] Milchtaich | Generic uniqueness of equilibrium in large crowding games[END_REF] studies atomic unsplittable and nonatomic crowding games, where players are of equal weight and each player's payoff depends on her own action and on the number of players choosing the same action. He shows that, if each atomic unsplittable player in an n-person finite game is replaced by m identical replicas with constant total weight, the equilibria generically converge to the unique equilibrium of the corresponding nonatomic game as m goes to infinity. Last, Marcotte and Zhu [START_REF] Marcotte | Equilibria with infinitely many differentiated classes of customers[END_REF] consider nonatomic players with continuous types (leading to a characterization of the Wardrop equilibrium as a infinite-dimensional variational inequality) and studied the equilibrium in an aggregative game with an infinity of nonatomic players, differentiated through a linear parameter in their cost function and their feasibility sets assumed to be convex polyhedra. Structure. The remaining of the paper is organized as follows: in Sec. 2, we give the definitions of atomic splittable and nonatomic routing games. We recall the associated concepts of Nash and Wardrop equilibria, their characterization via variational inequalities, and sufficient conditions of existence. Then, in Sec. 3, we give the definition of an approximating sequence of a nonatomic game, and we give our two main theorems on the convergence of the sequence of Nash equilibria to a Wardrop equilibrium of the nonatomic game. Last, in Sec. 4 we provide a numerical example of an approximation of a particular nonatomic routing game. Notation. We use a bold font to denote vectors (e.g. x) as opposed to scalars (e.g. x). 2 Splittable Routing: Atomic and Nonatomic Atomic Splittable Routing Game An atomic splittable routing game on parallel arcs is defined with a network constituted of a finite number of parallel links (cf Fig. 1) on which players can load some weight. Each "link" can be thought as a road, a communication channel or a time slot on which each user can put a load or a task. Associated to each link is a cost or "latency" function that depends only of the total load put on this link. O D t = 1, c 1 t = 2, c 2 • • • t = T, c T Definition 1. Atomic Splittable Routing Game An instance G of an atomic splittable routing game is defined by: • a finite set of players I = {1, . . . , I}, • a finite set of arcs T = {1, . . . , T }, • for each i ∈ I, a feasibility set X i ⊂ R T + , • for each i ∈ I, a utility function u i : X i → R, • for each t ∈ T , a cost or latency function c t (.) : R → R . Each atomic player i ∈ I chooses a profile (x i,t ) t∈T in her feasible set X i and minimizes her cost function: f i (x i , x -i ) := t∈T x i,t c t j∈I x j,t -u i (x i ). ( ) 1 composed of the network cost and her utility, where x -i := (x j ) j =i . The instance G can be written as the tuple: G = (I, T , X , c, (u i ) i ) , (2) where X := X 1 × • • • × X I and c = (c t ) t∈T . In the remaining of this paper, the notation G will be used for an instance of an atomic game (Def. 1). Owing to the network cost structure (1), the aggregated load plays a central role. We denote it by X t := i∈I x i,t on each arc t, and denote the associated feasibility set by: X := X ∈ R T : ∃x ∈ X s.t. i∈I x i = X . 
(3) As seen in ( 1), atomic splittable routing games are particular cases of aggregative games: each player's cost function depends on the actions of the others only through the aggregated profile X. For technical simplification, we make the following assumptions: Assumption 1. Convex costs Each cost function (c t ) is differentiable, convex and increasing. Assumption 2. Compact strategy sets For each i ∈ I, the set X i is assumed to be nonempty, convex and compact. Assumption 3. Concave utilities Each utility function u i is differentiable and concave. Note that under Asms. 1 and 3, each function f i is convex in x i . An example that has drawn a particular attention is the class of atomic splittable routing games considered in [START_REF] Orda | Competitive routing in multiuser communication networks[END_REF]. We add player-specific constraints on individual loads on each link, so that the model becomes the following. Example 1. Each player i has a weight E i to split over T . In this case, X i is given as the simplex: X i = { x i ∈ R T + : t x i,t = E i and x i,t ≤ x i,t ≤ x i,t } . E i can be the mass of data to be sent over different canals, or an energy to be consumed over a set of time periods [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF]. In the energy applications, more complex models include for instance "ramping" constraints r i,t ≤ x i,t+1 -x i,t ≤ r i,t . Example 2. An important example of utility function is the distance to a preferred profile y i = (y i,t ) t∈T , that is: u i (x i ) = -ω i x i -y i 2 2 = -ω i t (x i,t -y i,t ) 2 , (4) where ω i > 0 is the value of player i's preference. Another type of utility function which has found many applications is : u i (x i ) = -ω i log (1 + t x i,t ) , (5) which increases with the weight player i can load on T . Below we recall the central notion of Nash Equilibrium in atomic non-cooperative games. Definition 2. Nash Equilibrium (NE) An NE of the atomic game G = (I, X , (f i ) i ) is a profile x ∈ X such that for each player i ∈ I: f i ( xi , x-i ) ≤ f i (x i , x-i ), ∀x i ∈ X i . Proposition 1. Variational Formulation of an NE Under Asms. 1 to 3, x ∈ X is an NE of G if and only if: ∀x ∈ X , ∀i ∈ I, ∇ i f i ( xi , x-i ), x i -xi ≥ 0 , (6) where 6) is the necessary and sufficient first order condition for xi to be a minimum of f i (., x-i ). ∇ i f i ( xi , x-i ) = ∇f i (•, x-i )| •= xi = c t ( Xt ) + xi,t c t ( Xt ) t∈T -∇u i ( xi ). An equivalent condition is: ∀x ∈ X , i∈I ∇ i f i ( xi , x-i ), x i -xi ≥ 0 . Proof. Since x i → f i (x i , x -i ) is convex, ( Def. 1 defines a convex minimization game so that the existence of an NE is a corollary of Rosen's results [START_REF] Rosen | Existence and uniqueness of equilibrium points for concave n-person games[END_REF]: Theorem 2 (Cor. of [START_REF] Rosen | Existence and uniqueness of equilibrium points for concave n-person games[END_REF]. Existence of an NE If G is an atomic routing congestion game (Def. 1) satisfying Asms. 1 to 3, then there exists an NE of G. Rosen [START_REF] Rosen | Existence and uniqueness of equilibrium points for concave n-person games[END_REF] gave a uniqueness theorem applying to any convex compact strategy sets, relying on a strong monotonicity condition of the operator (∇ xi f i ) i . 
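In standard notation, the cost minimized by atomic player i (Definition 1) and the first-order characterization of a Nash equilibrium (Proposition 1) read:

```latex
f_i(x_i, x_{-i}) \;=\; \sum_{t\in\mathcal{T}} x_{i,t}\, c_t\!\Big(\sum_{j\in\mathcal{I}} x_{j,t}\Big) \;-\; u_i(x_i),
\qquad
\nabla_i f_i(\hat{x}_i,\hat{x}_{-i}) \;=\; \Big(c_t(\hat{X}_t) + \hat{x}_{i,t}\, c_t'(\hat{X}_t)\Big)_{t\in\mathcal{T}} \;-\; \nabla u_i(\hat{x}_i),
\\[4pt]
\hat{x}\ \text{is an NE} \iff
\big\langle \nabla_i f_i(\hat{x}_i,\hat{x}_{-i}),\, x_i-\hat{x}_i \big\rangle \;\ge\; 0
\quad \forall i\in\mathcal{I},\ \forall x_i\in\mathcal{X}_i .
```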
For atomic splittable routing games [START_REF] Orda | Competitive routing in multiuser communication networks[END_REF], an NE is not unique in general [START_REF] Bhaskar | Equilibria of atomic flow games are not unique[END_REF]. To our knowledge, for atomic parallel routing games (Def. 1) under Asms. 1 to 3, neither the uniqueness of NE nor a counter example of its uniqueness has been found. However, there are some particular cases where uniqueness has been shown, e.g. [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF] for the case of Ex. 1. However, as we will see in the convergence theorems of Sec. 3, uniqueness of NE is not necessary to ensure the convergence of NE of a sequence of atomic unsplittable games, as any sequence of NE will converge to the unique Wardrop Equilibrium of the nonatomic game considered. Infinity of Players: the Nonatomic Framework If there is an infinity of players, the structure of the game changes: the action of a single player has a negligible impact on the aggregated load on each link. To measure the impact of infinitesimal players, we equip real coordinate spaces R k with the usual Lebesgue measure µ. The set of players is now represented by a continuum Θ = [0, 1]. Each player is of Lebesgue measure 0. Definition 3. Nonatomic Routing Game An instance G of a nonatomic routing game is defined by: • a continuum of players Θ = [0, 1], • a finite set of arcs T = {1, . . . , T }, • a point-to-set mapping of feasibility sets X . : Θ ⇒ R T + , • for each θ ∈ Θ, a utility function u θ (.) : X θ → R, • for each t ∈ T , a cost or latency function c t (.) : R → R. Each nonatomic player θ chooses a profile x θ = (x θ,t ) t∈T in her feasible set X θ and minimizes her cost function: F θ (x θ , X) := t∈T x θ,t c t X t -u θ (x θ ), (7) where X t := Θ x θ,t dθ denotes the aggregated load. The nonatomic instance G can be written as the tuple: G = (Θ, T , (X θ ) θ∈Θ , c, (u θ ) θ∈Θ ) . (8) For the nonatomic case, we need assumptions stronger than Asms. 2 and 3 for the mappings X . and u., given below: Assumption 4. Nonatomic strategy sets There exists M > 0 such that, for any θ ∈ Θ, X θ is convex, compact and X θ ⊂ B 0 (M ), where B 0 (M ) is the ball of radius M centered at the origin. Moreover, the mapping θ → X θ has a measurable graph Γ X := {(θ, x) : θ ∈ Θ, x ∈ X θ } ⊂ R T +1 . Assumption 5. Nonatomic utilities There exists Γ > 0 s.t. for each θ, u θ is differentiable, concave and ∇u θ ∞ < Γ. The function Γ X (θ, x θ ) → u θ (x θ ) is measurable. Def. 3 and Asms. 4 and 5 give a very general framework. In many models of nonatomic games that have been considered, players are considered homogeneous or with a finite number of classes [START_REF] Nisan | Algorithmic game theory[END_REF]Chapter 18]. Here, players can be heterogeneous through X θ and u θ . Games with heterogeneous players can find many applications, an example being the nonatomic equivalent of Ex. 1: Example 3. Let θ → E θ be a density function which designates the total demand E θ for each player θ ∈ Θ. Consider the nonatomic splittable routing game with feasibility sets X θ := {x θ ∈ R T + : t x θ,t = E θ }. As in Ex. 1, one can consider some upper bound x θ,t and lower bound x θ,t for each θ ∈ Θ and each t ∈ T , and add the bounding constraints ∀t ∈ T , x θ,t ≤ x θ,t ≤ x θ,t in the definition of X θ . Heterogeneity of utility functions can also appear in many practical cases: if we consider the case of preferred profiles given in Ex. 
2, members of a population can attribute different values to their cost and their preferences. Since each player is infinitesimal, her action has a negligible impact on the other players' costs. Wardrop [START_REF] Wardrop | Some theoretical aspects of road traffic research[END_REF] extended the notion of equilibrium to the nonatomic case. Definition 4. Wardrop Equilibrium (WE) x * ∈ (X θ ) θ is a Wardrop equilibrium of the game G if it is a measurable function from θ to X and for almost all θ ∈ Θ, F θ (x * θ , X * ) ≤ F θ (x θ , X * ), ∀x θ ∈ X θ , where X * = θ∈Θ x * θ dθ ∈ R T . Proposition 3. Variational formulation of a WE Under Asms. 1, 4 and 5, x * ∈ X is a WE of G iff for almost all θ ∈ Θ: c(X * ) -∇u θ (x * θ ), x θ -x * θ ≥ 0, ∀x θ ∈ X θ . (9) Proof. Given X * , ( 9) is the necessary and sufficient first order condition for x * θ to be a minimum point of the convex function F θ (., X * ). According to [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF], the monotonicity of c is sufficient to have the VI characterization of the equilibrium in the nonatomic case, as opposed to the atomic case in [START_REF] Facchinei | Finite-dimensional variational inequalities and complementarity problems[END_REF] where monotonicity and convexity of c are needed. Theorem 4 (Cor. of Rath, 1992 [16]). Existence of a WE If G is a nonatomic routing congestion game (Def. 3) satisfying Asms. 1, 4 and 5, then G admits a WE. Proof. The conditions required in [START_REF] Rath | A direct proof of the existence of pure strategy equilibria in games with a continuum of players[END_REF] are satisfied. Note that we only need (c t ) t and (u θ ) θ∈Θ to be continuous functions. The variational formulation of a WE given in Thm. 3 can be written in the closed form: Theorem 5. Under Asms. 1, 4 and 5, x * ∈ X is a WE of G iff: θ∈Θ c(X * ) -∇u θ (x * θ ), x θ -x * θ dθ ≥ 0, ∀x ∈ X . (10) Proof. This follows from Thm. 3. If x * ∈ X is a Wardrop equilibrium so that (9) holds for almost all θ ∈ Θ, then [START_REF] Koutsoupias | Worst-case equilibria[END_REF] follows straightforwardly. Conversely, suppose that x * ∈ X satisfies condition [START_REF] Koutsoupias | Worst-case equilibria[END_REF] but is not a WE of G. Then there must be a subset S of Θ with strictly positive measure such that for each θ ∈ S, (9) does not hold: for each θ ∈ S, there exists y θ ∈ X θ such that c(X * ) -∇u θ (x * θ ), y θ -x * θ < 0 For each θ ∈ Θ \ S, let y θ := x * θ . Then y = (y θ ) θ∈Θ ∈ X , and θ∈Θ c(X * ) -∇u θ (x * θ ), y θ -x * θ dθ = θ∈S c(X * ) -∇u θ (x * θ ), y θ -x * θ dθ < 0 contradicting (10). Corrolary 6. In the case where u θ ≡ 0 for all θ ∈ Θ, under Asms. 1 and 4, x * ∈ X is a WE of G iff: c(X * ), X -X * ≥ 0, ∀X ∈ X . (11) From the characterization of the WE in Thm. 5 and Thm. 6, we derive Thms. 7 and 8 that state simple conditions ensuring the uniqueness of WE in G. Theorem 7. Under Asms. 1, 4 and 5, if u θ is strictly concave for each θ ∈ Θ, then G admits a unique WE. Proof. Suppose that x ∈ X and y ∈ X are both WE of the game. Let X = θ∈Θ x θ dθ and Y = θ∈Θ y θ dθ. 
Then, according to Theorem 5, θ∈Θ c(X) -∇u θ (x θ ), y θ -x θ dθ ≥ 0 ( 12 ) θ∈Θ c(Y ) -∇u θ (y θ ), x θ -y θ dθ ≥ 0 (13) By adding ( 12) and ( 13), one has θ∈Θ c(X) -c(Y ) -∇u θ (x θ ) + ∇u θ (y θ ), y θ -x θ dθ ≥ 0 ⇒ c(X) -c(Y ), θ∈Θ (y θ -x θ )dθ + θ∈Θ -∇u θ (x θ ) + ∇u θ (y θ ), y θ -x θ dθ ≥ 0 ⇒ c(X) -c(Y ), X -Y + θ∈Θ -∇u θ (x θ ) + ∇u θ (y θ ), x θ -y θ dθ ≤ 0 Since for each θ, u θ is strictly concave, ∇u θ is thus strictly monotone. Therefore, for each θ ∈ Θ, -∇u θ (x θ ) + ∇u θ (y θ ), x θ -y θ ≥ 0 and equality holds if and only if x θ = y θ . Besides, c is monotone, hence c(X) -c(Y ), X -Y ≥ 0. Consequently, c(X) -c(Y ), X -Y + θ∈Θ -∇u θ (x θ ) + ∇u θ (y θ ), x θ -y θ dθ ≥ 0, and equality holds if and only if for almost all θ ∈ Θ, x θ = y θ . (In this case, X = Y .) Theorem 8. In the case where u θ ≡ 0 for all θ ∈ Θ, under Asms. 1 and 4, if c = (c t ) T t=1 : [0, M ] T → R T is a strictly monotone operator, then all the WE of G have the same aggregate profile X * ∈ X . Proof. Suppose that x ∈ X and y ∈ X are both WE of the game. Let X = θ∈Θ x θ dθ and Y = θ∈Θ y θ dθ. Then, according to Corollary 6, c(X), Y -X ≥ 0 (14) c(Y ), X -Y ≥ 0 ( 15 ) By adding ( 14) and ( 15), one has c(X) -c(Y ), Y -X ≥ 0 Since c is strictly monotone, c(X) -c(Y ), X -Y ≥ 0 and equality holds if and only X = Y . Consequently, X = Y . Remark 1. If for each t ∈ T , c t (.) is (strictly) increasing, then c is a (strictly) monotone operator from [0, M ] T → R T . One expects that, when the number of players grows very large in an atomic splittable game, the game gets close to a nonatomic game in some sense. We confirm this intuition by showing that, considering a sequence of equilibria of approximating atomic games of a nonatomic instance, the sequence will converge to an equilibrium of the nonatomic instance. Approximating Nonatomic Games To approximate the nonatomic game G, the idea consists in finding a sequence of atomic games (G (ν) ) with an increasing number of players, each player representing a "class" of nonatomic players, similar in their parameters. As the players θ ∈ Θ are differentiated through X θ and u θ , we need to formulate the convergence of feasibility sets and utilities of atomic instances to the nonatomic parameters. Approximating the nonatomic instance Definition 5. Atomic Approximating Sequence (AAS) A sequence of atomic games G (ν) = I (ν) , T , X (ν) , c, (u (ν) i ) i is an approximating sequence (AAS) for the nonatomic instance G = Θ, T , (X θ ) θ , c, (u θ ) θ if for each ν ∈ N, there exists a partition of cardinal I (ν) of set Θ, denoted by (Θ (ν) i ) i∈I (ν) , such that: • I (ν) -→ +∞, • µ (ν) := max i∈I (ν) µ (ν) i -→ 0 where µ (ν) i := µ(Θ (ν) i ) is the Lebesgue measure of subset Θ (ν) i , • δ (ν) := max i∈I (ν) δ (ν) i -→ 0 where δ i is the Hausdorff distance (denoted by d H ) between nonatomic feasibility sets and the scaled atomic feasibility set: δ (ν) i := max θ∈Θi d H X θ , 1 µ (ν) i X (ν) i , (16) • d (ν) := max i∈I (ν) d (ν) i -→ 0 where d i is the L ∞ -distance (in B 0 (M ) → R) between the gradient of nonatomic utility functions and the scaled atomic utility functions: d (ν) i = max θ∈Θi max x∈B0(M ) ∇u (ν) i µ (ν) i x -∇u θ (x) 2 . ( 17 ) From Def. 5 it is not trivial to build an AAS of a given nonatomic game G, one can even be unsure that such a sequence exists. However, we will give practical examples in Secs. 3.4.1 and 3.4.2. A direct result from the assumptions in Def. 5 is that the players become infinitesimal, as stated in Thm. 9. Lemma 9. 
If (G (ν) ) ν is an AAS of a nonatomic instance G, then considering the maximal diameter M of X θ , we have: ∀i ∈ I (ν) , ∀x i ∈ X (ν) i , x i 2 ≤ µ (ν) i (M + δ (ν) i ) . ( 18 ) Proof. Let x i ∈ X (ν) i . Let θ ∈ Θ (ν) i and denote by P X θ the projection on X θ . By definition of δ (ν) i , we get: x i µ (ν) i -P X θ x i µ (ν) i 2 ≤ δ (ν) i (19) ⇐⇒ x i 2 ≤ µ (ν) i δ (ν) i + P X θ x i µ (ν) i 2 ≤ µ (ν) i (δ (ν) i + M ) . (20) Lemma 10. If (G (ν) ) ν is an AAS of a nonatomic instance G, then the Hausdorff distance between the aggregated sets X = Θ X . and X (ν) = i∈I (ν) X (ν) i is bounded by: d H X (ν) , X ≤ δ (ν) . (21) Proof. Let (x θ ) θ ∈ X be a nonatomic profile. Let P i denote the Euclidean projection on X (ν) i for i ∈ I (ν) and consider y i := P i Θ (ν) i x θ dθ ∈ X (ν) i . From ( 16) we have: Θ x θ dθ - i∈I (ν) y i 2 = i∈I (ν) Θ (ν) i x θ dθ -y i 2 (22) = i∈I (ν) Θ (ν) i x θ - 1 µ (ν) i y i dθ 2 (23) ≤ i∈I (ν) Θ (ν) i x θ - 1 µ (ν) i y i 2 dθ (24) ≤ i∈I (ν) Θ (ν) i δ (ν) i dθ = i∈I (ν) µ (ν) i δ (ν) i ≤ δ (ν) , (25) which shows that d X, X (ν) ≤ δ (ν) for all X ∈ X . On the other hand, if i∈I (ν) x i ∈ X (ν) , then let us denote by Π θ the Euclidean projection on X θ for θ ∈ Θ, and y θ = Π θ 1 µ (ν) i x i ∈ X θ for θ ∈ Θ (ν) i . Then we have for all θ ∈ Θ (ν) i , 1 µ (ν) i x i -y θ 2 ≤ δ (ν) i and we get: i∈I (ν) x i - Θ y θ dθ 2 ≤ i∈I (ν) Θ (ν) i 1 µ (ν) i x i -y θ dθ 2 (26) ≤ i∈I (ν) Θ (ν) i 1 µ (ν) i x i -y θ 2 dθ (27) ≤ i∈I (ν) µ (ν) i δ (ν) i ≤ δ (ν) , (28) which shows that d X, X ≤ δ (ν) for all X ∈ X (ν) and concludes the proof. To ensure the convergence of an AAS, we make the following additional assumptions on costs functions (c t ) t : Assumption 6. Lipschitz continuous costs For each t ∈ T , c t is a Lipschitz continuous function on [0, M ]. There exists C > 0 such that for each t ∈ T , |c t (•)| ≤ C. Assumption 7. Strong monotonicity There exists c 0 > 0 such that, for each t ∈ {1, . . . , T }, c t (•) ≥ c 0 on [0, M ] In the following sections, we differentiate the cases with and without utilities, because we found different convergence results in the two cases. Players without Utility Functions: Convergence of the Aggregated Equilibrium Profiles In this section, we assume that u θ ≡ 0 for each θ ∈ Θ. We give a first result on the approximation of WE by a sequence of NE in Thm. 11. Theorem 11. Let (G (ν) ) ν be an AAS of a nonatomic instance G, satisfying Asms. 1, 2, 4, 6 and 7. Let ( x(ν) ) a sequence of NE associated to (G (ν) ), and (x * θ ) θ a WE of G. Then: X(ν) -X * 2 2 ≤ 2 c 0 × B c × δ (ν) + C(M + 1) 2 × µ (ν) , where B c := max x∈B0(M ) c(x) 2 . Proof. Let P i denote the Euclidean projection onto X (ν) i and Π the projection onto X . We omit the index ν for simplicity. From [START_REF] Marcotte | Equilibria with infinitely many differentiated classes of customers[END_REF], we get: c(X * ), Π( X) -X * ≥ 0 . ( 29 ) On the other hand, with x * i := Θi x θ dθ, we get from (1): 0 ≤ i∈I c t ( Xt ) + xi,t c t ( Xt ) t∈T , P i (x * i ) -xi (30) = c( X), i P i (x * i ) -X + R( x, x * ) (31) with R( x, x * ) = i xi,t c t ( Xt ) t , P i (x * i ) -xi . From the Cauchy-Schwartz inequality and Thm. 9, we get: |R( x, x * )| ≤ i∈I (ν) xi,t c t ( Xt ) t 2 × P i (x * i ) -xi 2 (32) ≤ i∈I (ν) (µ (ν) i (M + δ (ν) i )C × 2(µ (ν) i (M + δ (ν) i )) (33) ≤ 2C(M + 1) 2 max i µ (ν) i . 
( 34 ) Besides, with the strong monotonicity of c and from ( 29) and (30): c 0 X -X * 2 ≤ c( X) -c(X * ), X -X * = c( X), X -X * + c(X * ), X * -X ≤ c( X), X - i P i (x * i ) + c(X * ), X * -Π( X) + c( X), i P i (x * i ) -X * + c(X * ), Π( X) -X ≤ |R( x, x * )| + 0 + 2B c × max i δ i , which concludes the proof. Players with Utility Functions: Convergence of the Individual Equilibrium Profiles In order to establish a convergence theorem in the presence of utility functions, we make an additional assumption of strong monotonicity on the utility functions stated in Asm. 8. Note that this assumption holds for the utility functions given in Ex. 2. Assumption 8. Strongly concave utilities For all θ ∈ Θ, u θ is strongly concave on B 0 (M ), uniformly in θ: there exists α > 0 such that for all x, y ∈ B 0 (M ) 2 and any τ ∈]0, 1[ : u θ ((1 -τ )x + τ y) ≥ (1 -τ )u θ (x) + τ u(y) + α 2 τ (1 -τ ) x -y 2 . Remark 2. If u θ (x θ ) is α θ -strongly concave, then the negative of its gradient is a strongly monotone operator: -∇u θ (x θ ) -∇u θ (y θ ), x θ -y θ ≥ α θ x θ -y θ 2 . (35) We start by showing that, under the additional Asm. 8 on the utility functions, the WE profiles of two nonatomic users within the same subset Θ (ν) i are roughly the same. Proposition 12. Let (G (ν) ) ν be an AAS of a nonatomic instance G and (x * θ ) θ the WE of G satisfying Asms. 1, 4, 5 and 8. Then, if θ, ξ ∈ Θ (ν) i , we have: x * θ -x * ξ 2 2 ≤ 2 α M d (ν) i + (B c + Γ)δ (ν) i v ξ (x * ξ ) -c(X) , which gives the desired result when combined with (40). This result reveals the role of the strong concavity of utility functions: when α goes to 0, the right hand side of the inequality diverges. This is coherent with the fact that, without utilities, only the aggregated profile matters, so that we cannot have a result such as Thm. 12. According to Thm. 12, we can obtain a continuity property of the Wardrop equilibrium if we introduce the notion of continuity for the nonatomic game G, relatively to its parameters: Definition 6. Continuity of a nonatomic game The nonatomic instance G = Θ, T , (X θ ) θ , c, (u θ ) θ is said to be continuous at θ ∈ Θ if, for all ε > 0, there exists η > 0 such that: ∀θ ∈ Θ, θ -θ ≤ η ⇒ d H (X θ , X θ ) ≤ ε max x∈X θ ∪X θ ∇u θ (x) -∇u θ (x) 2 ≤ ε . (41) Then the proof of Thm. 12 shows the following intuitive property: Proposition 13. Let G = Θ, T , (X θ ) θ , c, (u θ ) θ be a nonatomic instance. If G is continuous at θ 0 ∈ Θ and (x * θ ) θ is a WE of G, then θ → x * θ is continuous at θ 0 . The next theorem is one of the main results of this paper. It shows that a WE can be approximated by the NE of an atomic approximating sequence. and(x * θ ) θ the WE of G. Under Asms. 1 to 6 and 8, the approximating solution defined by Theorem 14. Let (G (ν) ) ν be an AAS of a nonatomic instance G. Let ( x(ν) ) a sequence of NE associated to (G (ν) ), x(ν) θ := 1 µ (ν) i x(ν) i for θ ∈ Θ (ν) i satisfies: θ∈Θ x(ν) θ -x * θ 2 2 dθ ≤ 2 α B c + Γ)δ (ν) + C(M + 1) 2 µ (ν) + M d (ν) . Proof. Let ( xi ) i be an NE of G (ν) , and x * ∈ X the WE of G. For the remaining of the proof we ommit the index (ν) for simplicity. Let us consider the nonatomic profile defined by xθ := 1 µi xi for θ ∈ Θ i , and its projection on the feasibility set ŷθ := P X θ ( xθ ). Similarly, let us consider the atomic profile given by x * i := Θi x * θ dθ for i ∈ I (ν) , and its projection y * i := P Xi (x * i ). For notation simplicity, we denote ∇u θ by v θ . 
From the strong concavity of v θ and the strong monotonicity of c, we have: α θ∈Θ x(ν) θ -x * θ 2 2 + c 0 X(ν) -X * 2 2 (42) ≤ Θ c( X) -v θ ( xθ ) -(c(X * ) -v θ (x * θ )) , xθ -x * θ dθ (43) = Θ c( X) -v θ ( xθ ), xθ -x * θ dθ + Θ c(X * ) -v θ (x * θ ), x * θ -xθ dθ . ( 44 ) To bound the second term, we use the characterization of a WE given in Thm. 3, with ŷθ ∈ X θ : Θ c(X * ) -v θ (x * θ ), x * θ -xθ dθ (45) = Θ c(X * ) -v θ (x * θ ), x * θ -ŷθ dθ + Θ c(X * ) -v θ (x * θ ), ŷθ -xθ dθ (46) ≤ 0 + i∈I (ν) Θi c(X * ) -v θ (x * θ ) 2 × ŷθ -xθ 2 dθ (47) ≤ i∈I (ν) Θi (B c + Γ) × δ i ≤ (B c + Γ) × δ . (48) To bound the first term of (44), we divide it into two integral terms: Θ c( X) -v θ ( xθ ), xθ -x * θ dθ (49) = i∈I (ν) Θi c( X) -v i ( xi ), xθ -x * θ dθ + Θi v i ( xi ) -v θ ( xθ ), xθ -x * θ dθ . ( 50 ) The first integral term is bounded using the characterization of a NE given in Thm. 1: i∈I (ν) Θi c( X) -v i ( xi ), xθ -x * θ dθ (51) = i∈I (ν) c( X) -v i ( xi ), xi -x * i (52) ≤ i∈I (ν) c( X) -v i ( xi ), xi -y * i + i∈I (ν) c( X) -v i ( xi ), y * i -x * i (53) ≤ -R( x, x * ) + i∈I (ν) c( X) -v i ( xi ) 2 × y * i -x * i 2 (54) ≤ 2C(M + 1) 2 µ + (B c + Γ) × 2M δ i∈I (ν) µ i (55) = 2C(M + 1) 2 µ + (B c + Γ) × 2M δ . (56) For the second integral term, we use the distance between utilities (17): i∈I (ν) Θi v i ( xi ) -v θ ( xθ ), xθ -x * θ dθ (57) ≤ i∈I (ν) µ i v i ( xi ) -v θ ( xθ ) 2 × xθ -x * θ 2 (58) ≤ i∈I (ν) µ i d i × 2M ≤ d2M . (59) We conclude the proof by combining (48),( 56) and (59). As in Thm. 12, the uniform strong concavity of the utility functions plays a key role in the convergence of disaggregated profiles ( x(ν) θ ) ν to the nonatomic WE profile x * . Construction of an Approximating Sequence In this section, we give examples of the construction of an AAS for a nonatomic game G, under two particular cases: the case of piecewise continuous functions and, next, the case of finitedimensional parameters. Piecewise continuous parameters, uniform splitting In this case, we assume that the parameters of the nonatomic game are piecewise continuous functions of θ ∈ Θ: there exists a finite set of K discontinuity points 0 ≤ σ 1 < σ 2 < • • • < σ K ≤ 1, and the game is uniformly continuous (Def. 6) on (σ k , σ k+1 ), for each k ∈ {0, . . . , K + 1} with the convention σ 0 = 0 and σ K = 1. For ν ∈ N * , consider the ordered set of I ν cutting points (υ (ν) i ) Iν i=0 := k ν 0≤k≤ν ∪ {σ k } 1≤k≤K and define the partition (Θ (ν) i ) i∈I (ν) of Θ by: ∀i ∈ {1, . . . , I ν } , Θ (ν) i = [υ (ν) i-1 , υ (ν) i ). ( 60 ) Proposition 15. For ν ∈ N * , consider the atomic game G (ν) defined with I (ν) := {1 . . . I ν }, and for each i ∈ I (ν) : X (ν) i := µ (ν) i X ῡ(ν) i and u (ν) i := x → µ (ν) i u ῡ(ν) i 1 µ (ν) i x , with ῡ(ν) i = υ (ν) i-1 +υ (ν) i 2 . Then G (ν) ν = I (ν) , T , X (ν) , c, u (ν) ν is an AAS of the nonatomic game G = (Θ, T , X . , c, (u θ ) θ ). Proof. We have I (ν) > ν -→ ∞ and for each i ∈ I (ν) , µ(Θ (ν) i ) ≤ 1 ν -→ 0. The conditions on the feasibility sets and the utility functions are obtained with the piecewise uniform continuity conditions. If we consider a common modulus of uniform continuity η associated to an arbitrary ε > 0, then, for ν large enough, we have, for each i ∈< I (ν) , µ (ν) i < η. 
Thus, for all θ ∈ Θ (ν) i , |ῡ (ν) i -θ| < η, so that from the continuity conditions, we have: d H X θ , 1 µ (ν) i X (ν) i = d H (X θ , X ῡ(ν) i ) < ε (61) and max x∈B0(M ) ∇u (ν) i µ (ν) i x -∇u θ (x) 2 = µ (ν) i µ (ν) i ∇u ῡ(ν) i 1 µ (ν) i µ (ν) i x -∇u θ (x) 2 < ε , (62) which concludes the proof. Proof. The proof follows [START_REF] Batson | Combinatorial behavior of extreme points of perturbed polyhedra[END_REF] in several parts, but we extend the result on the compact set B, and drop the irredundancy assumption made in [START_REF] Batson | Combinatorial behavior of extreme points of perturbed polyhedra[END_REF]. For each b, we denote by V (b) the set of vertex of the polyhedron Λ b . Under Assumption 4, V (b) is nonempty for any b ∈ B. First, as Λ b is a polyhedra, we have Λ b = conv(V (b)) where conv(X) is the convex hull of a set X. As the function x → d(x, Λ b ) defined over Λ b is continuous and convex, by the maximum principle, its maximum over the polyhedron Λ b is achieved on V (b). Thus, we have: d H (Λ b , Λ b ) = max[ max x∈Λ b d(x, Λ b ) , max x∈Λ b d(Λ b , x)] (64) = max[ max x∈V (b) d(x, Λ b ) , max x∈V (b ) d(Λ b , x)] (65) ≤ max[ max x∈V (b) d(x, V (b )) , max x∈V (b ) d(V (b), x)] (66) =d H (V (b), V (b )) . (67) Let's denote by H i (b) the hyperplane {x : A i x = b i } and by H - i (b) = {x : A i x ≤ b i } and H + i (b) = {x : A i x ≥ b i } the associated half-spaces. Then Λ b = i∈[1,m] H - i (b). Now fix b 0 ∈ B and consider v ∈ V (b 0 ) . By definition, v is the intersection of hyperplanes i∈K H i (b 0 ) where K ⊂ {1, . . . , m} is maximal (note that k := card(K) ≥ n otherwise v can not be a vertex). For J ∈ {1, . . . , m}, let A J denote the submatrix of A obtained by considering the rows A j for j ∈ J. Let us introduce the sets of derived points (points of the arangement) of the set K, for each b ∈ B: V K (b) := {x ∈ R n ; ∃J ⊂ K ; A J is invertible and x = A -1 J b} . By definition, V K (b 0 ) = {v} and, for each b ∈ B, V K (b) is a set of at most k n elements. First, note that for each b ∈ B and v := A -1 J b ∈ V K (b), one has: v -v = A -1 J b 0 -A -1 J b ≤ A -1 J × b 0 -b ≤ α b 0 -b (68) where α := max with y θ = (0, E θ ) the preference of user θ for period P and ω θ := θ the preference weight of player θ. We consider approximating atomic games by splitting Θ uniformly (Sec. 3.4.1) in 5, 20, 40 and 100 segments (players). We compute the NE for each atomic game using the best-response dynamics (each best-response is computed as a QP using algorithm [START_REF] Brucker | An o(n) algorithm for quadratic knapsack problems[END_REF], see [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF] for convergence properties) and until the KKT optimality conditions for each player are satisfied up to an absolute error of 10 -3 . Fig. 2 shows, for each NE associated to the atomic games with 5, 20, 40 and 100 players, the linear interpolation of the load on the peak period x θ,P (red filled area), while the load on the offpeak period can be observed as x θ,O = E θ -x θ,P . We observe the convergence Figure 2: Convergence of the Nash Equilibrium profiles to a Wardrop Equilibrium profile to the limit WE of the nonatomic game as stated in Thm. 14. We also observe that the only discontinuity point of θ → x * θ,P comes from the discontinuity of θ → E θ at θ = 0.7, as stated in Thm. 13. 
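For illustration, the approximating atomic games and the best-response dynamics used in this experiment can be re-implemented in a few lines. The sketch below is not the authors' code (which solves each best response as a QP with the algorithm of Brucker and checks the KKT conditions); the demand profile E_θ is not given explicitly here, so an arbitrary profile with a jump at θ = 0.7 is assumed, and the per-player demands and preference weights follow the uniform-splitting scaling of Prop. 15.

```python
import numpy as np
from scipy.optimize import minimize_scalar

I = 20                                       # players of the approximating atomic game
mu = 1.0 / I                                 # measure of each segment of [0, 1]
theta = (np.arange(I) + 0.5) * mu            # segment midpoints
E_theta = np.where(theta < 0.7, 0.5 + theta, 1.0 + theta)   # assumed demand density
E = mu * E_theta                             # per-player demand (Prop. 15 scaling)
w = theta / mu                               # per-player preference weight (Prop. 15 scaling)

x_P = np.zeros(I)                            # decision: load on the peak period P

def cost(z, i):
    """Cost of player i loading z on P and E[i]-z on O: network cost + quadratic disutility."""
    XP = x_P.sum() - x_P[i] + z              # aggregate peak load with player i's tentative choice
    XO = E.sum() - XP                        # aggregate offpeak load (demands are fixed)
    network = (E[i] - z) * XO + z * (1.0 + 2.0 * XP)          # c_O(X) = X, c_P(X) = 1 + 2X
    disutility = w[i] * ((E[i] - z) ** 2 + (z - E[i]) ** 2)   # preferred profile (0, E_i)
    return network + disutility

for sweep in range(500):                     # best-response dynamics
    previous = x_P.copy()
    for i in range(I):
        res = minimize_scalar(cost, args=(i,), bounds=(0.0, E[i]), method="bounded")
        x_P[i] = res.x
    if np.abs(x_P - previous).max() < 1e-6:  # simple stopping rule (the paper checks KKT up to 1e-3)
        break

print("aggregate peak / offpeak load:", x_P.sum(), E.sum() - x_P.sum())
```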
Conclusion This paper gives quantitative results on the convergence of Nash equilibria, associated to atomic games approximating a nonatomic routing game, to the Wardrop equilibrium of this nonatomic game. These results are obtained under different differentiability and monotonicity assumptions. Several directions can be explored to continue this work: first, we could analyze how the given theorems could be modified to apply in case of nonmonotone and nondifferentiable functions. Another natural extension would be to consider routing games on nonparallel networks or even general aggregative games: in that case, the separable costs structure is lost and the extension is therefore not trivial. Figure 1 : 1 Figure 1: A parallel network with T links. η := min j∈{1...m}\K d v, H + j (b) . By the maximality of K, η > 0. As x → d(x, H + j ) is continuous for each j, and from (68) , there exists δ > 0 such that: b0 -b ≤ δ =⇒ ∀v ∈ V K (b), min j∈{1...m}\K d v , H + j (b) > 0. Next, we show that, for b such that b 0 -b ≤ δ, there exists v ∈ V K (b) ∩ V (b). We proceed by induction on k -n. If k = n, then v = A -1 K b 0 and for any b in the ball S δ (b 0 ), V K (b) = {A -1 K b}. Thus v := A -1 K b verifies A K v = b K , and A j v < b j for all j / ∈ K, thus v belongs to V (b). If k = n + t with t ≥ 1, there exists j 0 ∈ K such that with K = K \ {j 0 }, V K (b 0 ) = {v}. Consider the polyhedron P = i∈K H - i (b 0 ). By induction, there exists J ⊂ K such that A -1 J b is a vertex of P . If it satisfies also A j0 x ≤ b j0 then it is an element of V (b). Else, consider a vertex v of the polyhedron P ∩ H - j0 (b) on the facet associated with H j0 (b). Then, v ∈ V K (b) and, as b ∈ S δ (b 0 ), it verifies A j v < b j for all j / ∈ K, thus v ∈ V (b). Thus,in any case and for b ∈ S δ (b 0 ), d(v, V (b)) ≤ vv ≤ α b 0 -b and finally d (V (b 0 ), V (b)) ≤ α b 0 -b . The collection S δ b (b) b∈B is an open covering of the compact set B, thus there exists a finite subcollection of cardinal r that also covers B, from which we deduce that there exists D ≤ max(rα) such that: ∀b, b ∈ B, d H (V (b ), V (b)) ≤ D bb . Acknowledgments We thank Stéphane Gaubert, Marco Mazzola, Olivier Beaude and Nadia Oudjane for their insightful comments. This work was supported in part by the PGMO foundation. Finite dimension, meshgrid approximation Consider a nonatomic routing game G = (Θ, X , F ) (Def. 3) satisfying the following two hypothesis: • The feasibility sets are K-dimensional polytopes: there exist A ∈ M K,T (R) and b : Θ → R K bounded, such that for any θ, X θ := {x ∈ R T ; Ax ≤ b θ }, with X θ nonempty and bounded (as a polytope, X θ is closed and convex). • There exist a bounded function s : Θ → R q and a function u : R q × B 0 (M ) → R such that for any θ ∈ Θ, u θ = u(s θ , .). Furthermore, u is Lipschitz-continuous in s. For ν ∈ N * , we consider the uniform meshgrid of ν K+q classes of Let [s k , s k ] which will give us a set of I (ν) = ν K+q subsets. More explicitly, if we define: the set of indices for the meshgrid, and with the cutting points b k, n of Θ as: Since some of the subsets Θ n can be of Lebesgue measure 0, we define the set of players I (ν) as the elements n of Γ (ν) for which µ(Θ Remark 3. If there is a set of players of positive measure that have equal parameters b and s, then the condition max i∈I (ν) µ i → 0 will not be satisfied. In that case, adding another dimension in the meshgrid by cutting Θ = [0, 1] in ν uniform segments solves the problem. Proposition 16. 
For ν ∈ N * , consider the atomic game G (ν) defined by: Before giving the proof of Thm. 16, we show the following nontrivial Thm. 17, from which the convergence of the feasibility sets is easily derived. Proof of Thm. 16. First, to show the divergence of the number of players and their infinitesimal weight, we have to follow Remark 3 where we consider an additional splitting of the segment Θ = [0, 1]. In that case, we will have I (ν) ≥ ν hence goes to positive infinity and for each n ∈ I (ν) , µ(Θ ν hence goes to 0. Then, the convergence of the strategy sets follows from the fact that, for each n ∈ I (ν) : and from Thm. 17 which implies that, for each θ ∈ Θ (ν) n : Finally, the convergence of utility functions comes from the Lipschitz continuity in s. For each n ∈ I (ν) and each θ ∈ Θ (ν) n , we have: which terminates the proof. Of course, the number of players considered in Thm. 16 is exponential in the dimensions of the parameters K + q, which can be large in practice. As a result, the number of players in the approximating atomic games considered can be very large, which will make the NE computation really long. Taking advantage of the continuity of the parametering functions and following the approach of Thm. 15 gives a smaller (in terms of number of players) approximating atomic instance. Numerical Application We consider a population of consumers Θ = [0, 1] with an energy demand distribution θ → E θ . Each consumer θ splits her demand over T := {O, P }, so that her feasibility set is X θ := {(x θ,O , x θ,P ) ∈ R 2 + x θ,O + x θ,P = E θ }. The index O stands for offpeak-hours with a lower price c O (X) = X and P are peak-hours with a higher price c P (X) = 1 + 2X. The energy demand and the utility function in the nonatomic game are chosen as the piecewise continuous functions:
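Spelled out with the quantities above (prices c_O(X) = X and c_P(X) = 1 + 2X, preferred profile y_θ = (0, E_θ) and weight ω_θ = θ), each consumer θ facing the aggregate loads X_O and X_P solves:

```latex
\min_{\substack{x_\theta \in \mathbb{R}^2_+ \\ x_{\theta,O} + x_{\theta,P} = E_\theta}}
\; x_{\theta,O}\, X_O \;+\; x_{\theta,P}\,(1 + 2 X_P)
\;+\; \omega_\theta \big( x_{\theta,O}^{\,2} + (x_{\theta,P} - E_\theta)^2 \big),
\qquad \omega_\theta = \theta .
```

This is simply the specialization of the nonatomic cost F_θ of Definition 3 to the two-period example.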
43,033
[ "22117" ]
[ "450091", "528446", "89626", "245281", "454323" ]
01762672
en
[ "sdv" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01762672/file/EpiToolsSEEG.pdf
Samuel Medina Villalon Rodrigo Paz Nicolas Roehri Stanislas Lagarde Francesca Pizzo Bruno Colombet Fabrice Bartolomei Romain Carron Christian-G Bénar email: [email protected] S Medina Roehri 1+ C-G Bénar Epitools, a software suite for presurgical brain mapping in epilepsy : Intracerebral EEG Keywords: Epilepsy, SEEG, automatic segmentation, contacts localization, 3D rendering, CT de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction About one third of patients with epilepsy are refractory to medical treatment [START_REF] Kwan | Early identification of refractory epilepsy[END_REF]. Some of them suffer from focal epilepsy; for these patients, epilepsy surgery is an efficient option [START_REF] Ryvlin | Epilepsy surgery in children and adults[END_REF]. The goal of epilepsy surgery is to remove the epileptogenic zone (EZ), i.e., the structures involved in the primary organization of seizures, and to spare the functional cortices (e.g. involved in language or motor functions). For this purpose, a pre-surgical work-up is needed including a non-invasive phase with clinical history and examination, cerebral structural and functional imaging (MRI and PET), neuropsychological assessment and long-term surface EEG recordings. Nevertheless, for about one quarter of these patients, it is difficult to correctly identify the EZ and/or its relationship with functional areas. For these difficult cases, an invasive study with intracranial EEG is required [START_REF] Jayakar | Diagnostic utility of invasive EEG for epilepsy surgery: Indications, modalities, and techniques[END_REF]. Stereo-EEG (SEEG) is a powerful method to record local field potentials from multiple brain structures, including mesial and subcortical structure. It consists in a stereotaxic surgical procedure performed under general anesthesia and aiming at implanting 10 to 20 electrodes within the patient's brain. Each electrodes being made up of 5 to 18 contacts, [START_REF] Mcgonigal | Stereoelectroencephalography in presurgical assessment of MRI-negative epilepsy[END_REF], [START_REF] Cossu | Explorations préchirurgicales des épilepsies pharmacorésistantes par stéréo-électro-encéphalographie : principes, technique et complications[END_REF]. SEEG allows high resolution mapping of both the interictal, i.e. between seizures, and ictal activity, and helps delineate the epileptic network, determine which specific areas need to be surgically removed [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF], and identify the functional regions to be spared [START_REF] Fonseca | Hemispheric lateralization of voice onset time (VOT) comparison between depth and scalp EEG recordings[END_REF]. The SEEG interpretation crucially relies on combined use of three sources of information: clinical findings, anatomy and electrophysiology. Therefore, it is of utmost importance to precisely localize the origin of the SEEG signal within the patient's anatomy, as was done in the field of EcoG [START_REF] Dykstra | Individualized localization and cortical surface-based registration of intracranial electrodes[END_REF], [START_REF] Groppe | iELVis : An open source MATLAB toolbox for localizing and visualizing human intracranial electrode data[END_REF]. 
The intracerebral position of SEEG electrodes is usually assessed visually by clinicians, based on the registration of MRI scan on CT image. However, this approach is time consuming (100 to 200 contacts by patients) and a potential source of error. The use of a software allowing automatic localization of electrodes contact within the patient's MRI would thus be very helpful. Such software should be able to perform: i) an automatic and optimal localization of contact positions with respect to each patient's cerebral anatomy, ii) an automatic labelling of the SEEG contacts within an individualized atlas. Currently, there isn't any software available to perform automatically such processes. Indeed, previous studies have proposed semi-automatic registration of intracranial electrodes contacts, based on Computerized tomography (CT) and Magnetic resonance imaging (MRI) images and prior information such as planed electrodes trajectory [START_REF] Arnulfo | Automatic segmentation of deep intracerebral electrodes in computed tomography scans[END_REF], [START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF] or manual interactions [START_REF] Princich | Rapid and efficient localization of depth electrodes and cortical labeling using free and open source medical software in epilepsy surgery candidates[END_REF], [START_REF] Qin | Automatic and precise localization and cortical labeling of subdural and depth intracranial electrodes[END_REF]. On the other hand, advances in signal analysis (including connectivity) of epileptic activities in SEEG have been major in recent years and now constitute key factors in the understanding of epilepsy [START_REF] Bartolomei | Defining epileptogenic networks: Contribution of SEEG and signal analysis[END_REF]. The translation of such advanced analyses to clinical practice is challenging but should lead to improvement in SEEG interpretation. Therefore, there is a need to easily apply some signal analysis to raw SEEG signal and then to graphically display their results in the patient's anatomy. Our objective was thus to design a suite of software tools, with user-friendly graphical user interfaces (GUIs), which enables to i) identify with minimal user input the location of SEEG contacts within individual anatomy ii) label contacts based on a surface approach as implemented in FreeSurfer (Fischl, 2012) or MarsAtlas [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF] iii) interact with our in-house AnyWave software [START_REF] Colombet | AnyWave: A cross-platform and modular software for visualizing and processing electrophysiological signals[END_REF] and associated plugins in order to display signal processing results in the patient's MRI or in a surface rendering of the patient's cortex. Hereafter, we will describe step by step the implementation of our suite "EpiTools". The suite as well as the full documentation are freely available on our web site http://meg.univ-amu.fr/wiki/GARDEL:presentation. It includes mainly the following software programs: GARDEL (for "GUI for Automatic Registration and Depth Electrodes Localization"), the 3Dviewer for 3D visualization of signal processing results within the AnyWave framework. Material and Methods The complete pipeline is illustrated in Fig. 1. Firstly, FreeSurfer pipeline (1) can be run optionally on MRI image to obtain pial surface and Atlases. 
Then, "GARDEL" (2) tool was developed to co-register MRI on CT scan, detects automatically SEEG contacts and label them if an atlas is available. It saves electrodes coordinates to be reused. Finally, the "3Dviewer" (4) tool is designed to display signal analysis results, obtained thanks to "AnyWave" (3) and attached plugins, inside patient individual brain mesh or MRI slices. Inputs are : i) MR pre-implantation and CT post-implantation images, ii) if needed, results of FreeSurfer pipeline (Pial surface and Atlases) (Fischl, 2012) or MarsAtlas [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF], iii) SEEG electrophysiological signal processing results. The output consists of a 3D visualization of these data in the patient's own anatomical rendering, a selection of SEEG contacts found inside grey matter for signal visualization in AnyWave software and brain matter or label associated with each contact. The different tools of our pipeline are written in Matlab (Mathworks, Natick, MA), and can be compiled to be used as standalone in all operating systems (requiring only the freely available Matlab runtime). SEEG and MRI data In our center, SEEG exploration is performed using intracerebral multiple-contact electrodes (Dixi Medical (France) or Alcis (France): 10-18 contacts with length 2 mm, diameter 0.8 mm, and 1.5 mm apart; for details see [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF]). Some electrodes include only one group of contacts regularly spaced (5,10, 15 and 18 contacts per electrode), and others are made up of 3 groups of 5 or 6 contacts spaced by 7-11 mm (3x5 or 3x6 contacts electrodes). There are two types of electrodes implantation: orthogonal and oblique. Electrodes implanted orthogonally are almost orthogonal to the sagittal plane and almost parallel to the axial and coronal planes. Electrodes implanted obliquely can be implanted with variable angle. The choice of the type and number of electrodes to be implanted depends on clinical needs. Intracerebral recordings were performed in the context of their pre-surgical evaluation. Patients signed informed consent, and the study was approved by the Institutional Review board (IRB00003888) of INSERM (IORG0003254, FWA00005831). All MR examinations were performed on a 1.5 T system (Siemens, Erlangen, Germany) during the weeks before SEEG implantation. The MRI protocol included at least T1-weighted gradient-echo, T2weighted turbo spin-echo, FLAIR images in at least two anatomic planes, and a 3D-gradient echo T1 sequence after gadolinium based contrast agents (GBCA) injection. Cerebral CT (Optima CT 660, General Electric Healthcare, 120 kV, 230-270 FOV, 512x512 matrix, 0.6mm slice thickness), without injection of contrast agents, were performed the day after SEEG electrodes implantation. Each CT scan was reconstructed using the standard (H30) reconstruction kernel to limit the level of streaks or flaring. The AnyWave framework Our tools GARDEL and the 3Dviewer are intended to interact with the AnyWave1 software, developed in our institution for the visualization of electrophysiological signals and for subsequent signal processing [START_REF] Colombet | AnyWave: A cross-platform and modular software for visualizing and processing electrophysiological signals[END_REF]. AnyWave is multi-platform and allows to add plugins created in Matlab or Python2 . 
Some specific plugins were created and added to implement measure for SEEG analysis as Epileptogenicity Index (EI) [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF], non-linear correlation h 2 [START_REF] Wendling | Interpretation of interdependencies in epileptic signals using a macroscopic physiological model of the EEG[END_REF], Interictal spikes or high frequency oscillations (HFO) detections [START_REF] Roehri | What are the assets and weaknesses of HFO detectors? A benchmark framework based on realistic simulations[END_REF][START_REF] Roehri | Time-Frequency Strategies for Increasing High-Frequency Oscillation Detectability in Intracerebral EEG[END_REF] as well as graph measures [START_REF] Courtens | Graph Measures of Node Strength for Characterizing Preictal Synchrony in Partial Epilepsy[END_REF]. Electrode contact segmentation and localization GARDEL localizes the SEEG contacts in the patient's anatomy. Unlike most existing techniques [START_REF] Arnulfo | Automatic segmentation of deep intracerebral electrodes in computed tomography scans[END_REF][START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF][START_REF] Princich | Rapid and efficient localization of depth electrodes and cortical labeling using free and open source medical software in epilepsy surgery candidates[END_REF][START_REF] Qin | Automatic and precise localization and cortical labeling of subdural and depth intracranial electrodes[END_REF], it relies on an automatic segmentation of each contact. Minimal user intervention is required (only for attributing the electrode names). It also labels each individual contact with respect to a chosen atlas (Desikan-Kiliany (Desikan et al., 2006), Destrieux [START_REF] Destrieux | Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature[END_REF], or in particular MarsAtlas [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF]). GARDEL takes as input a post-implantation CT-scan (where the SEEG contacts are visible as hyper density signals areas) and an anatomical MRI Volume. DICOM, NIFTI or Analyze formats are accepted. Classically, clinical images are in the DICOM format, and are converted into a NIfTI format for easier manipulation, using a DICOM to NIfTI converter3 that can be launched within GARDEL. Image opening is made thanks to Jimmy Shen ("Tools for NIfTI and ANALYZE image") toolbox4 . Since MRI and CT images have different resolution, size and origin, spatial alignment and registration are performed. In order to maintain the good quality of the CT image and keep an optimal visualization of electrodes, it was preferred to register the MRI to the CT space. The MRI is registered using maximization of Normalized Mutual Information and resliced into the CT space using a trilinear interpolation, thanks to the SPM85 toolbox [START_REF] Penny | Statistical Parametric Mapping: The Analysis of Functional Brain Images[END_REF]. With both images co-registered in the same space, the next step is to segment the electrodes on the CT scan. To do so, the resliced MRI is segmented into 3 regions: white matter, grey matter and cerebrospinal fluid using SPM (spm_preproc function). These three images are combined into a mask that enables us to remove extra-cerebral elements such as skull and wires (spm_imcalc function). 
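As an illustration of this masking step, a minimal Python sketch (using nibabel) is given below; the actual implementation is in Matlab with SPM functions, and the file names and the 0.5 cut-off on the summed tissue maps are assumptions made here.

```python
import nibabel as nib

# Hypothetical file names; the tissue maps are the grey matter, white matter and CSF
# images obtained from the MRI segmentation after reslicing into the CT space.
ct_img = nib.load("ct_post_implantation.nii")
tissues = [nib.load(name).get_fdata()
           for name in ("gm_in_ct_space.nii", "wm_in_ct_space.nii", "csf_in_ct_space.nii")]

brain_mask = sum(tissues) > 0.5                     # assumed cut-off on summed tissue probabilities
masked_ct = ct_img.get_fdata() * brain_mask         # removes extra-cerebral elements (skull, wires)
nib.save(nib.Nifti1Image(masked_ct, ct_img.affine), "ct_masked.nii")
```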
In order to segment the electrodes, a threshold is determined from the histogram of grey values of the masked CT image. Since electrode intensity values are markedly greater than those of brain structures [START_REF] Hebb | Imaging of deep brain stimulation leads using extended hounsfield unit CT[END_REF], they are defined as outliers using a threshold based on the median and quartiles: Thr = Q3 + 1.5*IQR, where Q3 is the third quartile and IQR the inter-quartile range (Q3-Q1); this threshold can also be adjusted manually. The electrode segmentation is divided into 2 steps: the first step segments each electrode individually, and the second separates the individual contacts of a given electrode. Once the CT image has been thresholded, the resulting binary image is dilated using mathematical morphology (MATLAB Image Processing Toolbox, imdilate function) in order to bind the contacts of the same electrode together. We then find each connected component, which corresponds to one electrode, and label the components separately (bwconncomp function) (Fig. 2a). This results in one mask per electrode. These masks are iteratively applied to the non-dilated thresholded CT to obtain a binary image of the contacts. We then apply a distance transform to the binary image followed by a watershed segmentation [START_REF] Meyer | Topographic distance and watershed lines[END_REF] (Matlab watershed function). A first issue is that the watershed technique may over-segment some contacts, i.e. identify several contacts instead of one, or may miss contacts that are too small or removed by the thresholding. To solve this issue, we compute the distance between contacts within each electrode as well as their individual volumes. We remove the outliers of each feature and reconstruct the missing contacts by building a model of the electrode from the median inter-contact distance and the direction vector of the electrode. The direction vector is obtained as the principal component of the correctly segmented contacts. If contacts are missing between consecutive contacts, i.e. if the distance between two consecutive contacts is greater than the median, this distance is divided by the median to estimate the number of missing contacts, which are then placed, equally spaced, on the line joining the two contacts. If contacts are missing at the tip of the electrode, contacts are placed at the median distance from the last contact along the direction vector until the electrode mask is filled. This method allows missing contacts to be added within or at the tip of the electrode. It may add contacts inside the guide screw if the screw belongs to the electrode mask, but these can easily be deleted after a quick review. This method also allows electrodes to be reconstructed even when they are slightly bent, and the error made when reconstructing missing contacts is minimized by using piece-wise linear interpolation (in contrast with simple linear interpolation) (Fig. 2b). A second issue is the segmentation of the 3x5 or 3x6 contact electrodes, which are classically detected as 3 different electrodes. This is solved in two steps. Firstly, the direction vector of each 5- or 6-contact segment is calculated; dot products between these vectors are then computed to check whether they are collinear, and collinear segments are grouped. Secondly, on these subsets of electrodes only, dot products between the vector of a given electrode and vectors constructed from a point of this electrode and points of the other electrodes are calculated to check whether they need to be grouped.
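To make this grouping test concrete, a minimal Matlab sketch of the collinearity check between detected segments is given below; the variable names and the roughly 5-degree angular tolerance are assumptions rather than GARDEL's actual settings.

% Minimal sketch of the collinearity test used to group segments of 3x5/3x6
% electrodes (illustrative only).
% seg{k} : N-by-3 matrix of contact centroids for the k-th detected segment

nSeg   = numel(seg);
dirVec = zeros(nSeg, 3);
for k = 1:nSeg
    pts = seg{k} - mean(seg{k}, 1);        % centre the contact cloud
    [~, ~, V] = svd(pts, 'econ');          % principal direction of the segment
    dirVec(k, :) = V(:, 1)';
end

tol = cosd(5);                             % angular tolerance (assumption)
sameElectrode = false(nSeg);
for i = 1:nSeg
    for j = i+1:nSeg
        collinear = abs(dot(dirVec(i,:), dirVec(j,:))) > tol;
        % second test: the line joining the two segments must also follow
        % the same direction
        link = mean(seg{j},1) - mean(seg{i},1);
        link = link / norm(link);
        aligned = abs(dot(dirVec(i,:), link)) > tol;
        sameElectrode(i,j) = collinear && aligned;
    end
end

The absolute value of the dot product is used so that the test does not depend on the arbitrary sign of each principal direction.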
A third issue is the difficulty of segmenting oblique electrodes, because the CT resolution is not high enough in the vertical direction. To solve it, a second contact localization method is applied after the fully automatic one, based on pre-defined electrode characteristics (templates are built in for the Alcis and DIXI electrodes used in our center). The directions of the clustered electrodes that failed the previous step are obtained by finding the principal component of the points composing these electrodes. Then, based on these electrode characteristics and the estimated size of the clustered electrodes, a match is made and the electrodes can be reconstructed. Fig. 2c displays the end of the segmentation process. Electrodes are projected on the maximum intensity projection image of the CT scan. After the automatic segmentation, the user can manually delete and/or add electrodes and/or single contacts. To create an electrode, the user marks the first contact and another one along the electrode, and chooses the electrode type; the electrode is then created according to the pre-defined electrode characteristics (number of contacts, contact size and spacing). To correct a contact position manually, contacts can be deleted or added one by one; contact numbers are reorganized automatically. Another important feature of GARDEL is the localization of each contact within the patient's anatomy. As manual localization is usually time consuming, our goal is to localize precisely and label automatically each contact in the brain using individualized atlases. Atlases from FreeSurfer (Desikan-Killiany [START_REF] Desikan | An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest[END_REF] or Destrieux [START_REF] Destrieux | Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature[END_REF]) or "MarsAtlas" [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF] can be imported after a transformation from FreeSurfer space to the native MRI space. For each contact, its own label and those of its closest neighboring voxels (forming a 3x3x3 cube) are collected. If several labels are present in this region of interest, the most frequent one is used to define the contact label. GARDEL thus provides, for each SEEG contact, its situation within grey or white matter and its anatomical localization based on the chosen atlas. Finally, the contact coordinates, their situation within cerebral grey or white matter and their anatomical labels can be saved for later use in the 3Dviewer (see below). GARDEL also automatically saves AnyWave montages (a selection of the contacts located inside grey matter, grouped by area (frontal, occipital, parietal, temporal), for visualization of the SEEG signals in the AnyWave software). Fig. 3 displays the output rendering of the GARDEL tool. It is possible to display an electrode in the patient's anatomy (Fig. 3a), in an atlas (Fig. 3b), or all electrodes in a surface rendering of the cortex (Fig. 3c).
3D representation (3Dviewer) The 3Dviewer tool, closely linked to GARDEL, displays the following information in 3D inside the individual cortical mesh: -SEEG electrodes, -mono-variate values such as the Epileptogenicity Index, spike or high-frequency oscillation rates, -bi-variate values such as the non-linear correlation h2 or co-occurrence graphs.
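Before detailing the inputs of the 3Dviewer, GARDEL's majority-vote labelling described above can be illustrated with a minimal Matlab sketch; the variable names and the convention that 0 codes an unlabelled voxel are assumptions, not the actual implementation.

% Minimal sketch of the majority-vote contact labelling (illustrative only).
% atlasVol : 3-D volume of integer region labels resliced to the CT space
% vox      : N-by-3 matrix of contact positions in voxel coordinates

labels = zeros(size(vox,1), 1);
for c = 1:size(vox,1)
    p  = round(vox(c,:));
    xr = max(p(1)-1,1):min(p(1)+1,size(atlasVol,1));
    yr = max(p(2)-1,1):min(p(2)+1,size(atlasVol,2));
    zr = max(p(3)-1,1):min(p(3)+1,size(atlasVol,3));
    neigh = atlasVol(xr, yr, zr);          % 3x3x3 neighbourhood of the contact
    neigh = neigh(neigh > 0);              % 0 assumed to mean "outside the atlas"
    if isempty(neigh)
        labels(c) = 0;                     % e.g. deep white matter, no atlas label
    else
        labels(c) = mode(neigh(:));        % most frequent label wins
    end
end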
Data required for this tool are the following: the patient's MRI scan, the electrode coordinates given by GARDEL, the pial surface produced by FreeSurfer or a cortex mesh produced by any other toolbox (e.g. SPM), and mono-variate or bi-variate values in the format created by AnyWave and its plugins. The parameters to be displayed can easily be set by the user: mesh, electrodes, mono- or bi-variate values. Each SEEG contact is reconstructed as a cylinder with the dimensions of the electrodes used. Mono-variate values can be displayed either as spheres or as cylinders whose diameter and color are proportional to the values. Bi-variate values (connectivity graphs) are displayed as cylinders whose diameter and color are proportional to the strength of the value, with an arrow indicating the direction of the graph. Views can be switched from 3D to 2D. In the 2D view, SEEG contacts are displayed on the MRI with mono-variate values in color scale (Fig. 4a). Values are also listed as tables. As explained above, the 3Dviewer allows the 3D visualization, within the patient's anatomy, of several signal analysis measures obtained from the AnyWave software (Fig. 4b). One type of measure is the quantification of ictal discharges, as provided by the Epileptogenicity Index [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF]. Briefly, the EI quantifies the epileptic fast discharge during a seizure, based both on the increase of the high-frequency content of the signal and on the delay of involvement at seizure onset. It gives for each channel a value between 0 and 1 that can be used to assess the epileptogenicity of the underlying cortex. Typically, a threshold between 0.2 and 0.3 is used to delineate the epileptogenic zone. The results can be automatically exported as a Microsoft Excel file with numerical data. Another type of measure comes from the interictal activity, i.e. the activity between seizures. Both spikes and high-frequency oscillations (HFO) are markers of epileptic cortices, and their detection and quantification are important in clinical practice. Spikes and HFO are detected by the Delphos plugin. This detector automatically detects, in all channels, oscillations and spikes based on the shape of peaks in the normalized ("ZH0") time-frequency image [START_REF] Roehri | Time-Frequency Strategies for Increasing High-Frequency Oscillation Detectability in Intracerebral EEG[END_REF][START_REF] Roehri | What are the assets and weaknesses of HFO detectors? A benchmark framework based on realistic simulations[END_REF]. Results can be exported as histograms of spike rates, HFO rates and a combination of these two markers. A step further, the co-occurrence of interictal paroxysms can bring information about the network organization of the spiking cortices. Co-occurrence graphs can be built using the detection times of spikes or HFOs [START_REF] Malinowska | Interictal networks in Magnetoencephalography[END_REF]. Furthermore, epilepsy also leads to connectivity changes both during and between seizures. The study of such changes is important for understanding seizure organization, semiology and seizure onset localization.
The non-linear correlation h2 is a connectivity analysis method, based on non-linear regression, that has proved useful in the study of epilepsy [START_REF] Wendling | Interpretation of interdependencies in epileptic signals using a macroscopic physiological model of the EEG[END_REF] [START_REF] Bartolomei | Defining epileptogenic networks: Contribution of SEEG and signal analysis[END_REF]. It is computed within the core part of AnyWave. The results can be visualized as a graph with weighted edges, directionality and delay, and can also be exported to a Matlab file for further connectivity analyses. The GraphCompare plugin quantifies the number and strength of links between selected contacts based on the h2 results, in order to compare, with statistical testing, the connectivity of a period of interest with that of a reference period [START_REF] Courtens | Graph Measures of Node Strength for Characterizing Preictal Synchrony in Partial Epilepsy[END_REF]. The results are represented as boxplots and histograms showing the total degree and strength of nodes and the degree distribution of the entire network. Statistical results are also exported automatically. Finally, graph representations of the edges showing significant changes can also be displayed.
We investigated 30 patients with 10 to 18 electrodes each, resulting in a total of 4590 contacts. The validation was performed by an expert clinician (SL). The measure was the concordance of each reconstructed contact with the real contact as visualized on the native CT. We then estimated the sensitivity and precision of GARDEL. Sensitivity was defined as the number of correctly detected contacts (true positives) divided by the total number of contacts of the patient, and precision as the number of correctly detected contacts divided by the total number of detections. Our segmentation tool has a mean sensitivity of 97% (first quartile 98%, median and third quartile 100%) and a mean precision of 95% (first quartile 92%, median and third quartile 100%). Only a small subset of contacts was missed (129 out of 4590) and few false detections were obtained (243). The decrease in performance is mostly due to oblique electrodes that are not clearly distinguishable in some CT scans, or to electrodes crossing each other. We also performed a multi-rater comparison to confirm our results. We chose 3 additional patients (more than 600 detections) whose segmentations were validated by 3 raters. We obtained 83.1% inter-rater agreement using Krippendorff's alpha. Discrepancies were due to oblique electrodes. Separating electrode types, we obtained full agreement among raters for orthogonal electrodes (477 contacts) and an alpha of 0.78 for oblique electrodes (130 contacts). The comparison with another tool (iElectrodes [START_REF] Blenkmann | iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization[END_REF]) showed a mean difference of 0.59 mm between the rounded voxel coordinates obtained by the two tools. This difference could be explained by the rounding effect of centroid computation at the voxel level (our CT images had a 0.42*0.42*0.63 mm voxel size). Therefore, our method appears to be efficient and can be used in clinical practice.
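For clarity, the detection scores defined above amount to the following computation (a minimal sketch with illustrative variable names):

% Per-patient detection scores (illustrative only).
% nContacts   : number of implanted contacts
% nMissed     : contacts not detected
% nFalse      : spurious detections
truePos     = nContacts - nMissed;
sensitivity = truePos / nContacts;           % recovered / implanted
precision   = truePos / (truePos + nFalse);  % recovered / detected

% Pooling the figures reported above (4590 contacts, 129 missed, 243 false
% detections) gives sensitivity ~0.97 and precision ~0.95, consistent with
% the per-patient means.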
It is automatic (except for naming the electrodes), contrary to previous approaches that used the planned electrode trajectories [START_REF] Arnulfo | Automatic segmentation of deep intracerebral electrodes in computed tomography scans[END_REF][START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF] or manual interaction [START_REF] Princich | Rapid and efficient localization of depth electrodes and cortical labeling using free and open source medical software in epilepsy surgery candidates[END_REF][START_REF] Qin | Automatic and precise localization and cortical labeling of subdural and depth intracranial electrodes[END_REF]). In the few cases where errors occur, the segmentation can easily be corrected manually (as explained in the methods section). Moreover, automatic contact labelling using an individualized atlas allows a faster and more robust interpretation of SEEG data in relation to the patient's anatomy. It is also possible to co-register the post-resection MRI with the electrode positions to identify whether electrode sites were resected. Such registration must, however, be performed with care for large resections, in which the brain tissue may have shifted (Supp. Fig. 1). Nevertheless, a limitation of our tool in its current version is that it requires the inter-contact distance to be the same across electrodes: to estimate the position of missing contacts, it takes the median distance between all contacts.
Validation of labelling from atlases The second step was to validate the concordance between the label obtained for each contact and its actual anatomical location. For this purpose, a senior neurosurgeon (RC) reviewed the data of 3 patients to check whether our tool assigned each contact to its correct label according to a given atlas (a different atlas per patient). The concordance results are the following: 534 out of 598 contacts were accurately labeled (89.3%), 28 were uncertain (4.7%), i.e. contacts difficult to label automatically because of their location at the junction between two areas or between grey and white matter, and 36 (6%) were wrong. These errors were mostly due to incorrect segmentation of the individual MRIs because of abnormalities/lesions, or in rare cases to a mismatch between the atlas label and the clinician's labeling. Note that the segmentation can be corrected within the FreeSurfer software. Results were concordant across patients (90%, 88% and 89%).
3Dviewer: representation of physiological data Signal processing is increasingly used for the analysis of SEEG [START_REF] Bartolomei | Defining epileptogenic networks: Contribution of SEEG and signal analysis[END_REF]. A major interest of our pipeline is the possibility to represent, on the patient's MRI scan, the data from advanced electrophysiological signal analyses. This is the goal of the 3Dviewer. Data can be represented on the 3D mesh of the patient or on MRI slices in the 3 spatial planes (Fig. 4). These two modes of representation are complementary for SEEG interpretation, making it possible not only to visualize the estimated epileptic abnormalities in 3D, but also to localize them precisely within brain structures and to provide potentially useful guidance for surgical planning on 2D MRI slices.
CONCLUSION In this study, we presented a suite of tools, called EpiTools, to be used for SEEG interpretation and related clinical research applications. The SEEG section of EpiTools is mainly based on 2 distinct parts.
The first part, GARDEL, is designed for automatic electrode segmentation and labelling. It is, to the best of our knowledge, the first software to perform automatic segmentation, electrode grouping and contact labelling within an individual atlas, requiring from the user only to name the electrodes and to correct the results if necessary. We validated the contact detection and obtained good results both for sensitivity and precision. The second part consists of the 3Dviewer, which displays, on the patient's MRI scan or on a 3D surface rendering, the results of signal processing at the contact locations. It creates an advanced link between individual anatomy and electrophysiological data analyses. In the future, we will present the application of EpiTools to non-invasive electrophysiological data such as EEG and MEG.
Fig. 1 Scheme of the EpiTools pipeline for intracerebral EEG. Firstly, FreeSurfer (1) can be run to obtain …
Fig. 2 Steps of electrode segmentation. a) steps of the clustering of contacts within electrodes (left …
Fig. 3 Results of the GARDEL tool. a) MRI co-registered on the CT image with one electrode and its …
Fig. 4 Overview of the 3Dviewer tool results. a) coronal, sagittal and axial planes of the patient MRI with …
Footnotes: AnyWave is available at meg.univ-amu.fr; Python at https://www.python.org/; the DICOM-to-NIfTI converter at http://fr.mathworks.com/matlabcentral/fileexchange/42997-dicom-to-nifti-converter--nifti-tool-and-viewer; the Tools for NIfTI and ANALYZE image toolbox at https://fr.mathworks.com/matlabcentral/fileexchange/8797-tools-for-nifti-and-analyze-image; SPM at http://www.fil.ion.ucl.ac.uk/spm/; FreeSurfer segmentation troubleshooting at https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/TroubleshootingData
Acknowledgements The authors wish to thank Olivier Coulon and Andre Brovelli for useful discussions on CT-MRI coregistration and MarsAtlas/Brainvisa use. We thank Dr Gilles Brun for helping with the writing of the methods about CT imaging. The calculation of Krippendorff's alpha for this paper was generated using the Real Statistics Resource Pack software (Release 5.4), Copyright (2013-2018) Charles Zaiontz, www.real-statistics.com. This work has been carried out within the FHU EPINEXT with the support of the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French Government program managed by the French National Research Agency (ANR). Part of this work was funded by a joint Agence Nationale de la Recherche (ANR) and Direction Générale de l'Offre de Santé (DGOS) grant "VIBRATIONS" ANR-13-PRTS-0011-01. Part of this work was funded by a TechSan grant from the Agence Nationale de la Recherche, "FORCE" ANR-13-TECS-0013. F Pizzo was funded by a "Bourse doctorale jeune médecin" from Aix Marseille Université (PhD program ICN).
All these results can be imported in the 3Dviewer to be represented in the patient's anatomy.
Validation methodology We validated the two aspects of our segmentation tool. Firstly, the validity of the segmentation of the SEEG contacts: detected centroids were superimposed on the CT images, and clinicians assessed whether each centroid lay entirely within the hyper-intense region corresponding to the contact in the CT image. We performed a multi-rater analysis in order to evaluate inter-rater agreement (using Krippendorff's alpha [START_REF] Krippendorff | Reliability in Content Analysis[END_REF]). Moreover, we compared our tool to another one (iElectrodes [START_REF] Blenkmann | iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization[END_REF]).
For 3 patients, we randomly picked 20 contacts detected by the two software applications and calculated the mean distance between their rounded voxel coordinates in the CT space. Secondly, the validation of the anatomical labels: the expert neurosurgeon verified whether the label assigned to each contact was concordant with the true location of the contact in the anatomy. A contact was labelled "good" if it was within the correct anatomical region, "uncertain" if it was at the boundary between 2 areas, and "wrong" if it was discordant with the region assigned.
RESULTS and DISCUSSION Validation of electrode segmentation Manual localization is time consuming: it takes on average 49 minutes for 91 contacts as reported in [START_REF] Blenkmann | iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization[END_REF], or 75 minutes for the segmentation of one implantation as reported in [START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF]. Our method takes on average 19 minutes of automated processing per implantation (including MRI-to-CT coregistration, brain segmentation and contact localization) on a machine with two 1.90 GHz 6-core processors and 16 GB of RAM (the automatic segmentation process is parallelized across the available cores), plus manual corrections if needed. Thus, our method saves significant time for users. Concerning the validation of our method, the first important step was to validate the localization of the SEEG contacts. Conflict of Interest: The authors declare that they have no conflict of interest. Authors' contributions
36,247
[ "1030473", "743844", "183354", "780307", "178940", "762248", "183494" ]
[ "1151256", "220661", "531926", "220661", "220661", "1151256", "220661", "220661", "220661", "1151256", "220661", "300073", "220661" ]
01762687
en
[ "shs" ]
2024/03/05 22:32:13
2010
https://insep.hal.science//hal-01762687/file/152-%20Effects%20of%20a%20trail%20running%20competition.pdf
Christopher Easthope Schmidt, Christophe Hausswirth, Julien Louis, Romuald Lepers, Fabrice Vercruyssen, Jeanick Brisswalter (email: [email protected]) Effects of a trail running competition … Keywords: Trail running, Ultra long distance, Master athlete, Eccentric contractions, Muscle damage, Efficiency
Introduction While the popularity of trail running events has increased over the past 5 years [START_REF] Hoffman | The Western States 100-Mile Endurance Run: participation and performance trends[END_REF], limited information is available concerning the physiological responses of the runner during this type of contest. Trails can be defined as ultra-long-distance runs lasting over 5 h which are performed in a mountain context, involving extensive vertical displacement (both uphill and downhill). One of the main performance-determining components of trail runs is exercise duration. In general, ultra-endurance exercises such as marathon running, road cycling, or Ironman triathlons are well known to impose a strenuous physical load on the organism, which leads to decreases in locomotion efficiency and concomitant substrate changes [START_REF] Brisswalter | Carbohydrate ingestion does not influence the charge in energy cost during a 2-h run in welltrained triathletes[END_REF][START_REF] Fernström | Reduced efficiency, but increased fat oxidation, in mitochondria from human skeletal muscle after 24-h ultraendurance exercise[END_REF], thermal stress coupled with dehydration [START_REF] Sharwood | Weight changes, medical complications, and performance during an Ironman triathlon[END_REF], oxidative stress [START_REF] Nieman | Vitamin E and immunity after the Kona triathlon world championship[END_REF][START_REF] Suzuki | Changes in markers of muscle damage, inflammation and HSP70 after an Ironman triathlon race[END_REF], and, specifically in running events, structural muscle damage [START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF][START_REF] Suzuki | Changes in markers of muscle damage, inflammation and HSP70 after an Ironman triathlon race[END_REF]. The second major characteristic of trail running events is the large proportion of eccentric work performed during the downhill segments of the race. Eccentric contractions involve force generation in a lengthening muscle and are known to produce severe structural damage in muscles, affecting their contractile and recuperative properties [START_REF] Nicol | The stretch-shortening cycle: a model to study naturally occurring neuromuscular fatigue[END_REF]. Several studies in the last decade have investigated the effects of long-distance runs performed on level courses. Collective results show an increased release of muscular enzymes into the plasma, a structural disruption of the sarcomere, a substantial impairment in maximal force generation capacity (Lepers et al.
2000a;[START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF][START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF][START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF][START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF], and a decrease in post-race locomotion efficiency [START_REF] Millet | Influence of ultra-long term fatigue on the oxygen cost of two types of locomotion[END_REF][START_REF] Millet | Running from Paris to Beijing: biomechanical and physiological consequences[END_REF], indicating that muscles are progressively damaged during the exercise. Specifically, maximal isometric knee extension force has been reported to decrease by 24% after a 30-km running race [START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF], by 28% after 5 h of treadmill running [START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF] and by 30% after a 65-km ultra-marathon [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF]. Recently, [START_REF] Millet | Running from Paris to Beijing: biomechanical and physiological consequences[END_REF] reported a 6.2% decrease in running efficiency 3 weeks after an 8,500-km run between Paris and Beijing performed in 161 days. [START_REF] Gauché | Vitamin and mineral supplementation and neuromuscular recovery after a running race[END_REF] reported that maximal voluntary force decreased by 37% at the end of a prolonged trail run. Repeated eccentric contractions may also affect locomotion efficiency, as demonstrated by [START_REF] Braun | The effects of a single bout of downhill running and ensuing delayed onset of muscle soreness on running economy performed 48 h later[END_REF], who observed a 3.2% decrease in running efficiency 48 h after a 30-min downhill run. In a similar vein, [START_REF] Moysi | Effects of eccentric exercise on cycling efficiency[END_REF] found a 6% decrease in cycling efficiency after 10 series of 25 repetitions of squats, an eccentric exercise. Repeated eccentric contractions, independent of their context, thus seem to induce a decrease in locomotion efficiency, even when efficiency is evaluated in concentrically dominated cycling [START_REF] Moysi | Effects of eccentric exercise on cycling efficiency[END_REF]. Based upon the reviewed literature, it was assumed that trail running races would accentuate muscle damage compared to level running, due to the large proportion of eccentric contractions occurring in the successive downhill segments of the courses, and would therefore lead to decreases in both muscular performance and locomotion efficiency. Few studies so far have analyzed the physiological aspects of trail running. The existing studies mainly focused on the origin of the decline in contraction capacity (e.g. [START_REF] Miles | Carbohydrate influences plasma interleukin-6 but not C-reactive protein or creatine kinase following a 32-km mountain trail race[END_REF][START_REF] Gauché | Vitamin and mineral supplementation and neuromuscular recovery after a running race[END_REF]) or on pacing strategies during the race [START_REF] Stearns | Influence of hydration status on pacing during trail running in the heat[END_REF].
To our knowledge, only limited data are available on the impact of this type of event on locomotion efficiency [START_REF] Millet | Influence of ultra-long term fatigue on the oxygen cost of two types of locomotion[END_REF]. A further characteristic of trail running competitions is the increasing participation of master athletes [START_REF] Hoffman | The Western States 100-Mile Endurance Run: participation and performance trends[END_REF]. Tanaka and Seals defined master athletes in their 2008 article as individuals who regularly participate in endurance training and who try to maintain their physical performance level despite the aging process. In a competition context, competitors are traditionally classified as master athletes when over 40 years of age, the age at which a first decline in peak endurance performance is observed [START_REF] Lepers | Age related changes in triathlon performances[END_REF][START_REF] Sultana | Effects of age and gender on Olympic triathlon performances[END_REF][START_REF] Tanaka | Endurance exercise performance in masters athletes: age-associated changes and underlying physiological mechanisms[END_REF]. The aging process induces a great number of structural and functional transformations, which lead to an overall decline in physical capacity (Thompson 2009). The current rise in average age in western countries has created the need to design strategies that increase functional capacity in older people and thereby improve their standard of living (e.g. [START_REF] Henwood | Short-term resistance training and the older adult: the effect of varied programmes for the enhancement of muscle strength and functional performance[END_REF]). Observing master athletes can give insight into age-induced changes in physiology and adaptability, thus enabling scientists to develop more precise and effective recuperation and mobilization programs. Recent studies have shown that master endurance athletes are able to maintain their performance despite exhibiting the structural changes in muscle performance and in maximal aerobic power which are classically associated with aging [START_REF] Lepers | Age related changes in triathlon performances[END_REF][START_REF] Tanaka | Endurance exercise performance in masters athletes: age-associated changes and underlying physiological mechanisms[END_REF][START_REF] Bieuzen | Age-related changes in neuromuscular function and performance following a high-intensity intermittent task in endurance-trained men[END_REF][START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF]. In this context, the first purpose of our study was to evaluate the muscle performance and efficiency of runners participating in a long-distance trail competition. Trail running was expected to induce substantial muscle damage and a decrease in locomotion efficiency. The second purpose was to compare changes in these parameters between young and master participants. We hypothesized that neuromuscular alterations following the competition would be greater in master athletes compared to young athletes.
Materials and methods Subjects Eleven young and fifteen master athletes, all well motivated, volunteered to participate in this study. The characteristics of the subjects are shown in Table 1. All subjects had to be free from present or past neuromuscular and metabolic conditions that could have affected the recorded parameters. The subjects had regular training experience in long-distance running prior to the study (8.4 ± 6.0 years for the young vs.
13.3 ± 7.8 years for the master runners), and had performed a training program of 72.1 ± 25.1 and 74.1 ± 23.6 km/week, respectively, for young and masters during the 3 months preceding the experiment. The local ethics committee (St Germain en Laye, France) reviewed and approved the study before its initiation, and all subjects gave their informed written consent before participation. Experimental procedure The study was divided into four phases: preliminary testing and familiarization, pre-testing, trail race intervention and post-testing (see Fig. 1). During the first phase, subjects were familiarized with the test scheme and location, and preliminary tests were performed. During the third phase, subjects performed a 55-km trail running race in a medium-altitude mountain context. During the second and fourth phases, muscle performance and efficiency were analyzed and blood samples were collected. All physiological parameters were recorded 1 day before (pre) and up to 3 days after the trail running race (post 1 h, 24, 48, and 72 h). Preliminary session During a preliminary session that took place 1 month before the experiment, 26 subjects (11 young and 15 masters) underwent an incremental cycling test at a self-selected cadence on an electromagnetically braked ergocycle (SRM, Schoberer Rad Messtechnik, Jülich, Welldorf, Germany). In accordance with the recommendations of the ethics committee and the French Medical Society, a cycle ergometer protocol was chosen to evaluate ventilatory parameters and efficiency, even though a running protocol would have been preferred. Considerations were based on the assumption that extremely fatigued subjects would have difficulty running on a treadmill and that this might lead to injuries. [START_REF] Moysi | Effects of eccentric exercise on cycling efficiency[END_REF] showed that eccentric muscle damage affects locomotion efficiency and ventilatory parameters in a cycling protocol in a similar way to a running protocol. The ergocycle allows subjects to maintain a constant power output independent of the selected cadence, by automatically adjusting torque to angular velocity. The test consisted of a 6-min warm-up at 100 W, followed by an incremental period in which the power output was increased by 30 W each minute until volitional exhaustion. During this incremental cycling exercise, oxygen uptake (VO2), minute ventilation (VE), and respiratory exchange ratio (RER) were continuously measured every 15 s using a telemetric system (Cosmed K4b2, Rome, Italy). The criteria used for the determination of VO2max were a plateau in VO2 despite an increase in power output, an RER above 1.1, and a heart rate (HR) above 90% of the predicted maximal HR [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF]. Maximal oxygen uptake (VO2max) was determined as the average of the three highest VO2 values recorded (58.8 ± 6.5 ml/min/kg for the young vs. 55.0 ± 5.8 ml/min/kg for the master athletes). The ventilatory threshold (VT) was determined according to the method described by Wasserman et al. (1973). The maximal aerobic power output (MAP) was the highest power output completed in 1 min (352.5 ± 41.1 W for the young vs. 347.6 ± 62.9 W for the master athletes). Race conditions The running event was a 55-km trail race involving a 6,000-m vertical displacement (3,000 m up and 3,000 m down). The starting point and finishing line were at 694-m altitude, and the highest point of the race was at 3,050 m.
Due to the competitive nature of the intervention, each subject was well motivated to perform maximally over the distance. From the initial group (11 young and 15 masters), only three subjects (one young and two master athletes) did not finish the course. Therefore, all data presented correspond to the finisher group (10 young and 13 master athletes). Physical activity after the race was controlled (walking activities were limited and massages were prohibited). Mean race times achieved by the subjects are shown in Table 1. Maximal isometric force and muscle properties Ten minutes after the sub-maximal cycling exercise, the maximal voluntary isometric force of the right knee extensor (KE) muscles was determined using an isometric ergometer chair (type: J. Sctnell, Selephon, Germany) connected to a strain gauge (type: Enertec, Schlumberger, Villacoublay, France). Subjects were comfortably seated and the strain gauge was securely connected to the right ankle. The angle of the right knee was fixed at 100° (0° = knee fully extended). Extraneous movement of the upper body was limited by two harnesses enveloping the chest and the abdomen. For each testing session, the subjects were asked to perform three 2-3 s maximal isometric contractions (0 rad/s) of the KE muscles. The subjects were verbally encouraged, and the three trials were separated by a 1-min rest period. The trial with the highest force value was selected as the maximal isometric voluntary contraction (MVC, in newtons). In addition to MVC, the M-wave of the vastus lateralis was recorded from a twitch evoked by electrical stimulation. Changes in neuromuscular properties were evaluated throughout all the testing sessions (Lepers et al. 2000b;[START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]). Electrical stimulation was applied to the femoral nerve of the dominant leg according to the methodology previously described by [START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]. The following parameters of the muscular twitch were obtained: (a) peak twitch (Pt), i.e. the highest value of twitch tension production, and (b) contraction time (Ct), i.e. the time from the onset of the mechanical response to Pt. EMG recordings During the MVC, electrical activity of the vastus lateralis (VL) muscle was monitored using bipolar surface electrodes (Blue sensor Q-00-S, Medicotest SARL, France). The pairs of pregelled Ag/AgCl electrodes (interelectrode distance = 20 mm; electrode area = 50 mm2) were applied along the fibers over the muscle belly, as recommended by SENIAM. A low skin impedance (<5 kΩ) was obtained by abrading and cleaning the area with an alcohol wipe. The impedance was subsequently measured with a multimeter (Isotech, IDM 93N). To minimize movement artifacts, the electrodes were secured with surgical tape and cloth wrap. A ground electrode was placed on a bony site over the right anterior superior iliac spine. To ensure that the electrodes were placed at precisely the same location for each testing session, the electrode location was marked on the skin with an indelible marker. EMG signals were pre-amplified (Mazet Electronique Model, Electronique du Mazet, Mazet Saint-Voy, France) close to the detection site (common-mode rejection ratio = 100 dB; input impedance = 10 GΩ; gain = 600; bandwidth frequency = 6-1,600 Hz). EMG data were sampled at 1,000 Hz and quantified using the root mean square (RMS).
Maximal RMS EMG of the VL muscle was set as the maximal 500-ms RMS value found over the 3-s MVC (i.e. 500-ms window width, 1-ms overlap) using the Origin 6.1 software. During the evoked stimulation performed before the MVC, the peak-to-peak amplitude (PPA) and peak-to-peak duration (PPD) of the M-wave were determined for the VL muscle. Amplitude was defined as the sum of the absolute values of the maximum and minimum points of the biphasic (one positive and one negative deflection) M-wave. Duration was defined as the time from the maximum to the minimum point of the biphasic M-wave. Blood markers of muscle damage For each evaluation session, 15 ml of blood was collected into vacutainer tubes via antecubital venipuncture. The pre-exercise sample was preceded by a 10-min rest period. Once the blood sample was taken, tubes were mixed by inversion and placed on ice for 30 s before centrifugation (10 min, 3,000 rpm, 4°C). The obtained plasma was then stored in multiple aliquots (Eppendorf type, 500 µl per sample) at -80°C until analyzed for the markers described below. All assays were performed in duplicate on first thaw. As markers of sarcolemma disruption, the activities of the muscle enzymes creatine kinase (CK) and lactate dehydrogenase (LDH) were measured spectrophotometrically in the blood plasma using commercially available reagents (Roche/Hitachi, Meylan, France). Locomotion efficiency Subjects were asked to perform a cycling control exercise (CTRL) at a self-selected cadence on the same ergocycle as used in the preliminary session. This cycling exercise involved 6 min at 100 W followed by 10 min at a relative power output corresponding to the ventilatory threshold. For each subject and each cycling session, metabolic data were continuously recorded to assess cycling efficiency. Efficiency can be expressed as a ratio between (external) power output and the ensuing energy expenditure (EE). Efficiency may, however, be calculated in a variety of ways [START_REF] Martin | Has Armstrong's cycle efficiency improved?[END_REF]. In this study, two types of efficiency calculation were employed: gross efficiency (GE) and delta efficiency (DE). GE is defined as work rate divided by energy expenditure and calculated using the following equation [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF]: Gross efficiency (%) = (work rate / energy expenditure) x 100. DE is considered by many to be the most valid estimate of muscular efficiency [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF][START_REF] Coyle | Cycling efficiency is related to the percentage of type I muscle fibers[END_REF]. DE calculations are based upon a series of work rates which are then subjected to linear regression analysis. In this study, work rates were calculated from the two intensity tiers completed in the test described at the beginning of this section. Delta efficiency (%) = (delta work rate / delta energy expenditure) x 100, i.e. the slope of the linear regression of work rate against energy expenditure. In order to obtain precise values for the work rate used in the efficiency calculations, power output was assessed from the set work rate and the true cadence as monitored by the SRM crank system. EE was obtained from the rate of oxygen uptake, using the equations developed by [START_REF] Brouwer | One simple formulate for calculating the heat expenditure and the quantities of carbohydrate and fat oxidized in metabolism of men and animals from gaseous exchange (oxygen intake and calorie acid output) and urine-N[END_REF].
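As an illustration, the two efficiency indices can be computed from the recorded power output and gas-exchange data as in the minimal Matlab sketch below. The variable names are illustrative, and the use of Brouwer's coefficients (3.869 and 1.195 kcal per litre of O2 and CO2, respectively) is an assumption about the exact form of the equation applied in the present analysis.

% Minimal sketch of the GE and DE calculations (illustrative only).
% P    : external power outputs (W) for the two intensity tiers
% VO2  : steady-state oxygen uptake (L/min), averaged over minutes 3-6
% VCO2 : carbon dioxide output (L/min); note VCO2 = RER .* VO2

EEkcal = 3.869 .* VO2 + 1.195 .* VCO2;   % Brouwer (1957), kcal/min
EE     = EEkcal * 4186 / 60;             % energy expenditure in J/s (W)

GE = 100 .* P ./ EE;                     % gross efficiency (%) at each work rate

c  = polyfit(EE, P, 1);                  % linear regression of work rate on EE
DE = 100 * c(1);                         % delta efficiency (%) = slope x 100

Because VCO2 equals RER times VO2, the Brouwer-based energy equivalent of oxygen automatically reflects the substrate mixture being oxidized.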
These equations take substrate utilization into account by calculating the energetic value of oxygen based on the RER value. To minimize a potential influence of the VO2 slow component, which might vary between subject groups, the mean EE during the 3rd to 6th minute was used in the calculations of GE and DE. Statistical analysis All data presented are mean ± SD (tables and figures). Each dependent variable was compared between the different testing conditions using a two-way ANOVA with repeated measures (group x period). Newman-Keuls post-hoc tests were applied to determine the between-mean differences when the analysis of variance revealed a significant main effect of period or a group x period interaction. For all statistical analyses, P < 0.05 was accepted as the level of significance.
Results Muscular performance In all evaluations, MVC values of master athletes were significantly lower than those of the young group (-21.8 ± 4.6%, P < 0.01). One hour after the intervention (post), maximal isometric strength values of the knee extensors decreased significantly compared with pre-race values, in proportions that did not differ significantly between young (-32%, P < 0.01) and master athletes (-40%, P < 0.01). MVC values for young subjects returned to baseline at post 24 h, at which time the MVC reduction in masters remained significant (-13.6%, P = 0.04). A significant decrease in EMG activity (RMS) during MVC of the vastus lateralis (VL) muscle was observed at 1 and 24 h post-exercise, without any differences between groups or periods. Compared with pre-race values, post-exercise MVC RMS values decreased in young by -40.2 ± 19% and in masters by -42 ± 19.2% (P < 0.01) (Fig. 2). Muscular twitch and M-wave properties Before the race, no significant effect of age was observed on peak twitch torque (Pt) or contraction time (Ct). One hour after the race, no effect was recorded on Ct or Pt in either group. At post 24 h, a slower contraction time (Ct) and a lower peak twitch torque (Pt) were recorded in both groups. Compared to pre-race values, Pt decreased by 18.2% (P = 0.04) in young and by 23.5% (P = 0.02) in master runners at post 24 h. These alterations in twitch properties returned close to pre-test values in young subjects, but remained significant at 48 h (P = 0.03) and 72 h (P = 0.04) in master subjects (Table 2). Before the race, no significant effect of age was observed on the PPA or PPD values of the M-wave for the VL muscle (Table 2). One hour after the race, a significant increase in PPD was observed in both groups. This increase remained significant in the master athlete group at post 24 h (P = 0.02), while the young group returned close to baseline values. Furthermore, in masters, PPA values decreased below pre-race values 48 h (P = 0.04) and 72 h (P = 0.02) after the race, while no effects were observed in young subjects. Blood markers of muscle damage Twenty-four hours (Post 24 h) after the race, the plasma activities of CK and LDH increased significantly in comparison to pre-race values, with a greater increase for master subjects (P = 0.04). CK and LDH values remained significantly elevated at post 48 h and post 72 h, without any difference between groups (Table 3). Locomotion efficiency and cycling cadence Gross efficiency (GE), delta efficiency (DE) and cadence values are presented in Table 4. No significant difference in GE, DE or cadence was observed between groups before the race.
After the race, results indicated a significant, non-group-specific decline in GE from post 24 to 72 h (GE mean decrease in young vs. masters, in % of pre-race values: -4.7 vs. -6.3%, respectively). In both groups, VE increased at post 24, 48 and 72 h in comparison to pre-test values (VE mean increase in young vs. masters, in % of pre-race values: +11.7 vs. +10.1%, respectively). No significant change in DE was observed in young subjects after the race. In contrast, a significant decrease in DE was recorded in master subjects (DE decrease in master athletes at post 24 h (P = 0.02), post 48 h (P = 0.01) and post 72 h (P = 0.03), in % of pre-race values: -10.6, -10.4, -11.5%, respectively). Post-race cadence was significantly higher in all post-race evaluations for young subjects compared with masters. Results indicate a significant increase in cycling cadence at post 24 h (+4.4%, P = 0.04), post 48 h (+10.6%, P = 0.03), and post 72 h (+17%, P < 0.01) for young athletes, and only at post 48 h (+3.9%, P = 0.04) and post 72 h (+10.8%, P = 0.03) for master athletes.
Discussion The objective of the present study was to investigate changes in muscular performance and locomotion efficiency induced in well-trained endurance runners by a trail running competition, and to compare these to literature data obtained on level courses. The participation of two different age groups of runners (young vs. masters) additionally allowed the study of the effect of age on fatigue generation and recuperation. The main results of our study indicate that: (1) post-run muscular performance and locomotion efficiency decline, while the concentrations of muscle damage indicating blood markers rise, regardless of age, and (2) differences between groups only become apparent in the recuperation phase (post 24-72 h). The running event analyzed in this study was a 55-km trail race featuring a 6,000-m vertical displacement (3,000 m up and 3,000 m down). The average race time was 06:45 ± 00:45. As stated, the main performance components of trail running are exercise duration and vertical displacement (uphill and downhill). From this perspective, trail running competitions induce an intensive physical work load on the organism. Considering the popularity of trail running and the abundance of competitions over the world, it appears important to precisely characterize the acute physiological reactions consecutive to such events. One of the most significant consequences of the race was a reduction in muscular performance. The recorded data show a significant decline in maximal force-generating capacities in young (-32%) and master athletes (-40%) 1 h post-race. The intervention seems to have decreased MVC to a slightly greater extent than: (i) prolonged level running, such as 5 h of treadmill running (-28%, [START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]) or a shorter 30-km trail race (-24%, [START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF]), or (ii) a race of longer duration but with less altitude change (-30%, [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF]). Adjustments for workload distribution, age, training status and methodological standardization would be required before a precise comparison between races is possible.
It is generally accepted, though, that the structural muscle damage leading to MVC loss is generated by the eccentric muscle contractions occurring in running [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF][START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF][START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF][START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]. As a logical consequence, interventions with a greater overall percentage of eccentric force production should induce a more pronounced MVC decline in comparison to level courses of the same duration, in which the eccentric component is less pronounced; the MVC data recorded in this study conform to this idea. Additional studies will need to clarify the relative contributions of duration versus altitude change to the decrease in muscular capacities following prolonged running exercises. After the race (post 24-72 h), MVC values progressively returned to their pre-race level. In addition, results indicate a significant decrease in VL muscle activity (i.e. RMS values) recorded during the MVC performed 1 h after the race, which persisted until 72 h after the race. Further parameters used to characterize muscular fatigue included muscular twitch and M-wave properties. Pt decreased significantly 24 h after the race, accompanied by a concomitant increase in Ct from 24 to 72 h after the race, albeit only in masters. The main explanation for these perturbations of contractile parameters could be an alteration of the excitation-contraction coupling process, attributable to several mechanisms including, but not limited to, reduced Ca 2+ release from the sarcoplasmic reticulum (Westerblad et al. 1991), a decrease in blood pH and a reduced rate or force of cross-bridge attachment [START_REF] Metzger | Effects of tension and stiffness due to reduced pH in mammalian fast-and slow-twitch skinned skeletal muscle fibres[END_REF]. An increase in Ct after the race could also indicate an impairment of type II muscle fibers (i.e. fast-twitch fibers) which may be compensated for by the more fatigue-resistant type I muscle fibers (i.e. slow-twitch fibers). Twitch muscle properties were unchanged at 1-h post-race, alterations appearing only 24 h after the race and later. This phenomenon might suggest that muscle fatigue was counterbalanced by potentiation mechanisms occurring immediately after the race [START_REF] Baudry | Postactivation potentiation influences differently the nonlinear summation of contractions in young and elderly adults[END_REF][START_REF] Shima | Mechanomyographic and electromyographic responses to stimulated and voluntary contractions in the dorsiflexors of young and old men[END_REF][START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF]. In contrast, M-wave PPD was significantly increased immediately after the race (post) and tended to return to basal values 24-72 h after the race; the master group still exhibited an increased M-wave PPD at 24 h post-race.
As previously described in the literature, these increases in M-wave parameters suggest an alteration in muscle excitability, probably generated by impairments in neuromuscular propagation due to an increase in sarcolemmal permeability to sodium, potassium and chloride [START_REF] Lepers | Advances in neuromuscular physiology of motor skills and muscle fatigue[END_REF]. These results support the assumption that muscle damage develops during trail running. The data recorded for the muscle damage indicating blood markers underscore this observation. A post-race increase in the plasma activity of muscle enzymes (CK and LDH), which persisted for several days after the race (Table 3), was recorded. Similarly, [START_REF] Suzuki | Changes in markers of muscle damage, inflammation and HSP70 after an Ironman triathlon race[END_REF] reported a significant increase in CK and LDH activities in the plasma soon after an Ironman triathlon, which remained elevated until 1 day after the race. Intracellular enzymes such as CK and LDH indicate that the muscle injury arises from myofibrillar disruption [START_REF] Clarkson | Muscle function after exercise-induced muscle damage and rapid adaptation[END_REF][START_REF] Noakes | Effect of exercise on serum enzyme activities in humans[END_REF], and are classically used to assess the loss of sarcolemmal integrity after strenuous exercises [START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF]. As neither CK nor LDH is considered a sufficient indicator on its own (Warren et al. 1999), the analysis was augmented by the acquisition of further physiological variables. As an important determinant of performance in endurance events, locomotion efficiency is classically monitored in athletes in order to evaluate the effects of particular training periods [START_REF] Santalla | Muscle efficiency improves over time in world-class cyclists[END_REF]. It has been reported that even small increments in cycling efficiency may lead to major improvements in endurance performance [START_REF] Moseley | The reliability of cycling efficiency[END_REF]. The efficiency of physical work is a measure of the body's effectiveness in converting chemical energy into mechanical energy. Efficiency was here calculated as described in the methods section, i.e. as the quotient of work rate and energy expenditure [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF]. A decrease in locomotion efficiency can therefore be interpreted as either a relative increase in energy expenditure or a relative decrease in work rate. Considering that work rate was fixed for all the tests, an increased energy expenditure remains the only viable option. Recorded values show a decline in GE in both groups of athletes after the race, which persisted until 72 h post-race. Although commonly employed, GE has been criticized for including, in its denominator, energy-delivery processes that do not contribute to the production of mechanical work.
Therefore, locomotion efficiency was also evaluated through the DE calculation, which many consider to be the most valid estimate of muscular efficiency [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF][START_REF] Coyle | Cycling efficiency is related to the percentage of type I muscle fibers[END_REF][START_REF] Mogensen | Cycling efficiency in humans is related to low UCP3 content and to type I fibres but not to mitochondrial efficiency[END_REF]. While GE values declined in both groups, DE values declined only in the masters group (post 24, 48 and 72 h), confirming an increased energy expenditure to maintain the same power output. This phenomenon is largely related to a decline in muscular performance. In order to produce the same locomotive work as in the pre-race condition, strategies such as an increase in the spatio-temporal recruitment of muscle fibers or an increase in cycling cadence, involving a concomitant increase in VE (Table 4), could be engaged. The results provide evidence of an alteration of cycling efficiency in both groups tested. The second aim of this study was to analyze age-related effects on muscular performance and cycling efficiency after the trail race by comparing the physiological variables recorded in young and master athletes. Race completion time did not significantly differ between groups (06:42 ± 00:51 vs. 06:51 ± 00:47, for young vs. masters, respectively). Despite the structural and functional alterations typically observed during the aging process, master athletes were able to produce the same level of performance as the young group. This observation confirms the possibility of limiting the age-related decline of physical performance through physical activity. The analysis of muscular performance in the two groups of athletes shows a classical decline in maximal force-generating capacity in masters (-21.8 ± 4.6%) compared with the young for all testing sessions performed before and after the race [START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF][START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF]. Results additionally indicate a similar decrease in MVC values at 1-h post-race in both age groups which, in the master subjects only, persisted until 24 h after the race, suggesting a slower recovery. Based on the results of [START_REF] Coggan | Histochemical and enzymatic characteristics of skeletal muscle in master athletes[END_REF], which were confirmed by Tarpenning et al. (2004), the age-induced decrease of MVC values in master athletes similar to our experimental population can be mainly explained by neural factors such as muscle recruitment and/or by specific tension. The twitch-based assessment of muscular function seems to confirm this hypothesis. This study is, to our knowledge, the first to present twitch and M-wave data for master athletes after a trail running competition. As previously described in studies on long-distance exercise-induced fatigue in young subjects (e.g. [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF]), Pt decreased and Ct increased 24 h after the race. The proportions were similar in both groups tested. The alterations in muscular properties persisted several days after the race in masters only, further supporting the idea of a slower muscle recovery in this group.
Master M-wave PPD values increased at 1 h post-race in proportions similar to those observed in the young group, and returned to pre-race values in all the following testing conditions. By contrast, master M-wave PPA values decreased significantly from 48 to 72 h after the race, while this decline was marginal in young athletes. Despite a similar training status in young and master athletes, the values of these parameters show a greater alteration in masters' muscular function (i.e. contractility and excitability) after the race, indicating a slower recovery of muscle strength. An assessment of VL muscle activity shows that this effect is not caused by an age-induced impairment of muscle activation, as MVC RMS values declined in similar proportions between groups after the race. As depicted in Table 3, CK and LDH activity in plasma increased in similar proportions after the race for both groups, indicating a similar level of muscular deterioration between groups following the trail competition. This is in line with the above-mentioned results, as the proportion of the competition-induced reduction of MVC was similar between groups. This might support the idea that regular endurance training reinforces active muscles, and therefore limits the structural and functional changes classically associated with aging [START_REF] Lexell | Human aging, muscle mass, and fiber type composition[END_REF]. Results of this study show an effect of aging on cycling efficiency before and after the running race. While GE declined in similar proportions in both groups after the race, DE declined only in masters 24, 48 and 72 h after the race (Table 4). The GE decline in both groups could be mainly related to increases in energy-delivery processes that do not contribute to mechanical work. Variations in these processes originate from modifications in cycling kinematics (e.g., cycling cadence) or muscular contraction patterns (e.g., recruitment of subsidiary muscles, increased antagonist co-activation) in fatigued muscles and must be considered when interpreting GE [START_REF] Braun | The effects of a single bout of downhill running and ensuing delayed onset of muscle soreness on running economy performed 48 h later[END_REF]. The decline of DE in masters could be strongly related to alterations in muscular performance, provoking an increase in muscle activity in cycling to produce the same external work. [START_REF] Gleeson | Effect of exercise-induced muscle damage on the blood lactate response to incremental exercise in humans[END_REF] suggested that an increase in type II fiber recruitment may occur when exercise is performed in a fatigued state. In addition, if force-generating capacity were compromised, more motor units would have to be activated to achieve the same submaximal force output, resulting in a concomitant increase in metabolic cost [START_REF] Braun | The effects of a single bout of downhill running and ensuing delayed onset of muscle soreness on running economy performed 48 h later[END_REF]. Such an effect could contribute to the significantly higher VE shown in the present study. The results demonstrate that master athletes reached a similar level of fatigue through the race when compared to young athletes, but recovered significantly more slowly. The hypothesis that master athletes achieve a higher level of fatigue through similar exertion can therefore no longer be supported.
This was surprising, as the input parameters of master athletes were considerably lower and therefore either a lower performance or a greater fatigue would be expected. Thus, it must be surmised that masters economize energy expenditure in some form over the length of the course, for example through adaptations in strategy or locomotor patterns. Conclusion The aim of this study was to assess physiological responses to an exhaustive trail running competition and to analyze possible differences between young and master athletes. A 55-km ultra-endurance event was used as a fatigue-generating intervention. An especially large amount of muscular fatigue was generated through the large proportion of eccentric contractions occurring during the downhill sections of the race. Results indicate an acute fatigue in all subjects (young and masters), which is mainly reflected by decreases in muscle performance. Race performance and the level of fatigue generated were similar between groups. The post-race time course of CK and of neuromuscular properties suggests slower recovery kinetics in the master subjects. The results of this study indicate that regular endurance training cannot halt the age-related decline in muscle performance, but that performance level can nonetheless be maintained through global or local strategy adaptations or through physiological adaptations that remain to be identified.
Tarpenning KM, Hamilton-Wessler M, Wiswell RA, Hawkins SA (2004)
Fig. 1; Fig. 2

Table 2  Twitch and M-wave parameters of the vastus lateralis muscle before (Pre), and 1 h (Post), 24 h (Post 24), 48 h (Post 48) and 72 h (Post 72) after the race. Mean (SD) values of 10 young and 13 master athletes are shown. Pt peak twitch, Ct contraction time, HRt half-relaxation time, PPA peak-to-peak amplitude, PPD peak-to-peak duration. * Significantly different from pre-exercise (P < 0.05). † Significantly different from masters (P < 0.05)

Variable    Group    Pre           Post          Post 24        Post 48        Post 72
Twitch
Pt (N)      Young    36 (9)        35 (11)       29 (11)*       34 (9)†        35 (12)†
            Master   36 (11)       34 (12)       27 (12)*       28 (8)*        29 (12)*
Ct (ms)     Young    63.3 (13.7)   63.4 (10.6)   68.8 (11.2)*   64.7 (9.5)†    66.9 (10.3)
            Master   61.3 (15.6)   64.9 (17.4)   71.1 (12.9)*   73.2 (10.8)*   76.2 (12.7)
M-wave
PPA (mV)    Young    3.5 (1.4)     3.6 (1.6)     3.9 (1.5)      3.4 (1.7)      3.0 (1.4)
            Master   3.4 (1.5)     3.1 (1.3)     3.1 (1.5)      2.4 (1.4)*     2.3 (0.7)*
PPD (ms)    Young    7.6 (1.5)     9.2 (1.2)*    7.0 (2.2)†     7.0 (2.5)      7.3 (2.8)
            Master   7.9 (1.5)     9.5 (2.5)*    9.3 (2.8)*     7.8 (2.7)      7.6 (3.3)

Table 3  Changes in muscle damage-indicating blood markers for young and master athletes before (Pre), 24 h (Post 24 h), 48 h (Post 48 h) and 72 h (Post 72 h) after the race. CK creatine kinase, LDH lactate dehydrogenase. * Significantly different from pre-exercise (P < 0.05). † Significantly different from masters (P < 0.05)

Variable    Group    Normal range   Pre         Post 24 h        Post 48 h     Post 72 h
CK (U/l)    Young    50-230         135 (26)    1,470 (565)*†    909 (303)*    430 (251)*
            Master   50-230         138 (107)   1,559 (593)*     920 (298)*    531 (271)*
LDH (U/l)   Young    120-245        229 (52)    528 (164)*       453 (65)*     410 (65)*
            Master   120-245        194 (63)    482 (142)*       468 (105)*    473 (165)*

Table 4  Changes in efficiency, ventilation and cycling cadence for young and masters during cycling exercises performed before (Pre), 24 h (Post 24), 48 h (Post 48) and 72 h (Post 72) after the race

Variable    Pre    Post 24    Post 48    Post 72
GE (%)
44,248
[ "1012603", "1028528", "901994", "752657", "1029443" ]
[ "84764", "441096", "84764", "193141", "303091", "84764" ]
01762706
en
[ "shs" ]
2024/03/05 22:32:13
2005
https://insep.hal.science//hal-01762706/file/154-%20Modification%20of%20cycling%20biomechanics.pdf
Anne Delextrat Véronique Tricot Thierry Bernard Fabrice Vercruyssen Christophe Hausswirth Jeanick Brisswalter Modification of Cycling Biomechanics During a Swim-to-Cycle Trial Keywords: pedal rate, resultant torque, asymmetry, neuromuscular fatigue Previous studies (e.g., Hausswirth, Smith, & Brisswalter; Vercruyssen; Elliot) have shown a significant decrease in stride length (SL) and a significant increase in stride rate (SR) during a 3,000-m run undertaken at constant velocity (5.17 m•s -1 ), while [START_REF] Brisswalter | Variability in energy cost and walking gait during race walking in competitive race walkers[END_REF] did not observe any variation in gait kinematics during a 3-hr walk at competition pace. The recent appearance of multisport activities such as triathlon (swimming-cycling-running) raises new questions relative to the influence of locomotion mode transitions on kinematic adaptation during each discipline. Triathlon events are characterized by a variety of distances ranging from Sprint (750-m swim, 20-km cycle, 5-km run) to long-distance events. From the shortest to the longest distances, the relative duration of the swimming, cycling, and running parts represents 18% to 10%, 52% to 56%, and 30% to 34%, respectively [START_REF] Dengel | Determinants of success during triathlon competition[END_REF] [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]. Researchers have found significant correlations between cycling and running duration and triathlon performance, whereas no such relationship has been reported with swimming time [START_REF] Dengel | Determinants of success during triathlon competition[END_REF][START_REF] Schabort | Prediction of triathlon race time from laboratory testing in national triathletes[END_REF]. Consequently, many studies involved in performance optimization have focused on the adaptation of the locomotor pattern after the cycle-to-run transition [START_REF] Hausswirth | Relationships between running mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF][START_REF] Millet | Duration and seriousness of running mechanics alterations after maximal cycling in triathletes[END_REF][START_REF] Quigley | The effects of cycling on running mechanics[END_REF]. One of the most studied parameters in the literature concerns stride characteristics (i.e., SL and SR). The influence of a prior cycling task on these variables during running is not clearly established. [START_REF] Quigley | The effects of cycling on running mechanics[END_REF] found no significant effect of a prior 30-min cycling bout on SL and SR measured during running. In contrast with these results, [START_REF] Hausswirth | Relationships between running mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF] observed a significant decrease in SL (7%) at the start of the running bout of a simulated triathlon (30-min swimming, 60-min cycling, 45-min running) compared with an isolated 45-min run. Moreover, it was reported that decreasing the cycling metabolic load by modifying the geometry of the bicycle frame for a 40-km trial involved significantly higher SL (12%) and SR (2%) during the first 5 km of the subsequent running bout. These increases in stride characteristics resulted in a faster mean running speed [START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF].
It is often suggested that these modifications of running kinematics account for the increase in running energy demand classically observed during prolonged exercises. Indeed, significantly higher energy costs have been reported when SL was either increased or decreased from the freely chosen SL (e.g., [START_REF] Cavanagh | The effect of stride length variation on oxygen uptake during distance running[END_REF]). The same phenomenon is also observed during cycling, whereby a cadence associated with a minimization of energy demand (energetically optimal cadence) could be identified around 75 rev•min -1 [START_REF] Vercruyssen | Effect of exercise duration on optimal pedaling rate choice in triathletes[END_REF]. However, to our knowledge the variations in cycling kinematics and energy expenditure have never been examined in the context of a swim-to-cycle trial. Within this framework, the main purpose of the present study was to investigate the influence of a prior 750-m swim on cycling kinematics. A secondary purpose was to relate these alterations to the metabolic demand of cycling. Methods Eight well-trained and motivated male triathletes (age 27 ± 6 yrs; height 182 ± 8 cm; weight 72 ± 7 kg; body fat 12 ± 3%) participated in this study. They had been competing in the triathlon at the interregional or national level for at least 3 years. Mean ± SD training distances per week were 5.6 ± 2.3 km in swimming, 65 ± 33 km in cycling, and 32 ± 16 km in running, which represented 131 ± 54 min, 150 ± 76 min, and 148 ± 79 min for these three disciplines, respectively. This training program was mostly composed of technical workouts and interval training in swimming, aerobic capacity work (outdoor) and interval training (cycle ergometer) in cycling, and fartlek and interval training in running. It included only one cross-training session (cycle-to-run) per week. These training volumes and intensities are relatively low when compared to the training load usually experienced by triathletes of this level, partly because the experiment was undertaken in winter, when triathletes decrease their training load in all three disciplines. The participants were all familiarized with laboratory testing. They were fully informed of the procedures of the experiment and gave written informed consent prior to testing. The project was approved by the local ethics committee for the protection of individuals (Saint-Germain-en-Laye, France). On their first visit to the laboratory the triathletes underwent two tests. The first test was to determine their leg dominance, in which the 8 participants were classified by kicking dominance according to the method described by [START_REF] Daly | Asymmetry in bicycle ergometer pedaling[END_REF]. The second test was a laboratory incremental test on a cycle ergometer to determine maximal oxygen uptake (VO 2 max) and maximal aerobic power (MAP). After a 6-min warm-up at 150 W, power output was increased by 25 W every 2 minutes until volitional exhaustion. The criteria used for determining VO 2 max were: a plateau in VO 2 despite the increase in power output, a heart rate (HR) over 90% of the predicted maximal heart rate (220 - age in years ± 10), and a respiratory exchange ratio (RER) over 1.15 [START_REF] Howley | Criteria for maximal oxygen uptake: Review and commentary[END_REF]. On their subsequent visits to the laboratory the triathletes underwent 4 submaximal sessions separated by at least 48 hours (Figure 1).
The first session was always a 750-m swim performed alone at a sprint triathlon competition pace (SA trial). It was used to determine the swimming intensity for each participant. The 3 other sessions, presented in counterbalanced order, comprised 2 swim-to-cycle trials and one isolated cycling trial. The cycling test was a 10-min ride on the bicycle ergometer at a power output corresponding to 75% of MAP and at FCC. During the isolated cycling trial (CTRL trial) this test was preceded by a warm-up on the cycle ergometer at a power output corresponding to 30% of MAP for the same duration as the SA swim. During the swim-to-cycle transitions, the cycling test was preceded either by a 750-m swim performed alone at the pace adopted during SA (SCA trial), or by a 750-m swim at the same pace in a drafting position (i.e., swimming directly behind a competitor in the same lane, SCD trial). The same lead swimmer was used for all participants. He was a highly trained triathlete competing at the international level. In order to reproduce the swimming pace adopted during the SA trial, the triathletes were informed of their performance every 50 m via visual feedback. The swim tests took place in the outdoor Olympic swimming pool of Hyères (Var, France); the participants wore a neoprene wet suit (integral wet suit Aqua-® , Pulsar 2000; thickness: shoulders 1.5 mm, trunk 4.5 mm, legs 1.5 mm, arms 1.5 mm). The cycling tests were conducted near the swimming pool in order to standardize the duration of the swim-to-cycle transition (3 min). The intensity of 75% of MAP was chosen because it was close to the pace adopted by triathletes of the same level during sprint triathlon competitions [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] and close to the intensity used in a recent study on the cycle-to-run transition in trained triathletes (e.g., [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF]). All triathletes rode an electromagnetically braked cycle ergometer (SRM Jülich, Welldorf, Germany) equipped with their own pedals and adjusted according to their anthropometrical characteristics. This system can maintain a constant power output independent of the pedal rate spontaneously adopted by the participants. Power output is continuously calculated as shown in Equation 1: Power (W) = Torque (Nm) × Angular velocity (rad•s -1 ) (1). The torque generated at the crank axle is measured by 20 strain gauges situated between the crank arms and the chain-rings. The deformation of the strain gauges is proportional to the resultant force acting tangentially on the crank (i.e., effective pedaling torque). Pedal rate and torque are inductively transmitted to the power control unit with a sampling frequency of 500 Hz and data are averaged every 5 seconds and recorded by the power control unit. The SRM system was calibrated prior to each trial. It has been shown to provide a valid and reliable measure of power output when compared with a Monark cycle ergometer [START_REF] Jones | The dynamic calibration of bicycle power measuring cranks[END_REF]. From these data, several parameters were calculated for each pedal revolution.
An average value of these parameters was then computed during the last 30 s of each minute: the mean value of the resultant torque exerted during the downstroke of the dominant leg (MTD, in Nm) and during the downstroke of the nondominant leg (MTND, in Nm); the maximal (peak) value of the resultant torque exerted during the downstroke of the dominant leg (PTD, in Nm) and during the downstroke of the nondominant leg (PTND, in Nm); and the crank angle corresponding to PTD (AD, in degrees) and to PTND (AND, in degrees). Crank angle was referenced to 0° at top dead center (TDC) of the right crank arm and to 180° at the TDC of the left crank arm (thus the right leg downstroke was from 0° to 180° and the left leg downstroke was from 180° to 360°). Then the crank angle for the left leg downstroke was expressed relative to the TDC of the left crank arm (i.e., 180° was subtracted from the value obtained). In addition to the biomechanical analysis, the participants were asked to report their perceived exertion (RPE) immediately after each trial using the 15-graded Borg scale from 6 to 20 [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]. Moreover, physiological effort during swimming and cycling was assessed from heart rate (HR), oxygen uptake (VO 2 ), and lactate values. During the cycling trials, VO 2 was recorded by the Cosmed K4b 2 telemetric system (Rome, Italy) recently validated by [START_REF] Mclaughlin | Validation of the Cosmed K4 b 2 portable metabolic system[END_REF], and HR was continuously monitored during swimming and cycling using a cardiofrequency meter (Polar Vantage, Tampere, Finland). Blood lactate concentration (LA, mmol•L -1 ) was measured by the Lactate Pro™ LT-1710 portable lactate analyzer (Arkray, KDK, Kyoto, Japan) from capillary blood samples collected from the participants' earlobes immediately after swimming (L1) and at the 3rd and 10th min of cycling (L2, L3). From these data, cycling gross efficiency (GE, %) was calculated as the ratio of work accomplished per minute (kJ•min -1 ) to metabolic energy expended per minute (kJ•min -1 ) [START_REF] Chavarren | Cycling efficiency and pedalling frequency in road cyclists[END_REF]. Since the relative intensity of the cycling bouts could be above the ventilatory threshold (VT), the aerobic contribution to metabolic energy was calculated from the energy equivalents for oxygen (according to the respiratory exchange ratio value) and a possible anaerobic contribution was estimated using the blood lactate increase with time (lactate: 63 J•kg -1 •mM -1 , di Prampero 1981). For this calculation, the increases in VO 2 and lactate were estimated from the difference between the 10th and 3rd minutes. All the measured variables were expressed as mean and standard deviation (M ± SD). Differences in biomechanical and physiological parameters between the three conditions (CTRL vs. SCA; CTRL vs. SCD; SCA vs. SCD trials) as well as differences between the values recorded during the downstroke of the dominant and the downstroke of the nondominant leg (MTD vs. MTND; PTD vs. PTND; AD vs. AND) were analyzed using a Wilcoxon test. The level of confidence was set at p < 0.05. Results The test to determine leg dominance showed that among the 8 participants, 6 were left-leg dominant and only 2 were right-leg dominant.
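To make the efficiency computation described in the Methods above concrete, here is a minimal sketch of how GE could be derived from an aerobic term (VO2) and an estimated anaerobic term (blood lactate accumulation). The numeric inputs are purely illustrative and the oxygen energy equivalent of 20.9 kJ per litre is an assumed round value; only the 63 J•kg-1•mM-1 lactate equivalent comes from the di Prampero value cited above:

```python
# Sketch of the gross-efficiency calculation with an aerobic term (from VO2)
# and an estimated anaerobic term (from the rise in blood lactate).
# All numeric inputs below are hypothetical examples, not study data.

def metabolic_power_kj_per_min(vo2_l_per_min, o2_equivalent_kj_per_l,
                               lactate_rise_mmol_per_l, minutes, body_mass_kg):
    aerobic = vo2_l_per_min * o2_equivalent_kj_per_l
    # 63 J per kg of body mass per mM of lactate accumulated (di Prampero 1981)
    anaerobic_total_kj = 0.063 * body_mass_kg * lactate_rise_mmol_per_l
    return aerobic + anaerobic_total_kj / minutes

def gross_efficiency(power_w, metabolic_kj_per_min):
    work_kj_per_min = power_w * 60 / 1000
    return 100 * work_kj_per_min / metabolic_kj_per_min

meta = metabolic_power_kj_per_min(vo2_l_per_min=3.9, o2_equivalent_kj_per_l=20.9,
                                  lactate_rise_mmol_per_l=0.3, minutes=7,
                                  body_mass_kg=72)
print(round(gross_efficiency(259, meta), 1))  # roughly 19 %
```

With the small lactate rises reported between the 3rd and 10th minute of cycling (Table 2), the anaerobic term remains minor compared with the aerobic one.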
During the maximal test, the observation of a plateau in VO 2 (mean value for VO 2 max: 68.6 ± 7.5 ml•min -1 •kg -1 ) and HRmax and RERmax values (respectively, 191 ± 8 beats•min -1 and 1.06 ± 0.05) showed that 2 out of 3 criteria for attainment of VO 2 max were met [START_REF] Howley | Criteria for maximal oxygen uptake: Review and commentary[END_REF]. MAP values (342 ± 41 W) were close to those previously obtained for triathletes of the same level [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF]. The cycling exercises were performed at a mean power output of 259 ± 30 W, which was 75% of MAP. There was no significant difference in performance between the two swimming trials (respectively for SCD and SCA: 653 ± 43 s and 654 ± 43 s, p > 0.05). The mean velocity of the two 750-m swims was therefore 1.15 m•s -1 . In contrast, HR values recorded during the last 5 min of swimming and blood lactate concentration measured immediately after swimming were significantly lower in the SCD trial than in the SCA trial (mean HR values of 160 ± 16 vs. 171 ± 18 beats•min -1 , and blood lactate concentrations of 5.7 ± 1.8 vs. 8.0 ± 2.1 mmol•L -1 , respectively, for SCD and SCA trials, p < 0.05). Furthermore, RPE values recorded immediately after swimming indicated that the participants' perception of effort was significantly lower in the SCD trial than in the SCA trial (13 ± 2 vs. 15 ± 1, corresponding to "rather laborious" vs. "laborious," respectively, for SCD and SCA trials, p < 0.05). The biomechanical and physiological parameters measured during cycling in CTRL, SCD, and SCA trials are listed in Tables 1 and 2. No significant difference between SCD and CTRL trials was observed for all biomechanical parameters measured. Moreover, in spite of significantly higher blood lactate levels during cycling in the SCD compared with the CTRL trial (Table 2, p < 0.05), the GE and VO 2 values recorded during these two trials were not significantly different (Table 2, p > 0.05). Table 1 shows that several biomechanical parameters recorded during cycling were significantly different in the SCA trial compared to the SCD trial. The participants adopted a significantly lower pedal rate after the swimming bout performed in a drafting position than after the swimming bout performed alone (Table 1, p < 0.05). Consequently, the mean resultant torque measured at the crank axle was significantly higher in SCD than in SCA (MTD and MTND, Table 1, p < 0.05). A higher resultant peak torque was also observed during the SCD trial compared with the SCA trial. However, this difference was significant only during the downstroke of the nondominant leg (PTND, Table 1, p < 0.05). Moreover, MTD was significantly higher when compared with MTND in all trials (CTRL, SCD, and SCA, Table 1, p < 0.05), suggesting an asymmetry between the mean torques exerted by the dominant leg and the nondominant leg. A significant difference was also shown between PTD and PTND during the SCA and SCD trials only (Table 1, p < 0.05). The results concerning physiological parameters demonstrate that swimming in a drafting position induced significantly lower VO 2 and blood lactate values during subsequent cycling compared with a prior swimming bout performed alone (Table 2, p < 0.05). Therefore, cycling gross efficiency was significantly higher in the SCD trial than in the SCA trial (Table 2, p < 0.05).
Finally, the SCA trial was also characterized by a significantly lower RMT and significantly higher lactate values measured during cycling compared with the CTRL trial (Tables 1 and2, p < 0.05). Discussion The main results of this study indicated that the lower metabolic load when swimming in a drafting position resulted in a modification of the locomotor pattern adopted in cycling and a better efficiency of cycling after the swim-to-cycle transition. Indeed, a significantly lower pedal rate and significantly higher mean and peak resultant torques were observed in the SCD trial compared with the SCA trial. The observation of a decrease in movement frequency following prior swimming in a drafting position is in accordance with results previously observed during the cycle-to-run transition. Indeed, [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] compared the influence of two drafting strategies during the 20-km cycling stage of a sprint triathlon on subsequent 5-km running performance, i.e., drafting continuously behind a leader or alternating drafting and leading every 500 m at the same pace. They showed that the continuous drafting strategy during cycling involved a significant decrease in SR (6.6%) during the first km of running when compared to the alternate drafting strategy. More recently, [START_REF] Bernard | Effect of cycling cadence on subsequent 3 km running performance in well trained triathletes[END_REF] observed that the lower metabolic load associated with the adoption of a pedal rate of 60 vs. 100 rev•min -1 during a 20-min cycling bout at an average power output of 276 W resulted in a significant decrease in SR during the first 500-m of a subsequent 3,000-m run undertaken at competition pace (1.48 ± 0.03 Hz vs. 1.51 ± 0.05 Hz, respectively, for the 60 and 100 rev•min -1 conditions). The decrease in pedal rate measured in the present study (5.8%) is comparable to these previous results. Furthermore, [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] reported that the lower running SR following the continuous cycling drafting strategy was accompanied by a significant improvement in running speed (4.2%). This suggests that the decrease in pedal rate observed in the present study may lead to performance improvement during the cycling bout of the SCD trial compared with the SCA trial. Since these cycling bouts were undertaken at the same power output, a better performance in this case could be achieved through a decrease in energy expenditure. The main hypothesis proposed in the literature to account for the modifications of SR or pedal rate during multisport activities is relative to the relationship between the movement frequencies of successive exercises. [START_REF] Bernard | Effect of cycling cadence on subsequent 3 km running performance in well trained triathletes[END_REF] suggested that the decrease in SR observed in their study at the start of the 3,000-m run in the 60 vs. the 100 rev•min -1 condition was directly related to the lower pedal rate adopted during prior cycling. In the present study, the decrease in pedal rate observed in the SCD trial compared with the SCA trial could in part be accounted for by the locomotor pattern adopted during prior swimming. Indeed, significant decreases in arm stroke frequency have been reported when swimming in a drafting position vs. 
swimming alone, from 2.5% to 5.6% [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chatard | Drafting distance in swimming[END_REF]. If the participants adopt the same leg kick pattern in the drafting and isolated conditions (i.e., 2-or 6-beat per arm stroke for triathletes), a lower absolute frequency of leg kicks could therefore be expected in the SCD trial compared with the SCA trial. This lower kick rhythm could be partly responsible for the significant decrease in pedal rate at the onset of cycling in SCD vs. SCA. The lower pedal rate observed in the present study was associated with a significantly higher cycling gross efficiency in the SCD trial compared to the SCA trial. The energy expenditure occurring during prolonged exercises, considered one of the main determinants of successful performance [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] O'toole | Applied physiology of a triathlon[END_REF], is commonly related to the modifications in the biomechanical aspects of locomotion [START_REF] Hausswirth | Relationships between running mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF]. Among the biomechanical parameters often suggested to account for the increase in energy expenditure during cycling events, the pedal rate spontaneously adopted by the athletes is one of the most frequently cited in the literature [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] Vercruyssen | Effect of exercise duration on optimal pedaling rate choice in triathletes[END_REF]. Several studies have shown a curvilinear relationship between VO 2 or energy cost and pedal rate during short cycling trials (maximal duration of 30 min) performed by triathletes. This relationship allowed them to determine an energetically optimal cadence (EOC), defined as the pedal rate associated with the lower VO 2 or energy cost value, ranging from 72.5 to 86 rev•min -1 among the studies [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] Vercruyssen | Effect of exercise duration on optimal pedaling rate choice in triathletes[END_REF][START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF]. Interestingly, the value of EOC seems to depend on cycling intensity. Indeed, a linear increase in EOC has been reported in trained cyclists when power output was raised from 100 to 300 W [START_REF] Coast | Linear increase in optimal pedal rate with increased power output in cycle ergometry[END_REF]. In the present study, the pedal rate adopted by the triathletes in the SDC trial (85 rev•min -1 ) was in the range of the EOC identified in previous studies. This could help explain the lower energy expenditure observed in this trial compared with the SCA trial. When the pedal rate is increased from the EOC, as is the case in the SCA trial, a higher metabolic load is classically reported. 
This elevation in energy expenditure could be related either to an increase in internal work occurring during repetitive limb movements [START_REF] Francescato | Oxygen cost of internal work during cycling[END_REF], or to a higher cost of ventilation [START_REF] Coast | Ventilatory work and oxygen consumption during exercise and hyperventilation[END_REF]. The adoption of a particular pedal rate by cyclists or triathletes during competition could be related to several criteria. The parameters most often cited in the literature are a minimization of neuromuscular fatigue, a decrease in the force applied on the cranks, and a reduction of stress in the muscles of the lower limbs [START_REF] Neptune | A theoretical analysis of preferred pedaling rate selection in endurance cycling[END_REF][START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF][START_REF] Takaishi | Optimal pedaling rate estimated from neuromuscular fatigue for cyclists[END_REF]. Within this framework, pedaling at a higher rate is theoretically associated with a lowering of the pedaling force produced by the muscles of the lower limbs to maintain a given power output, delaying the onset of local neuromuscular fatigue [START_REF] Takaishi | Optimal pedaling rate estimated from neuromuscular fatigue for cyclists[END_REF]. This relationship has been reported in several studies, showing that the peak forces applied on the pedals were significantly lower with the elevation of pedal rate at constant power output [START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF][START_REF] Sanderson | The influence of cadence and power output on the biomechanics of force application during steady-rate cycling in competitive and recreational cyclists[END_REF]. But [START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF] also showed that the average resultant pedal force produced during short cycling trials varied with the pedal rate adopted by the participants, reaching minimum values at 90 rev•min -1 when power output was 100 W and 100 rev•min -1 for a higher power output (200 W). Moreover, [START_REF] Neptune | A theoretical analysis of preferred pedaling rate selection in endurance cycling[END_REF] have reported that during a 5-min ride at 265 W, the minimum values of 9 indices representative of muscle activation, force, stress, and endurance for 14 muscles of the lower limbs were obtained when the participants adopted a pedal rate of 90 vs. 75 and 105 rev•min -1 . They referred to this cadence (90 rev•min -1 ) as the theoretical mechanical optimal cadence. In the SCA trial of the present study, the pedal rate adopted by the triathletes (91.1 rev•min -1 ) corresponds to the mechanical optimal cadence. It could therefore be suggested that following the high intensity swim, involving a possible decrease in leg muscular capacity, the triathletes were more fatigued and intrinsically adopted a pedal rate close to mechanical optimal cadence in order to minimize neuromuscular fatigue. Conversely, following the swim at a lower relative intensity (SDC trial), they were less fatigued and therefore spontaneously chose a lower pedal rate associated with higher torques. 
However, this hypothesis must be considered with caution because the decreases in resultant torques observed with increasing cadence in the present study are smaller than the decreases in peak forces reported in previous studies for comparable power outputs (4.2% for this study vs. 13% for [START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF]. The variation of the torques exerted during cycling in the 3 trials of this study shows an asymmetry between the dominant leg and the nondominant leg. Indeed, the MTD were significantly higher compared to the MTND in all trials and the PTD were significantly higher than the PTND in the SCD and SCA trials. The observation of significantly higher torques or forces exerted by the dominant leg compared to the other leg during cycling has been classically reported in the literature [START_REF] Daly | Asymmetry in bicycle ergometer pedaling[END_REF][START_REF] Sargeant | Forces applied to cranks of a bicycle ergometer during one-and two-leg cycling[END_REF]. This asymmetry seems to become more important with fatigue. Within this framework, [START_REF] Mccartney | A constant-velocity cycle ergometer for the study of dynamic muscle function[END_REF] found that the difference in maximal peak torque production between legs during a short-term all-out ride increased with the duration of exercise, reaching more than 15% for some participants at the end of exercise. The results of the present study are in accordance with these findings. Indeed, the differences in torque between the dominant and nondominant legs were more important in the trials preceded by a swimming bout compared with the CTRL trial (5.0% and 5.6% vs. 4.0%, respectively, in SCD, SCA, and CTRL for mean torques; 5.8% and 5.5% vs. 3.6%, respectively, in SCD, SCA, and CTRL for peak torques). In addition, the difference between PTD and PTND in the CTRL trial was not statistically significant. Finally, although the PTND was significantly higher in the SCD trial compared to the SCA trial, no significant difference between these trials was observed for PTD. This suggests that the dominant leg might be less sensitive to fatigue and/or cadence manipulation compared to the weakest leg. In the context of the swim-to-cycle trial of a sprint triathlon, asymmetry might only have a small influence on performance for two main reasons. First, the resultant torques vary to a small extent between the dominant and nondominant legs. Second, the duration of the cycling leg of sprint triathlons is usually quite short (approx. 29 min 22 sec in the study by [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF], and therefore fatigue is less likely to occur than in long-distance events. In conclusion, this study shows that the conditions of a 750-m swimming bout could affect biomechanical adaptation during subsequent cycling. In particular, when prior swimming was performed in a drafting position, the triathletes adopted a significantly lower pedal rate associated with higher mean and peak resultant torques recorded at the crank axle. These modifications could be partly explained by the delayed appearance of fatigue in the cycling bout of the SCD trial compared with the SCA trial. Needed are further studies that include measurements of the force applied on the pedals so we can more precisely examine the influence of prior swimming on the biomechanics of force application during cycling at a constant power output. 
Figure 1 - Experimental protocol. L: blood sampling; K4b2: installation of the Cosmed K4b2 analyzer.

Table 1  Mean Values of Biomechanical Parameters Recorded During Cycling in CTRL, SCD, and SCA Trials

Variable                   CTRL           SCD            SCA
Pedal rate (rev•min-1)     85.1 ± 8.1     85.0 ± 6.2     90.2 ± 8.8 b
MTD (Nm)                   31.2 ± 3.5     31.3 ± 3.2     30.2 ± 3.5 b
MTND (Nm)                  30.0 ± 3.6     29.8 ± 3.4     28.6 ± 3.5 a,b,c
PTD (Nm)                   46.2 ± 5.2     47.4 ± 5.1     46.0 ± 6.2
PTND (Nm)                  44.6 ± 7.7     44.8 ± 6.6     43.6 ± 6.9 b,c
AD (degrees)               80.3 ± 10.7    83.3 ± 6.5     81.8 ± 8.3
AND (degrees)              79.3 ± 13.6    82.4 ± 11.5    82.7 ± 16.7

Note: MTD = mean resultant torque exerted during downstroke of dominant leg; MTND = mean resultant torque exerted during downstroke of nondominant leg; PTD = peak resultant torque exerted during downstroke of dominant leg; PTND = peak resultant torque exerted during downstroke of nondominant leg; AD = crank angle corresponding to PTD; AND = crank angle corresponding to PTND. Significantly different, p < .05: a from CTRL trial; b from SCD trial; c from dominant side.

Table 2  Mean Values of Physiological Parameters Recorded During Cycling in CTRL, SCD, and SCA Trials

Variable                   CTRL           SCD            SCA
VO2 (L•min-1)              3.90 ± 0.50    3.83 ± 0.53    4.03 ± 0.54 b
LA (mmol•L-1), 3 min       4.5 ± 0.8      6.6 ± 1.9      7.9 ± 1.9 a,b
LA (mmol•L-1), 10 min      4.8 ± 2.2      6.8 ± 2.4      8.2 ± 2.7 a,b
GE (%)                     19.1 ± 1.5     19.5 ± 1.6     18.5 ± 0.6 b

Note: VO2 = oxygen uptake; LA = blood lactate concentration; GE = gross efficiency. Significantly different, p < .05: a from CTRL trial; b from SCD trial.

Acknowledgments The authors are grateful to all the triathletes who took part in the experiment and acknowledge their wholehearted cooperation and motivation.
32,606
[ "19845", "752657", "1012603" ]
[ "303091", "303091", "303091", "303091", "134564", "303091" ]
01762716
en
[ "info" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01762716/file/FGCS2018_CoMe4ACloud_AuthorsVersion.pdf
Zakarea Al-Shara email: [email protected] Frederico Alvares email: [email protected] Hugo Bruneliere email: [email protected] Jonathan Lejeune email: [email protected] Charles Prud'homme email: [email protected] Thomas Ledoux email: [email protected] CoMe4ACloud: An End-to-end Framework for Autonomic Cloud Systems Keywords: Cloud Computing, Autonomic Computing, Model Driven Engineering, Constraint Programming Autonomic Computing has largely contributed to the development of self-manageable Cloud services. It notably allows freeing Cloud administrators of the burden of manually managing varying-demand services, while still enforcing Service-Level Agreements (SLAs). All Cloud artifacts, regardless of the layer carrying them, share many common characteristics. Thus, it should be possible to specify, (re)configure and monitor any XaaS (Anything-as-a-Service) layer in a homogeneous way. To this end, the CoMe4ACloud approach proposes a generic model-based architecture for autonomic management of Cloud systems. We derive a unique, generic Autonomic Manager (AM) capable of managing any Cloud service, regardless of the layer. This AM is based on a constraint solver which aims at finding the optimal configuration for the modeled XaaS, i.e. the best balance between costs and revenues while meeting the constraints established by the SLA. We evaluate our approach in two different ways. Firstly, we analyze qualitatively the impact of the AM behaviour on the system configuration when a given series of events occurs. We show that the AM takes decisions in less than 10 seconds for several hundred nodes simulating virtual/physical machines. Secondly, we demonstrate the feasibility of the integration with real Cloud systems, such as OpenStack, while still remaining generic. Finally, we discuss our approach according to the current state-of-the-art. Introduction Nowadays, Cloud Computing is becoming a fundamental paradigm which is widely considered by companies when designing and building their systems. The number of applications that are developed for and deployed in the Cloud is constantly increasing, even in areas where software was traditionally not seen as the core element (cf. the relatively recent trend on Industrial Internet of Things and Cloud Manufacturing [START_REF] Xu | From Cloud Computing to Cloud Manufacturing[END_REF]). One of the main reasons for this popularity is the Cloud's provisioning model, which allows for the allocation of resources on an on-demand basis. Thanks to this, consumers are able to request/release compute/storage/network resources, in a quasi-instantaneous manner, in order to cope with varying demands [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF]. From the provider perspective, a negative consequence of this service-based model is that it may quickly lead the whole system to a level of dynamicity that makes it difficult to manage (e.g., to enforce Service Level Agreements (SLAs) by keeping Quality of Service (QoS) at acceptable levels). From the consumer perspective, the large amount and the variety of services available in the Cloud market [START_REF] Narasimhan | State of Cloud Applications and Platforms: The Cloud Adopters' View[END_REF] may turn the design, (re)configuration and monitoring into very complex and cumbersome tasks.
Despite several recent initiatives intending to provide a more homogeneous Cloud management support, for instance as part of the OASIS TOSCA [START_REF]Topology and Orchestration Specification for Cloud Applications (TOSCA)[END_REF] initiative or in some European-funded projects (e.g., [START_REF] Ardagna | MODAClouds: A Model-driven Approach for the Design and Execution of Applications on Multiple Clouds[END_REF] [START_REF] Rossini | Cloud Application Modelling and Execution Language (CAMEL) and the PaaSage Workflow[END_REF]), current solutions still face some significant challenges. Heterogeneity. Firstly, the heterogeneity of the Cloud makes it difficult for these approaches to be applied systematically in different possible contexts. Indeed, Cloud systems may involve many resources potentially having various and varied natures (software and/or physical). In order to achieve well-tuned Cloud services, administrators need to take into consideration specificities (e.g., runtime properties) of several managed systems (to meet SLA guarantees at runtime). Solutions that can support in a similar way resources coming from all the different Cloud layers (e.g., IaaS, PaaS, SaaS) are thus required. Automation. Cloud systems are scalable by definition, meaning that a Cloud system may be composed of large sets of components, resulting in software structures too complex to be handled manually in an efficient way. This concerns not only the base configuration and monitoring activities, but also the way Cloud systems should behave at runtime in order to guarantee certain QoS levels and expected SLA contracts. As a consequence, solutions should provide means for gathering and analyzing sensor data, making decisions and re-configuring (i.e., translating the decisions taken into actual actions on the system) when relevant. Evolution. Cloud systems are highly dynamic: clients can book and release "elastic" virtual resources at any moment in time, according to given SLA contracts. Thus, solutions need to be able to reflect and support transparently the elastic and evolutionary aspects of services. This may be non-trivial, especially for systems involving many different services. In this context, the CoMe4ACloud collaborative project relies on three main pillars: Modeling/MDE [START_REF] Schmidt | Guest editor's introduction: Model-driven engineering[END_REF], Constraint Programming [START_REF]Handbook of Constraint Programming[END_REF], and Autonomic Computing [START_REF] Kephart | The Vision of Autonomic Computing[END_REF]. Its primary goal is to provide a generic and extensible solution for the runtime management of Cloud services, independently from the Cloud layer(s) they belong to. We claim that Cloud systems, regardless of the layer in the Cloud service stack, share many common characteristics and goals, which can serve as a basis for a more homogeneous model. In fact, systems can assume the role of both consumer and provider in the Cloud service stack, and the interactions among them are governed by SLAs. In general, Anything-as-a-Service (XaaS) objectives are very similar when generalizing them to a Service-Oriented Architecture (SOA) model: (i) finding an optimal balance between costs and revenues, i.e., minimizing the costs due to other purchased services and penalties due to SLA violation, while maximizing revenues related to services provided to customers; (ii) meeting all SLA or internal constraints (e.g., maximal capacity of resources) related to the concerned service.
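As a rough sketch of objective (i), the balance an XaaS provider tries to optimize can be expressed as a simple utility over a candidate configuration. The field names and values below are illustrative only, not the project's actual model:

```python
# Illustrative utility of a candidate XaaS configuration: revenues from services
# sold to consumers, minus costs of services purchased from lower layers and
# penalties for violated SLA clauses. Names and fields are hypothetical.

def configuration_profit(provided_services, purchased_services, sla_violations):
    revenues = sum(s["price"] for s in provided_services)
    costs = sum(s["price"] for s in purchased_services)
    penalties = sum(v["penalty"] for v in sla_violations)
    return revenues - costs - penalties

profit = configuration_profit(
    provided_services=[{"price": 12.0}, {"price": 8.0}],
    purchased_services=[{"price": 5.0}],
    sla_violations=[{"penalty": 2.5}],
)
print(profit)  # 12.5
```

Objective (ii), in contrast, is not a quantity to maximize but a set of hard constraints that any acceptable configuration must satisfy, which is what motivates the constraint-programming formulation used in the remainder of the paper.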
In previous work, we relied on the MAPE-K Autonomic Computing reference architecture as a means to build a generic Autonomic Manager (AM) capable of managing Cloud systems [START_REF] Lejeune | Towards a generic autonomic model to manage cloud services[END_REF] at any layer. The associated generic model basically consists of graphs and constraints formalizing the relationships between the Cloud service providers and their consumers in a SOA fashion. From this model, we automatically generate a constraint programming model [START_REF]Handbook of Constraint Programming[END_REF], which is then used as a decision-making and planning tool within the AM. This paper builds on our previous work by providing further details on the constraint programming models and translation schemes. Above all, in this work, we show how the generic model layer seamlessly connects to the runtime layer, i.e., how monitoring data from the running system are reflected in the model and how changes in the model (performed by the AM or human administrators) are propagated to the running system. We provide some examples showing this connection, notably over an infrastructure based on OpenStack [START_REF]OpenStack Open Source Cloud Computing Software[END_REF]. We evaluate experimentally the feasibility of our approach by conducting a quantitative study over a simulated IaaS system. The objective is to analyze the AM behaviour in terms of adaptation decisions as well as to show how well it scales, considering the generic nature of the approach. Concretely, the results show the AM takes decisions in less than 10 seconds for several hundred nodes simulating virtual/physical machines, while remaining generic. The remainder of the paper is structured as follows. In Section 2, we provide the background concepts needed for a good understanding of our work. Section 3 presents an overview of our approach in terms of architecture and underlying modeling support. In Section 4, we provide a formal description of the AM and explain how we designed it based on Constraint Programming. Section 5 gives more details on the actual implementation of our approach and its connection to real Cloud systems. In Section 6, we provide and discuss related performance evaluation data. We describe in detail the available related work in Section 7 and conclude in Section 8 by opening on future work. Background The CoMe4ACloud project is mainly based on three complementary domains of Computer science. Autonomic Computing Autonomic Computing [START_REF] Kephart | The Vision of Autonomic Computing[END_REF] emerged from the necessity to autonomously manage complex systems in which manual maintenance by human administrators becomes infeasible, such as those in the context of Cloud Computing. Autonomic Computing provides a set of principles and a reference architecture to help the development of self-manageable software systems. Autonomic systems are defined as a collection of autonomic elements that communicate with each other. An autonomic element consists of a single autonomic manager (AM) that controls one or many managed elements. A managed element is a software or hardware resource similar to its counterpart found in non-autonomic systems, except that it is equipped with sensors and actuators so as to be controllable by autonomic managers.
An autonomic manager is defined as a software component that, based on high-level goals, uses the monitoring data from sensors and the internal knowledge of the system to plan and execute actions on the managed element (via actuators) in order to achieve those goals. It is also known as a MAPE-K loop, as a reference to Monitor, Analyze, Plan, Execute, Knowledge. As previously stated, the monitoring task is in charge of observing the data collected by software or hardware sensors deployed in the managed element. The analysis task is in charge of finding a desired state for the managed element by taking into consideration the monitored data, the current state of the managed element, and adaptation policies. The planning task takes into consideration the current state and the desired state resulting from the analysis task to produce a set of changes to be performed on the managed elements. Those changes are actually performed in the execution task within the desired time with the help of the actuators deployed on the managed elements. Last but not least, the knowledge in an autonomic system assembles information about the autonomic element (e.g., system representation models, information on the managed system's states, adaptation policies, and so on) and can be accessed by the four tasks previously described. Constraint Programming In autonomic systems managing the dynamics of Cloud systems, appropriate methods are needed to take goals and utility functions into consideration in the decision-making process. In this work, the goals and utility functions are defined in terms of constraint satisfaction and optimization problems. To this end we rely on Constraint Programming to model and solve these kinds of problems. Constraint Programming (CP) is a paradigm that aims to solve combinatorial problems defined by variables, each of them associated with a domain, and constraints over them [START_REF]Handbook of Constraint Programming[END_REF]. Then a general-purpose solver attempts to find a solution, that is an assignment of each variable to a value from its domain which meets the constraints it is involved in. Examples of CP solvers include free open-source libraries, such as Choco solver [START_REF] Prud'homme | Choco Documentation, TASC, LS2N CNRS UMR 6241[END_REF], Gecode [START_REF]The Gecode Team, Gecode: A generic constraint development environment[END_REF] or OR-tools [START_REF]The OR-Tools Team, Google optimization tools[END_REF], and commercial software, such as IBM CPLEX CP Optimizer [START_REF]Cplex cp optimizer[END_REF] or SICStus Prolog [START_REF] Andersson | Sicstus prolog user's manual[END_REF]. Modeling a CSP In Constraint Programming, a Constraint Satisfaction Problem (CSP) is defined as a tuple ⟨X, D, C⟩ and consists of a set of n variables X = {X_1, X_2, ..., X_n}, their associated domains D, and a collection of m constraints C. D refers to a function that maps each variable X_i ∈ X to its respective domain D(X_i). A variable X_i can be assigned integer values (i.e., D(X_i) ⊆ Z), a set of discrete values (i.e., D(X_i) ⊆ P(Z)) or real values (i.e., D(X_i) ⊂ R). Finally, C corresponds to a set of constraints {C_1, C_2, ..., C_m} that restrict the possible values variables can be assigned to. So, let (v_1, v_2, ..., v_nj) be a tuple of possible values for a subset X_j = {X_j1, X_j2, ..., X_jnj} ⊆ X. A constraint C_j is defined as a relation on the set X_j such that (v_1, v_2, ..., v_nj) ∈ C_j ⊆ D(X_j1) × D(X_j2) × ... × D(X_jnj).
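As a toy illustration of the ⟨X, D, C⟩ formulation above (not the project's actual constraint model), a small CSP can be encoded and solved with a naive backtracking search:

```python
# Tiny CSP example: variables with finite integer domains and binary constraints,
# solved by naive backtracking (real CP solvers add propagation/filtering).

variables = ["x", "y", "z"]
domains = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
constraints = [                      # each constraint: (scope, predicate)
    (("x", "y"), lambda x, y: x != y),
    (("y", "z"), lambda y, z: y < z),
    (("x", "z"), lambda x, z: x + z == 4),
]

def consistent(assignment):
    for scope, pred in constraints:
        if all(v in assignment for v in scope):
            if not pred(*(assignment[v] for v in scope)):
                return False
    return True

def backtrack(assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if consistent(candidate):
            result = backtrack(candidate)
            if result is not None:
                return result
    return None

print(backtrack())  # {'x': 1, 'y': 2, 'z': 3}
```

A dedicated solver such as Choco, cited above, replaces the naive enumeration with propagation over each constraint's filtering algorithm, which is what the next paragraphs describe.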
Solving a CSP with CP In CP, the user provides a CSP and a CP solver takes care of solving it. Solving a CSP ⟨X, D, C⟩ is about finding a tuple of values (v_1, v_2, ..., v_n), one for each variable X_i ∈ X, such that ∀i ∈ [[1..n]], v_i ∈ D(X_i) and all the constraints C_j ∈ C are met. In the case of a Constraint Optimization Problem (COP), that is, when an optimization criterion has to be maximized or minimized, a solution is the one that maximizes or minimizes a given objective function f : D(X) → Z. A constraint model can be built in a modular and composable way. Each constraint expresses a specific sub-problem, from arithmetical expressions to more complex relations such as AllDifferent [START_REF] Régin | Generalized arc consistency for global cardinality constraint[END_REF] or Regular [START_REF] Pesant | A regular language membership constraint for finite sequences of variables[END_REF]. A constraint not only defines a semantics (AllDifferent: variables should take distinct values in a solution; Regular: an assignment should respect a pattern given by an automaton) but also embeds a filtering algorithm which detects values that cannot be extended to a solution. Modeling a CSP hence consists in combining constraints, which offers both flexibility (the model can be easily adapted to needs) and expressiveness (the model is almost human-readable). Solving a CSP consists in an alternation of a propagation algorithm (each constraint removes forbidden values, if any) and a Depth-First Search algorithm with backtracking to explore the search space. Overall, the advantages of adopting CP as a decision-making modeling and solving tool are manifold: no particular knowledge is required to describe the problem, adding or removing variables/constraints is easy (and thus useful when code is generated), and the general-purpose solver can be tweaked easily. Model-driven Engineering Model Driven Engineering (MDE) [START_REF] Schmidt | Guest editor's introduction: Model-driven engineering[END_REF][START_REF] Bézivin | Model Driven Engineering: An Emerging Technical Space[END_REF], more generally also referred to as Modeling, is a software engineering paradigm relying on the intensive creation, manipulation and (re)use of various and varied types of models. In an MDE/Modeling approach, these models are actually the first-class artifacts within related design, development, maintenance and/or evolution processes concerning software as well as their environments and data. The main underlying idea is to reason as much as possible at a higher level of abstraction than the one usually considered in more traditional, often source code-based, approaches. Thus the focus is strongly put on modeling, or allowing the modeling of, the knowledge around the targeted domain or range of problems. One of the principal objectives is to capitalize on this knowledge/expertise in order to better automate and make more efficient the targeted processes. For several years now, there has been a rich international ecosystem of approaches, practices, solutions and concrete use cases in/for Modeling [START_REF] Brambilla | Model-Driven Software Engineering in Practice[END_REF]. Among the most frequent applications in the industry, we can mention the (semi-)automated development of software (notably via code generation techniques), the support for system and language interoperability (e.g.
via metamodeling and model transformation techniques) or the reverse engineering of existing software solutions (via model discovery and understanding techniques). Complementarily, another usage that has increased considerably in the past years, both in the academic and industrial world, is the support for developing Domain-Specific Languages (DSLs) [START_REF] Fowler | Domain-Specific Languages[END_REF]. Finally, the growing deployment of so-called Cyber-Physical Systems (CPSs), which are becoming more and more complex in different industry sectors (thanks to the advent of Cloud and IoT for example), has been creating new requirements in terms of Modeling [START_REF] Derler | Modeling Cyber-Physical Systems[END_REF].

Approach Overview

This section provides an overview of the CoMe4ACloud approach. First, we describe the global architecture of our approach, before presenting the generic metamodels that can be used by the users (Cloud Experts and Administrators) to model Cloud systems. Finally, to help Cloud users deal with cross-layer concerns and SLAs, we provide a service-oriented modeling extension for XaaS layers.

Architecture

The proposed architecture is depicted in Figure 1. In order to better grasp the fundamental concepts of our approach, it is important to reason about the architecture in terms of life-cycle. The life-cycle associated with this architecture involves both kinds of models. A topology model t has to be defined manually by a Cloud expert at design time (step 1). The objective is to specify a particular topology of system to be modeled and then handled at runtime, e.g., a given type of IaaS (with Virtual/Physical Machine nodes) or SaaS (with webserver and database components). The topology model is used as the input of a specific code generator that parameterizes a generic constraint program that is integrated into the Analyzer (step 2) of the generic AM. The goal of the constraint program is to automatically compute and propose a new suitable system configuration model from an original one. Hence, in the beginning, the Cloud Administrator must provide an initial configuration model c0 (step 3), which, along with the forthcoming configurations, is stored in the AM's Knowledge base. The state of the system at a given point in time is gathered by the Monitor (step 4) and represented as a (potentially new) configuration model c0′. It is important to notice that this new configuration model c0′ reflects the running Cloud system's current state (e.g., a host that went down or a load variation), but it does not necessarily respect the constraints defined by the Cloud Expert/Administrator: e.g., if a PM crashed, all the VMs hosted by it should be reassigned, otherwise the system will be in an inconsistent state. To that effect, the Analyzer is launched (step 5) whenever a new configuration model exists, whether it results from modifications that are manually performed by the Cloud Administrator or automatically performed by the Monitor. It takes into account the current (new) configuration model c0′ and the related set of constraints encoded in the CP itself. As a result, a new configuration model c1 respecting those constraints is produced. The Planner produces a set of ordered actions (step 6) that have to be applied in order to go from the source (c0′) to the target (c1) configuration model. More details on the decision-making process, including the constraint program, are given in Section 4. Finally, the Executor (step 7) relies on actuators deployed on the real Cloud System to apply those actions. This whole process (from steps 4 to 7) can be re-executed as many times as required, according to the runtime conditions and the constraints imposed by the Cloud Expert/Administrator.
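As a purely illustrative sketch (not the actual CoMe4ACloud implementation), steps 4 to 7 of this life-cycle could be organized around interfaces such as the following; all type and method names here are hypothetical.

import java.util.List;

class Configuration { }
class Action { }
class NoSolutionException extends Exception { }

interface Monitor  { Configuration observe(Configuration lastKnown); }                          // step 4
interface Analyzer { Configuration decide(Configuration current) throws NoSolutionException; }  // step 5
interface Planner  { List<Action> diff(Configuration source, Configuration target); }           // step 6
interface Executor { void apply(List<Action> plan); }                                           // step 7

class AutonomicManager {
    private final Monitor monitor;
    private final Analyzer analyzer;
    private final Planner planner;
    private final Executor executor;
    private Configuration knowledge; // last configuration model kept in the Knowledge base

    AutonomicManager(Monitor m, Analyzer a, Planner p, Executor e, Configuration initial) {
        this.monitor = m; this.analyzer = a; this.planner = p; this.executor = e;
        this.knowledge = initial; // initial configuration provided by the Administrator (step 3)
    }

    void loopOnce() throws NoSolutionException {
        Configuration observed = monitor.observe(knowledge);   // current state of the running system
        Configuration target = analyzer.decide(observed);      // constraint-based decision
        List<Action> plan = planner.diff(observed, target);    // ordered reconfiguration actions
        executor.apply(plan);                                   // applied through the actuators
        knowledge = target;                                     // Knowledge base update
    }
}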
It is important to notice that configuration models are meant to be representations of actual Cloud systems at given points in time. This can be seen with configuration model c (stored within the Knowledge base) and Cloud system s in Figure 1, for instance. Thus, the content of these models has to always reflect the current state of the corresponding running Cloud systems. More details on how we ensure the required synchronization between the model(s) and the actual system are given in Section 5.2.

Generic Topology and Configuration Modeling

One of the key features of the CoMe4ACloud approach is the high-level language and tooling support, whose objective is to facilitate the description of autonomic Cloud systems with adaptation capabilities. For that purpose, we strongly rely on an MDE approach that is based on two generic metamodels. As shown in Figure 2, the Topology metamodel covers 1) the general description of the structure of a given topology and 2) the constraint expressions that can be attached to the specified types of nodes and relationships. Starting with the structural aspects, each Cloud system's Topology is named and composed of a set of NodeTypes and corresponding RelationshipTypes that specify how to interconnect them. It can also have some global constraints attached to it. Each NodeType has a name, a set of AttributeTypes and can inherit from another NodeType. It can also have one or several specific Constraints attached to it. Cloud experts can declare the impact (or "cost") of enabling/disabling nodes at runtime (e.g., a given type of Physical Machine/PM node takes a certain time to be switched on/off). Each AttributeType has a name and a value type. It allows indicating the impact of updating related attribute values at runtime. A ConstantAttributeType stores a constant value at runtime, whereas a CalculatedAttributeType allows setting an Expression automatically computing its value at runtime. Any given RelationshipType has a name and defines a source and target NodeType. It also allows specifying the impact of linking/unlinking corresponding nodes via relationships at runtime (e.g., migrating a given type of Virtual Machine/VM node from a type of PM node to another one can take several minutes). One or several specific Constraints can be attached to a RelationshipType. A Constraint relates two Expressions according to a predefined set of comparison operators. An Expression can be a single static IntegerValueExpression or an AttributeExpression pointing to an AttributeType. It can be a NbConnectionExpression representing the number of NodeTypes connected to a given NodeType or RelationshipType (at runtime) as predecessor/successor or source/target respectively. It can also be an AggregationExpression aggregating the values of an AttributeType from the predecessors/successors of a given NodeType, according to a predefined set of aggregation operators. It can be a BinaryExpression between two (sub)Expressions, according to a predefined set of algebraic operators. Finally, it can be a CustomExpression using any available constraint/query language (e.g., OCL, XPath, etc.), with the full expression simply stored as a string.
Tools exploiting the corresponding models are then in charge of processing such expressions. As shown in Figure 3, the Configuration part of the language is lighter and directly refers to the Topology one. An actually running Cloud system Configuration is composed of a set of Nodes and Relationships between them. Each Node has an identifier and is of a given NodeType, as specified by the corresponding topology. It also comes with a boolean value indicating whether it is actually activated or not in the configuration. This activation can be reflected differently in the real system according to the concerned type of node (e.g., a given Virtual Machine (VM) is already launched or not). A node contains a set of Attributes providing name/value pairs, still following the specifications of the related topology. Each Relationship also has an identifier and is of a given RelationshipType, as specified again by the corresponding topology. It simply interconnects two allowed Nodes together and indicates if the relationship can possibly be changed (i.e., removed) over time, i.e., if it is constant or not.

Service-oriented Topology Model for XaaS layers

The Topology and Configuration metamodels presented in the previous section provide a generic language to model XaaS systems. Thanks to that, we can model any kind of XaaS system that can be expressed by a Directed Acyclic Graph (DAG) with constraints having to hold at runtime. However, this level of abstraction can also be seen as an obstacle for some Cloud Experts and Administrators to model elements really specific to Cloud Computing. Thus, in addition to the generic modeling language presented before, we also provide in CoMe4ACloud an initial set of reusable node types which are related to the core Cloud concepts. They constitute a base Service-oriented topology model which basically represents XaaS systems in terms of their consumers (i.e., the clients that consume the offered services), their providers (i.e., the required resources, also offered as services) and the Service Level Agreements (SLAs) formalizing those relationships. Figure 4 shows an illustrative graphical representation of an example configuration model using the pre-defined node types.

Root node types. We introduce two types of root nodes: RootProvider and RootClient. In any configuration model, only one node of each root node type can exist. These two nodes do not represent a real component of the system but can rather be seen as theoretical nodes. A RootProvider node (resp. RootClient node) has no target node (resp. source node) and is considered as the final target (resp. initial source). In other words, a RootProvider node (resp. RootClient node) represents the set of all the providers (resp. the consumers) of the managed system. This allows grouping all features of both provider and consumer layers, especially the costs due to operational expenses of services bought from all the providers (represented by attribute SysExp in a RootProvider node) and the revenues thanks to services sold to all the consumers (represented by attribute SysRev in a RootClient node).

SLA node types. We also introduce two types of SLA nodes: SLAClient and SLAProvider. In a configuration model, SLA nodes define the prices of each service level that can be provided and the amount of penalties for violations.
Thus, both types of SLA nodes provide different attributes representing the different prices, penalties and then the current cost or revenue (total cost) induced by the current set of bought services (cf. the Service node types below) associated with it. An SLAClient node (resp. SLAProvider node) has a unique source (resp. target) which is the RootClient node (resp. RootProvider node) in the configuration. Consequently, the attribute SysRev (resp. SysExp) is equal to the sum of the total cost attributes of its source nodes (resp. target nodes).

Service node types. An SLA defines several Service Level Objectives (SLOs) for each provided service [START_REF] Kouki | Csla: a language for improving cloud sla management[END_REF]. Thus, we have to provide base Service node types: each service provided to a client (resp. received from a provider) is represented by a node of type ServiceClient (resp. ServiceProvider). The different SLOs are attributes of the corresponding Service nodes (e.g., configuration requirements, availability, response time, etc.). Since each Service node is linked with a unique SLA node in a configuration model, we define an attribute that designates the SLA node relating to a given service node. For a ServiceClient node (resp. ServiceProvider node), this attribute is named sla_client (resp. sla_prov) and its value is a node ID, which means that the node has a unique source (resp. target) corresponding to the SLA.

Internal Component node type. InternalComponent represents any kind of node of the XaaS layer that we want to manage with the generic AM (contrary to the previous node types which are theoretical nodes and provided as core Cloud concepts). Thus, it is a kind of common super-type of node to be extended by users of the CoMe4ACloud approach within their own topologies (e.g., cf. Listing 1 from Section 5.1). A node of this type may be used by another InternalComponent node or by a ServiceClient node. Conversely, it may require another InternalComponent node or a ServiceProvider node to work.

Decision Making Model

In this section, we describe how we formally modeled the decision making part (i.e., the Analyzer and Planner) of our generic Autonomic Manager (AM) by relying on Constraint Programming (CP).

Knowledge (Configuration Models)

As previously mentioned, the Knowledge contains models of the current and past configurations of the Cloud system (i.e., the managed element). We define formal notations for a configuration at a given instant according to the XaaS model described in Figure 3.

The notion of time and configuration consistency. We first define T, the set of instants t representing the execution time of the system, where t_0 is the instant of the first configuration (e.g., the very first configuration model initialized by the Cloud Administrator, cf. Figure 1). The XaaS configuration model at instant t is denoted by c_t, organized in a Directed Acyclic Graph (DAG), where vertices correspond to nodes and edges to relationships of the configuration metamodel (cf. Figure 3). CSTR_{c_t} denotes the set of constraints of configuration c_t. Notice that these constraints refer to those defined in the topology model (cf. Figure 2). The property satisfy(cstr, t) is verified at t if and only if the constraint cstr ∈ CSTR_{c_t} is met at instant t. The system is satisfied (satisfy(c_t)) at instant t if and only if ∀ cstr ∈ CSTR_{c_t}, satisfy(cstr, t).
Finally, the function H(c_t) gives the score of the configuration c at instant t: the higher the value, the better the configuration (e.g., in terms of balance between costs and revenues).

Nodes and Attributes. Let n_t be a node at instant t. As defined in Section 3.2, it is characterized by:
- a set of constraints CSTR_{n_t} specific to its type (cf. Figure 2);
- a set of attributes (atts_{n_t}) defining the node's internal state;
- a node identifier (id_n ∈ ID_t), where ID_t is the set of existing node identifiers at t and id_n is unique ∀ t ∈ T;
- a type (type_n ∈ TYPES);
- a set of predecessor (preds_{n_t} ∈ P(ID_t)) and successor (succs_{n_t} ∈ P(ID_t)) nodes in the DAG. Note that ∀ n_t^a, n_t^b ∈ c_t with id_{n^a} ≠ id_{n^b}, id_{n^b} ∈ succs_{n_t^a} ⇔ id_{n^a} ∈ preds_{n_t^b}. It is worth noting that the notion of predecessors and successors here is implicit in the notion of Relationship of the configuration metamodel.

An attribute att_t ∈ atts_{n_t} at instant t is defined by:
- a name name_att, which is constant ∀ t ∈ T;
- a value denoted val_{att_t} ∈ R ∪ ID_t (i.e., an attribute value is either a real value or a node identifier).

Configuration Evolution. The Knowledge within the AM evolves as configuration models are modified over time. In order to model the transition between configuration models, the time T is discretized by the application of a transition function f on c_t such that f(c_t) = c_{t+1}. A configuration model transition can be triggered in two ways, by:
- an internal event (e.g., the Cloud Administrator initializes (adds) a software component/node, a PM crashes) or an external event (e.g., a new client arrival), which in both cases alters the system configuration and thus results in a new configuration model (cf. function event in Figure 5). This function typically models the Monitor component of the AM;
- the AM that performs the function control. This function ensures that satisfy(c_{t+1}) is verified, while maximizing H(c_{t+1}) (footnote 2) and minimizing the transition cost to change the system state between c_t and c_{t+1}. This function characterizes the execution of the Analyzer, Planner and Executor components of the AM.

Figure 5 illustrates a transition graph among several configurations. It shows that an event function potentially moves the current configuration away from an optimal configuration, and that a control function tries to get closer to a new optimal configuration while respecting all the system constraints.

Analyzer (Constraint Model)

In the AM, the Analyzer component is achieved by means of a constraint solver. A Constraint Programming model [START_REF]Handbook of Constraint Programming[END_REF] needs three elements to find a solution: a static set of problem variables, a domain function, which associates to each variable its domain, and a set of constraints. In our model, the graph corresponding to the configuration model can be considered as a composite variable defined in a domain. For the constraint solver, the decision to add a new node in the configuration is impossible, as it would imply adding new variables to the constraint model during the evaluation. We hence have to define a set N_t corresponding to an upper bound of the node set c_t, i.e., c_t ⊆ N_t. More precisely, N_t is the set of all existing nodes at instant t. Every node n_t ∉ c_t is considered as deactivated and does not take part in the running system at instant t. Each existing node consequently has a boolean attribute called "activated" (cf. the Node attribute activated in Figure 3). Thanks to this attribute, the constraint solver can decide whether a node has to be enabled (true value) or disabled (false value). The property enable(n_t) holds if and only if n is activated at t. This property has an incidence over the two neighbor sets preds_{n_t} and succs_{n_t}. Indeed, when enable(n_t) is false, n_t has no neighbor because n does not depend on any other node and no node may depend on n.
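To give an idea of how this activation rule (and the neighbor constraints formalized in the next paragraphs) could be encoded for a solver such as Choco, the following fragment is a simplified, hypothetical sketch; the real constraint program generated by CoMe4ACloud also covers attributes, costs and the objective function.

import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.BoolVar;
import org.chocosolver.solver.variables.IntVar;

public class ActivationSketch {
    public static void main(String[] args) {
        int nbNodes = 8;
        Model model = new Model("xaas-activation");
        // One boolean per candidate node of N_t: enabled or not in the next configuration
        BoolVar[] activated = model.boolVarArray("activated", nbNodes);
        // Number of predecessor/successor links chosen for each node
        IntVar[] nbPreds = model.intVarArray("nbPreds", nbNodes, 0, nbNodes);
        IntVar[] nbSuccs = model.intVarArray("nbSuccs", nbNodes, 0, nbNodes);
        for (int i = 0; i < nbNodes; i++) {
            // A deactivated node has no neighbor at all
            model.ifThen(activated[i].not(),
                    model.and(model.arithm(nbPreds[i], "=", 0),
                              model.arithm(nbSuccs[i], "=", 0)));
            // An activated (non-root) node needs at least one predecessor and one successor
            model.ifThen(activated[i],
                    model.and(model.arithm(nbPreds[i], ">", 0),
                              model.arithm(nbSuccs[i], ">", 0)));
        }
        model.getSolver().solve();
    }
}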
The set N_t can only be changed by the Administrator, or by the Monitor when it detects, for instance, a node failure or a new node in the running system (managed element), meaning that a node will be removed from or added to N_{t+1}. Figure 6 depicts an example of two configuration transitions. At instant t, there is a node set N_t = {n_1, n_2, ..., n_8} and c_t = {n_1, n_2, n_5, n_6, n_7}. Each node color represents a given type defined in the topology (cf. Figure 3). In the next configuration, at t+1, the Monitor component detects that component n_6 of a green type has failed, leading the managed system to an unsatisfiable configuration. At t+2, the control function detects the need to activate a deactivated node of the same type in order to replace n_6 by n_8. This scenario may match the configuration transitions from conf 1 to conf 3 in Figure 5.

Configuration Constraints. The Analyzer should not only find a configuration that satisfies the constraints. It should also consider the objective function H(), which is part of the configuration constraints. The graph representing the managed element (the running Cloud system) has to meet the following constraints:
1. any deactivated node n_t at t ∈ T has no neighbor: n_t does not depend on other nodes and there is no node that depends on n_t. Formally, ¬enable(n_t) ⇒ succs_{n_t} = ∅ ∧ preds_{n_t} = ∅;
2. except for root node types (cf. Section 3.3), any activated node has at least one predecessor and one successor. Formally, enable(n_t) ⇒ |succs_{n_t}| > 0 ∧ |preds_{n_t}| > 0;
3. if a node n_{t_i} is enabled at instant t_i, then all the constraints associated with n (link and attribute constraints) will be met in a finite time. Formally, enable(n_{t_i}) ⇒ ∃ t_j ≥ t_i such that ∀ cstr ∈ CSTR_{n_{t_i}}, cstr ∈ CSTR_{n_{t_j}} ∧ enable(n_{t_j}) ∧ satisfy(cstr, t_j);
4. the function H() is equal to the balance between the revenues and the expenses of the system (cf. Figure 4). Formally, H(c_t) = att^t_rev − att^t_exp, where att^t_rev ∈ atts_{n_t^RC} ∧ name_{att_rev} = SysRev, and att^t_exp ∈ atts_{n_t^RP} ∧ name_{att_exp} = SysExp (n^RC and n^RP being the RootClient and RootProvider nodes).

Algorithm 1: Global algorithm of the Analyzer

Algorithm 1 is the global algorithm of the Analyzer, which mimics Large Neighborhood Search [START_REF] Shaw | Using constraint programming and local search methods to solve vehicle routing problems[END_REF]. This strategy consists in a two-step loop (lines 5 to 13) which is executed after the constraint model is instantiated in the solver (line 3) from the current configuration (i.e., variables and constraints are declared). First, in line 7, some variables of the solver model are selected to be fixed to their value in the previous satisfiable configuration (in our case, the b) parameter). This reduces the number of values that some variables X_s can be assigned to: D_0(X_j) ⊂ D(X_j), ∀ X_j ∈ X_s. The variables not selected to be fixed represent a variable area (VA) in the DAG. It corresponds to the set of nodes in the graph for which the solver is able to change the successor and predecessor links. Such a restriction makes the search space to be explored by the solver smaller, which tends to reduce the solving time. The way variables are selected is managed by the initializer (line 4). Note that when the initializer is built, the initial variable area VA_i, where i = 0, contains all the deactivated nodes and any nodes whose state has changed since the last optimal configuration (e.g., attribute value modification, disappearance/appearance of a neighbour). Then, the solver tries to find a solution for the partially restricted configuration (line 9). If a solution is found, the loop breaks and the new configuration is returned (line 11).
Otherwise, the variable area is extended (line 7). A call for a new initialization at iteration i means that the solver has proved that there is no solution at iteration i−1. Consequently, a new initialization leads to relaxing the previous VA_{i−1}: D_{i−1}(X_j) ⊆ D_i(X_j) ⊆ D(X_j), ∀ X_j ∈ X_s. At iteration i, VA_i is equal to VA_{i−1} plus the sets of successors and predecessors of all nodes in VA_{i−1}. Finally, if no solution is found and the initializer is not able to relax domains anymore, i.e., D_i(X_j) = D(X_j), ∀ X_j ∈ X_s, the Analyzer throws an error. This mechanism brings three advantages: (1) it reduces the solving time because the domain cardinality is restrained, (2) it limits the set of actions in the plan, thus achieving one of our objectives, and (3) it tends to produce configurations that are close to the previous one in terms of activated nodes and links. Note that without the Neighborhood Search Strategy, the initial variable area VA_0 is equal to the whole graph, thus leading to a single iteration.

Planner (Differencing and Match)

The Planner relies on differencing and match algorithms for object-oriented models [START_REF] Xing | Umldiff: An algorithm for object-oriented design differencing[END_REF] to compute the differences between the current configuration and the new configuration produced by the Analyzer. From a generic point of view, there exist five types of actions: enable and disable node; link and unlink two nodes; and update attribute value.

Implementation Details

In this section, we provide some implementation details regarding the modeling languages and tooling support used by the users to specify Cloud systems, as well as the mechanisms of synchronization between the models within the Autonomic Manager and the actual running Cloud systems.

A YAML-like Concrete Syntax

We propose a notation to allow Cloud experts to quickly specify their topologies and initialize related configurations. It also permits sharing such models in a simple syntax that can be directly read and understood by Cloud administrators. We first built an XML dialect and prototyped an initial version, but we observed that it was too verbose and complex, especially for newcomers. We also thought about providing a graphical syntax via simple diagrams. While this seems appropriate for visualizing configurations, it makes topology creation/edition more time-consuming (writing is usually faster than diagramming for Cloud technical experts). Finally, we designed a lightweight textual syntax covering both topology and configuration specifications. To provide a syntax that looks familiar to Cloud users, we considered YAML and its TOSCA version [START_REF]YAML (TOSCA Simple Profile[END_REF], featuring most of the structural constructs we needed (for topologies and configurations). We decided to start from this syntax and complement it with the elements specific to our language, notably concerning expressions and constraints, which are not supported in YAML (cf. Section 3.2). We also ignored some constructs from TOSCA YAML that are not required in our language (e.g., related to interfaces, requirements or capabilities). Moreover, we can still rely on other existing notations. For instance, by translating a configuration definition from our language to TOSCA, users can benefit from the GUI offered by external tooling such as Eclipse Winery [START_REF]Winery project[END_REF]. As shown in Listing 1, for each node type the user gives its name and the node type it inherits from (if any) (cf. Section 3.3).
Then she describes its different attribute types via the properties field, following the TOSCA YAML terminology. Similarly, for each relationship type the expert gives its name and then indicates its source and target node types. As explained before (and not supported in TOSCA YAML), expressions can be used to indicate how to compute the initial value of an attribute type. For instance, the variable ClusterCurConsumption of the Cluster node type is initialized at configuration level by making a product between the values of other variables. Expressions can also be used to attach constraints to a given node/relationship type. For example, in the node type Power, the value of the variable PowerCurConsumption has to be less than or equal to the value of the constant PowerCapacity (at configuration level). As shown in Listing 2, for each configuration the user provides a unique identifier and indicates which topology it relies on. Then, for each actual node/relationship, its particular type is explicitly specified by directly referring to the corresponding node/relationship type from a defined topology. Each node describes the values of its different attributes (calculated or set manually), while each relationship describes its source and target nodes.

Synchronization with the running system

We follow the principles of Models@Runtime [START_REF] Blair | Models@ run.time[END_REF], by defining a bidirectional causal link between the running system and the model. The idea is to decouple the specificities of the causal link, w.r.t. the specific running subsystems, while keeping the Autonomic Manager generic, as sketched in Figure 7. It is important to recall that the configuration model is a representation of the running system and it can be modified in three different situations: (i) when the Cloud administrator manually changes the model; (ii) when it is time to update the current configuration with data coming from the running system, which is done by the Monitor component; and (iii) when the Analyzer decides on a better configuration (e.g., with a higher balance function), in which case the Executor performs the necessary actions on the running Cloud systems. Therefore, the causal link with the running system is defined by two different APIs, which allow reflecting both the changes performed by the generic AM to the actual Cloud systems, and the changes that occur on the system at runtime to the generic AM. To that effect, we propose the implementation of an adaptor for each target running system (managed element). From the Executor component perspective, the objective is to translate generic actions, i.e., enable/disable, link/unlink nodes, update attribute values, into concrete operations (e.g., deploy a VM at a given PM) to be invoked over actuators of the different running subsystems (e.g., Openstack, AWS, Moodle, etc.). From the Monitor point of view, the adaptors' role is to gather information from sensors deployed at the running subsystems (e.g., a PM failure, a workload variation) and translate it into the generic operations to be performed on the configuration model by the Monitor, i.e., add/remove/enable/disable node, link/unlink nodes and update attribute value. It should be noticed that the difference between the two APIs is the possibility to add and remove nodes in the configuration model.
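To make the distinction between the two APIs tangible, the following hypothetical Java interfaces sketch what each side of the causal link could expose; the names are illustrative and do not correspond to the actual CoMe4ACloud API.

// Executor-side operations implemented by each adaptor: only the generic
// actions produced by the AM, with no node creation or removal.
interface ExecutorActions {
    void enableNode(String nodeId);
    void disableNode(String nodeId);
    void link(String sourceNodeId, String targetNodeId);
    void unlink(String sourceNodeId, String targetNodeId);
    void updateAttribute(String nodeId, String attributeName, Object value);
}

// Monitor-side operations used to reflect the running system into the
// configuration model: the same generic operations plus add/remove node.
interface MonitorOperations extends ExecutorActions {
    void addNode(String nodeId, String nodeType);
    void removeNode(String nodeId);
}

Integrating a new kind of running system would then amount to writing a new adaptor against such contracts, while the generic AM itself remains unchanged.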
In fact, the resulting configuration from the Analyzer does not imply the addition or removal of any node, since the constraint solver may not add/remove variables during the decision-making process, as already explained in Section 4.2. The Cloud Administrator and the Monitor, on the contrary, may modify the configuration model (that is given as input to the constraint solver) by removing and adding nodes as a reflection of either the running Cloud system (e.g., a PM that crashed) or new business requirements or agreements (e.g., a client that arrives or leaves). Notice also that both the adaptors and the Monitor component are the entry-points of the running subsystems and the generic AM, respectively. Thus, the adaptors and the Monitor are the entities that actually have to implement the APIs. We rely on a number of libraries (e.g., the AWS Java SDK 3, Openstack4j 4) that ease the implementation of adaptors. For example, Listing 3 shows an excerpt of the implementation of the enable action for a VM node in Openstack4j. For the full implementation and for more examples, please see https://gitlab.inria.fr/come4acloud/xaas.

    ...
    .availabilityZone(targetCluster + ":" + targetPM)
    .build();
Server vm = os.compute().servers().boot(vm);

3 https://aws.amazon.com/fr/sdk-for-java/

Performance Evaluation

In this section, we present an experimental study of our generic AM implementation that has been applied to an IaaS system. The main objective is to analyze qualitatively the impact of the AM behaviour on the system configuration when a given series of events occurs, and notably the time required by the constraint solver to take decisions. Note that the presented simulation focuses on the performance of the controller. Additionally, we also experimented with the same scenario on a smaller system but in a real OpenStack IaaS infrastructure 5. In a complementary manner, a more detailed study of the proposed model-based architecture (and notably its core generic XaaS modeling language) can be found in [START_REF] Bruneliere | A Model-based Architecture for Autonomic and Heterogeneous Cloud Systems[END_REF] where we show the implementation of another use case, this time for a SaaS application 6.

The IaaS system

We relied on the same IaaS system whose models are presented in Listings 1 and 2 to evaluate our approach. In the following, we provide more details. For the sake of simplicity, we consider that the IaaS provides a unique service to its customers: compute resources in the form of VMs. Hence, there exists a node type VMService extending the ServiceClient type (cf. Section 3.3). A customer can specify the required number of CPUs and RAM as attributes of a VMService node. The prices for a unit of CPU/RAM are defined inside the SLA component, that is, inside the SLAVM node type, which extends the SLAClient type of the service-oriented topology model. Internally, the system has several InternalComponents: VMs (represented by the node type VM) are hosted on PMs (represented by the node type PM), which are themselves grouped into Clusters (represented by the node type Cluster). Each enabled VM has exactly one successor node of type PM and exactly one predecessor of type VMService. This is represented by a relationship type stating that the predecessors of a PM are the VMs currently hosted by it.
The main constraint of a VM node is to have the number of CPUs/RAM equal to the attributes specified in its predecessor VMService node. The main constraint for a PM is to keep the sum of the resources allocated to VMs less than or equal to its capacity. A PM has a mandatory link to its Cluster, which is also represented by a relationship in the configuration model. A Cluster needs electrical power in order to operate and has an attribute representing the current power consumption of all hosted PMs. The PowerService type extends the ServiceProvider type of the service-oriented topology model, and it corresponds to an electricity meter. A PowerService node has an attribute that represents the maximum capacity in terms of kilowatt-hours, which bounds the sum of the current consumption of all Cluster nodes linked to this node (PowerService). Finally, the SLAPower type extends the SLAProvider type and represents a signed SLA with an energy provider by defining the price of a kilowatt-hour.

Experimental Testbed

We implemented the Analyzer component of the AM by using the Java-based constraint solver Choco [START_REF] Prud'homme | Choco Documentation, TASC, LS2N CNRS UMR 6241[END_REF]. For scalability purposes, the experimentation simulates the interaction with the real world, i.e., the role of the Monitor and Executor components depicted in Figure 1, although we have experimented the same scenario with a smaller system (fewer PMs and VMs) in a real OpenStack infrastructure 7. The simulation has been conducted on a single processor machine with an Intel Core i5-6200U CPU (2.30GHz) and 6 GB of RAM memory running Linux 4.4. The system is modeled following the topology defined in Listing 1, i.e., compute services are offered to clients by means of Virtual Machine (VM) instances. VMs are hosted by PMs, which in turn are grouped into Clusters of machines. As Clusters require electricity in order to operate, they can be linked to different power providers, if necessary (cf. Section 6.1). The snapshot of the running IaaS configuration model (the initial one as well as the ones associated with each instant t ∈ T) is described and stored with our configuration DSL (cf. Listing 2). At each simulated event, the file is modified to apply the consequences of the event over the configuration. After each modification due to an event, we activated the AM to propagate the modification on the whole system and to ensure that the configuration meets all the imposed constraints. The simulated IaaS system is composed of 3 clusters of homogeneous PMs. Each PM has 32 processors and 64 GB of RAM memory. The system has two power providers: a classical power provider, that is, a brown energy provider, and a green energy provider. The current consumption of a turned-on PM is the sum of its idle power consumption (10 power units when no guest VM is hosted) and an additional consumption due to allocated resources (1 power unit per CPU and per RAM unit allocated). In order to avoid degrading the analysis performance by considering too many physical resources compared to the number of consumed virtual resources, we limit the number of unused PM nodes in the configuration model while ensuring a sufficient amount of available physical resources to host a potential new VM. In the experiments, we considered five types of events:

AddVMService (a): a new customer arrival which requests x VMService nodes (x ranges from 1 to 5).
The required configuration of this request (i.e., the number of CPUs and RAM units and the number of VMService nodes) is chosen independently, with a random uniform law. The number of required CPUs ranges from 1 to 8, and the number of required RAM units ranges from 1 to 16 GB. The direct consequences of such an event are the addition of one SLAVM node, x VMService nodes and x VM nodes in the configuration model file. The aim of the AM after this event is to enable the x new VMs and to find the best PM(s) to host them.

leavingClient (l): a customer decides to definitively cancel the SLA. Consequently, the corresponding SLAVM, VMService and VM nodes are removed from the configuration. After such an event, the aim of the AM is potentially to shut down the concerned PM or to migrate other VMs to this PM in order to minimize the revenue loss.

GreenAvailable (ga): the Green Power Provider significantly decreases the price of the power unit to a value below the price of the Brown Energy Provider. The consequence of that event is the modification of the price attribute of the green SLAPower node. The expected behaviour of the AM is to enable the green SLAPower node in order to consume a cheaper service.

GreenUnAvailable (gu): contrary to the GreenAvailable event, the Green Power Provider resets its price to the initial value. Consequently, the Brown Energy Provider becomes again the most interesting provider. The expected behaviour of the AM is to disable the green SLAPower node to the benefit of the classical power provider.

CrashOnePM (c): a PM crashes. The consequence on the configuration is the suppression of the corresponding PM node in the configuration model. The goal of the AM is to potentially turn on a new PM and to migrate the VMs which were hosted by the crashed PM.

In our experiments, we consider the following scenario over both analysis strategies, without neighborhood and with neighborhood, depicted in Section 4.2.2. Initially, in the configuration at t_0, no VM is requested and the system is turned off. At the beginning, the unit price of the green power provider is twice as high as the price of the other provider (8 against 4). The unit selling price is 50 for a CPU and 10 for a RAM unit. Our scenario consists in repeating the following sequence of events: 5 AddVMService, 1 leavingClient, 1 GreenAvailable, 1 CrashOnePM, 5 AddVMService, 1 leavingClient, 1 GreenUnAvailable and 1 CrashOnePM. This allows showing the behaviour of the AM for each event with different system sizes. We show the impact of this scenario on the following metrics:
- the amount of power consumption for each provider (Figures 8a and 8c);
- the amount of VMService nodes and the size of the system in terms of number of nodes (Figure 8f);
- the configuration balance (function H()) (Figure 8e);
- the latency of the Choco solver to take a decision (Figure 8g);
- the number of PMs being turned on (Figure 8d);
- the size of the generated plan, i.e., the number of required actions to produce the next satisfiable configuration (Figure 8b).

The x-axis in Figure 8 represents the logical time of the experiment in terms of configuration transitions. Each colored area in this figure includes two configuration transitions: the event immediately followed by the control action. The color differs according to the type of the fired event. For the sake of readability, the x-axis does not begin at the initiation instant but when the number of nodes reaches 573, and events are tagged with the initials of the event's name.
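For illustration only, a driver replaying this scenario could look like the sketch below; it is not the actual experimental harness, and the event encoding, class and method names are hypothetical.

import java.util.List;
import java.util.Random;

public class ScenarioDriver {
    // One iteration of the repeated event sequence described above
    static final List<String> SEQUENCE = List.of(
            "a", "a", "a", "a", "a", "l", "ga", "c",
            "a", "a", "a", "a", "a", "l", "gu", "c");

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (String event : SEQUENCE) {
            if (event.equals("a")) {
                int nbServices = 1 + rnd.nextInt(5);   // 1 to 5 VMService requests
                for (int i = 0; i < nbServices; i++) {
                    int cpu = 1 + rnd.nextInt(8);      // 1 to 8 CPUs
                    int ram = 1 + rnd.nextInt(16);     // 1 to 16 GB of RAM
                    System.out.printf("AddVMService cpu=%d ram=%d%n", cpu, ram);
                }
            } else {
                System.out.println("event " + event);
            }
            // at this point the configuration model would be updated and the AM triggered
        }
    }
}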
Analysis and Discussion

First of all, we can see that both strategies have globally the same behaviour whatever the received event. Indeed, in both cases a power provider is deactivated when its unit price becomes higher than the second one (Figures 8a and 8c). This shows that the AM is capable of adapting the choice of provided services according to their current prices and thus benefits from sales promotions offered by its providers. When the amount of requests for VMService increases (Figure 8f) on a regular basis, the system power consumption increases (Figures 8a and 8c) sufficiently slowly so that the system balance also increases (Figure 8e). This can be explained by the ability of the AM to decide to turn on a new PM in a just-in-time way, that is, the AM tries to allocate the newly coming VMs on existing enabled PMs. Conversely, when a client leaves the system, as expected, the number of VMService nodes decreases, but we can see that the number of PMs remains constant during this event, leading to a larger decrease of the system balance. Consequently, we can deduce that the AM has decided in this case to privilege the reconfiguration cost criterion at the expense of the system balance criterion. Indeed, we can notice in Figure 8b that the number of planning actions remains limited for the event l. However, we can observe some differences in the values between both strategies. The main difference is in the decision to turn on a PM in the events AddVMService and CrashOnePM. In the AddVMService event, the neighborhood strategy favors the start-up of a new PM, contrary to the other strategy which favors the use of PMs already turned on. Consequently, the neighborhood strategy increases the power consumption, leading to a less interesting balance. This can be explained by the fact that the neighborhood strategy avoids modifying existing nodes, which limits its capacity for actions. Indeed, this is confirmed in Figure 8b, where the curve of the neighborhood strategy is mostly lower than the other one. However, the solving time is worse (Figure 8g) because the minimal required variable area (VA) to find a solution needs several iterations. Conversely, in the CrashOnePM event, we note that the number of PMs is mostly the same with the neighborhood strategy, while the other one systematically starts up a new PM. This illustrates the fact that, in case of node disappearance, the neighborhood strategy tries to use the existing nodes as much as possible by modifying them as little as possible. Without neighborhood, the controller is able to directly modify all variables of the model. As a result, it is more difficult to find a satisfiable configuration, which comes at the expense of a long solving time (Figure 8g). Finally, in order to keep an acceptable solving time while limiting the number of planning actions and maximizing the balance, it is interesting to choose the strategy according to the event. Indeed, the neighborhood strategy is efficient to repair node disappearances, but the system balance may be lower in case of a new client arrival. Although our AM is generic, we could observe that with the appropriate strategy, it can take decisions in less than 10 seconds for several hundred nodes. In terms of a CSP problem, the considered system's size corresponds to an order of magnitude of 1 million variables and 300,000 constraints.
Moreover, the taken decisions systematically increase the balance in case of favorable events (new service request from a client, price drop from a provider, etc.) and limit its degradation in case of adverse events (component crash, etc.).

Related Work

In order to discuss the proposed solution, we identified common characteristics we believe important for autonomic Cloud (modeling) solutions. Table 1 compares our approach with other existing work regarding different criteria: 1) Genericity - the solution can support all Cloud system layers (e.g., XaaS), or is specific to some particular and well-identified layers; 2) UI/Language - it can provide a proper user interface and/or a modeling language intended for the different Cloud actors; 3) Interoperability - it can interoperate with other existing/external solutions, and/or is compatible with a Cloud standard (e.g., TOSCA); 4) Runtime support - it can deal with runtime aspects of Cloud systems, e.g., provide support for autonomic loops and/or synchronization. In the industrial Cloud community, there are many existing multi-cloud APIs/libraries 8 9 and DevOps tools 10 11. APIs enable IaaS provider abstraction, therefore easing the control of many different Cloud services, and generally focus on the IaaS client side. DevOps tools, in turn, provide scripting languages and execution platforms for configuration management. They rather provide support for the automation of the configuration, deployment and installation of Cloud systems in a programmatic/imperative manner. The Cloudify 12 platform overcomes some of these limitations. It relies on a variant of the TOSCA standard [START_REF]Topology and Orchestration Specification for Cloud Applications (TOSCA)[END_REF] to facilitate the definition of Cloud system topologies and configurations, as well as to automate their deployment and monitoring. In the same vein, Apache Brooklyn 13 leverages Autonomic Computing [START_REF] Kephart | The Vision of Autonomic Computing[END_REF] to provide support for runtime management (via sensors/actuators allowing for dynamically monitoring and changing the application when needed). However, both Cloudify and Brooklyn focus on the application/client layer and are not easily applicable to all XaaS layers. Moreover, while Brooklyn is very handy for particular types of adaptation (e.g., imperative event-condition-action ones), it may be limited in handling adaptation within larger architectures (i.e., considering many components/services and more complex constraints). Our solution, instead, follows a declarative and holistic approach which is more appropriate for this kind of context. Recently, OCCI (Open Cloud Computing Interface) has become one of the first standards in the Cloud. The kernel of OCCI is a generic resource-oriented metamodel [START_REF] Nyren | Open cloud computing interface -core, specification document[END_REF], which lacks a rigorous and formal specification as well as the concept of (re)configuration. To tackle these issues, the authors of [START_REF] Merle | A precise metamodel for open cloud computing interface[END_REF] specify the OCCI Core Model with the Eclipse Modeling Framework (EMF) 14, whereas its static semantics is rigorously defined with the Object Constraint Language (OCL) 15. An EMF-based OCCI model can ease the description of an XaaS, which is enriched with OCL constraints and thus verified by many MDE tools.
The approach, however, does not cope with autonomic decisions at runtime that have to be made in order to meet those OCL invariants. The European project 4CaaSt proposed the Blueprint Templates abstract language [START_REF] Nguyen | Blueprint Template Support for Engineering Cloud-based Services[END_REF] to describe Cloud services over multiple PaaS/IaaS providers. In the same direction, the Cloud Application Modeling Language [START_REF] Bergmayr | UML-based Cloud Application Modeling with Libraries, Profiles, and Templates[END_REF] studied in the ARTIST EU project [START_REF] Menychtas | Software Modernization and Cloudification Using the ARTIST Migration Methodology and Framework[END_REF] suggests using profiled UML to model (and later deploy) Cloud applications regardless of their underlying infrastructure. Similarly, the mOSAIC EU project proposes an open-source and Cloud vendor-agnostic platform [START_REF] Sandru | Building an Open-Source Platform-as-a-Service with Intelligent Management of Multiple Cloud Resources[END_REF]. Finally, StratusML [START_REF] Hamdaqa | A Layered Cloud Modeling Framework[END_REF] provides another language for Cloud applications dealing with different layers to address the various Cloud stakeholders' concerns. All these approaches focus on how to enable the deployment of applications (SaaS or PaaS) in different IaaS providers. Thus they are quite layer-specific and do not provide support for autonomic adaptation. The MODAClouds EU project [START_REF] Ardagna | MODAClouds: A Model-driven Approach for the Design and Execution of Applications on Multiple Clouds[END_REF] introduced some support for runtime management of multiple Clouds, notably by proposing CloudML as part of the Cloud Modeling Framework (CloudMF) [START_REF] Ferry | Towards Model-Driven Provisioning, Deployment, Monitoring, and Adaptation of Multi-cloud Systems[END_REF][START_REF] Ferry | CloudMF: Applying MDE to Tame the Complexity of Managing Multi-cloud Applications[END_REF]. As in our approach, CloudMF provides a generic provider-agnostic model that can be used to describe any Cloud provider, as well as mechanisms for runtime management relying on Models@Runtime techniques [START_REF] Blair | Models@ run.time[END_REF]. In the PaaSage EU project [START_REF] Rossini | Cloud Application Modelling and Execution Language (CAMEL) and the PaaSage Workflow[END_REF], CAMEL [START_REF] Achilleos | Business-Oriented Evaluation of the PaaSage Platform[END_REF] extended CloudML and integrated other languages such as the Scalability Rule Language (SRL) [START_REF] Domaschka | Towards a Generic Language for Scalability Rules[END_REF]. However, contrary to our generic approach, in these cases the adaptation decisions are delegated to third-party tools and tailored to specific problems/constraints [START_REF] Silva | Model-Driven Design of Cloud Applications with Quality-of-Service Guarantees: The MODAClouds Approach[END_REF]. The framework Saloon [START_REF] Quinton | SALOON: a Platform for Selecting and Configuring Cloud Environments[END_REF] was also developed in this same project, relying on feature models to provide support for automatic Cloud configuration and selection. Similarly, [START_REF] Dastjerdi | An effective architecture for automated appliance management system applying ontology-based cloud discovery[END_REF] proposes the use of ontologies to express variability in Cloud systems. Finally, Mastelic et al. [START_REF] Mastelic | Towards uniform management of cloud services by applying model-driven development[END_REF] propose a unified model intended to facilitate the deployment and monitoring of XaaS systems.
These approaches fill the gap between application requirements and cloud providers' configurations but, unlike our approach, they focus on the initial configuration (at deploy-time), not on the run-time (re)configuration. Recently, the ARCADIA EU project proposed a framework to cope with highly adaptable distributed applications designed as micro-services [START_REF] Gouvas | A Context Model and Policies Management Framework for Reconfigurable-by-design Distributed Applications[END_REF]. While at a very early stage and with a different scope than ours, it may be interesting to follow this work in the future. Among other existing approaches, we can cite the Descartes modeling language [START_REF] Kounev | A Model-Based Approach to Designing Self-Aware IT Systems and Infrastructures[END_REF], which is based on high-level metamodels to describe resources, applications, adaptation policies, etc. On top of Descartes, a generic control loop is proposed to fulfill some requirements for quality-of-service and resource management. Quite similarly, Pop et al. [START_REF] Pop | Support Services for Applications Execution in Multiclouds Environments[END_REF] propose an approach to support the deployment and autonomic management at runtime on multiple IaaS. However, both approaches target only Cloud systems structured as a SaaS deployed on an IaaS, whereas our approach allows modeling Cloud systems at any layer. In [START_REF] Mohamed | An autonomic approach to manage elasticity of business processes in the cloud[END_REF], the authors extend OCCI in order to support autonomic management for Cloud resources, describing the elements needed to make a given Cloud resource autonomic regardless of the service level. This extension allows autonomic provisioning of Cloud resources, driven by elasticity strategies based on imperative Event-Condition-Action rules. The adaptation policies are, however, focused on the business applications, while our declarative approach, thanks to a constraint solver, is capable of controlling any target XaaS system so as to keep it close to a consistent and/or optimal configuration. In [START_REF] García-Galán | User-centric Adaptation of Multi-tenant Services: Preference-based Analysis for Service Reconfiguration[END_REF], feature models are used to define the configuration space (along with user preferences) and game theory is considered as a decision-making tool. This work focuses on features that are selected in a multi-tenant context, whereas our approach targets the automated computation of SLA-compliant configurations in a cross-layer manner. Several approaches on SLA-based resource provisioning - and based on constraint solvers - have been proposed. Like in our approach, the authors of [START_REF] Dougherty | Model-driven auto-scaling of green cloud computing infrastructure[END_REF] rely on MDE techniques and constraint programming to find consistent configurations of VM placement in order to optimize energy consumption. However, no modeling or high-level language support is provided. Moreover, the focus remains on the IaaS infrastructure, so there is no cross-layer support. In [START_REF] Ghanbari | Optimal autoscaling in a iaas cloud[END_REF], the authors propose a new approach to autoscaling that utilizes a stochastic model predictive control technique to facilitate resource allocation and release, meeting the SLO of the application provider while minimizing their cost.
They also use a convex optimization solver for cost functions, but no detail is provided about its implementation. Besides, the approach addresses only the relationship between SaaS and IaaS layers, while in our approach any XaaS service can be defined. To the best of our knowledge, there is currently no work in the literature that features at the same time genericity w.r.t. the Cloud layers, interoperability with standards (such as TOSCA), high-level modeling language support and some autonomic runtime management capabilities. The proposed model-based architecture described in this paper is an initial step in this direction.

Conclusion

The CoMe4ACloud architecture is a generic solution for the autonomous runtime management of heterogeneous Cloud systems. It unifies the main characteristics and objectives of Cloud services. This model enabled us to derive a unique and generic Autonomic Manager (AM) capable of managing any Cloud service, regardless of the layer. The generic AM is based on a constraint solver which reasons on very abstract concepts (e.g., nodes, relations, constraints) and tries to find the best balance between costs and revenues while meeting constraints regarding the established Service Level Agreements and the service itself. From the Cloud administrators' and experts' point of view, this is an interesting contribution because it frees them from the difficult task of conceiving and implementing purpose-specific AMs. Indeed, this task can now be simplified by expressing the specific features of the XaaS Cloud system with a domain-specific language based on the TOSCA standard. Our approach was evaluated experimentally, with a qualitative study. Results have shown that, although generic, our AM is able to find satisfiable configurations within reasonable solving times by taking the established SLAs into account and by limiting the reconfiguration overhead. We also showed how we managed the integration with real Cloud systems such as OpenStack, while remaining generic. For future work, we intend to apply CoMe4ACloud to other contexts somehow related to Cloud Computing. For instance, we plan to experiment with our approach in the domain of the Internet of Things or the Cloud-based Internet of Things, which may incur challenges regarding scalability in terms of model size. We also plan to investigate how our approach could be used to address self-protection, that is, to be able to deal with security aspects in an autonomic manner. Last but not least, we believe that the constraint solver may be insufficient to make decisions in a durable way, i.e., by considering the past history or even possible future states of the managed element. A possible alternative to overcome this limitation is to combine our constraint programming-based decision-making tool with control theoretical approaches for computing systems.

Figure 1: Overview of the model-based architecture in CoMe4ACloud.
Figure 2: Overview of the Topology metamodel - Design time.
Figure 3: Overview of the Configuration metamodel - Runtime.
Figure 4: Example of configuration model using base Service-oriented node types (illustrative representation).
Figure 5: Examples of configuration transitions in the set of configurations.
Figure 6: Examples of configuration transitions.
Algorithm 1 (excerpt, lines 4-6):
4  initializer ← buildInitializer(SatisfiableConf, MinBalance, withNeighborhood);
5  while not found solution and not error do
6    ...
   error("impossible to initialize variables")

Listing 1: Topology excerpt.
Topology: IaaS
node_types:
  InternalComponent: ...
  PM:
    derived_from: InternalComponent
    properties:
      impactOfEnabling: 40
      impactOfDisabling: 30
      ...
  VM:
    derived_from: InternalComponent
    properties: ...
  Cluster:
    derived_from: InternalComponent
    properties:
      constant ClusterConsOneCPU:
        type: integer
      constant ClusterConsOneRAM:
        type: integer
      constant ClusterConsMinOnePM:
        type: integer
      variable ClusterNbCPUActive:
        type: integer
        equal: Sum(Pred, PM.PmNbCPUAllocated)
      variable ClusterCurConsumption:
        type: integer
        equal: ClusterConsMinOnePM * NbLink(Pred) + ClusterNbCPUActive * ClusterConsOneCPU + ClusterConsOneRAM * Sum(Pred, PM.PmSizeRAMAllocated)
  Power:
    properties:
      variable PowerCurConsumption:
        type: integer
        equal: Sum(Pred, Cluster.ClusterCurConsumption)
    constraints:
      PowerCurConsumption:
        less_or_equal: PowerCapacity
  ...
relationship_types:
  VM_To_PM:
    valid_source_types: VM
    valid_target_types: PM
  PM_To_Cluster:
    valid_source_types: PM
    valid_target_types: Cluster
  Cluster_To_Power:
    valid_source_types: Cluster
    valid_target_types: Power
  ...

Listing 2: Configuration excerpt.
  ...
  ClusterCurConsumption: 0
  ClusterNbCPUActive: 0
  ClusterConsOneCPU: 1
  ClusterConsOneRAM: 0
  ClusterConsMinOnePM: ...

Figure 7: Synchronization with the real running system.
Listing 3: Excerpt of an IaaS adaptor with OpenStack.
Execution of the Analyzer The Analyzer needs four inputs to process the next configuration: a) the current configuration model, which may not be satisfiable (e.g., c0 in Section 3.1); b) the most recent satisfiable configuration model (e.g., c0 in Section 3.1); c) att_rev^t - att_exp^t, where att_rev^t ∈ atts_{n_RC^t} ∧ att_rev^t = SysRev; d) a boolean that indicates whether to use the Neighborhood Search Strategy, which is explained below.
1    Analyse(CurrentConf, SatisfiableConf, MinBalance, withNeighborhood)
     Result: a satisfiable Configuration
2    begin
3        solver ← buildConstraintModelFrom(CurrentConf);
Table 1: Comparison of Cloud (modeling) solutions (full support vs. partial support, noted ∼) along the criteria Genericity, UI/Language, Interoperability, Runtime support and APIs/DevOps, for Cloudify, Brooklyn, the approaches of [32]-[51] and CoMe4ACloud.
... provides another language for Cloud applications dealing with different layers to address the various Cloud stakeholders' concerns. All these approaches focus on how to enable the deployment of applications (SaaS or PaaS) in different ...
https://come4acloud.github.io/
Since the search for an optimal configuration (a configuration where the function H() has the maximum possible value) may be too costly in terms of execution time, it is possible to assume that the execution time of the control function is limited by a bound set by the administrator.
http://www.openstack4j.com
CoMe4ACloud OpenStack Demo: http://hyperurl.co/come4acloud_runtime
CoMe4ACloud Moodle Demo: http://hyperurl.co/come4acloud
Apache jclouds: https://jclouds.apache.org
Deltacloud: https://deltacloud.apache.org
Puppet: https://puppet.com
Chef: https://www.chef.io/chef/
http://getcloudify.org
https://brooklyn.apache.org
https://eclipse.org/modeling/emf
http://www.omg.org/spec/OCL
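To make the Analyse contract above more concrete, the following toy sketch mimics its decision step on a deliberately small search space: it enumerates candidate assignments of VMs to PMs, discards those violating capacity constraints or falling below the MinBalance lower bound, and among the remaining candidates maximizes the cost/revenue balance while preferring the one closest to the last satisfiable configuration (to limit reconfiguration overhead). The VM/PM sizes, prices and the brute-force enumeration are illustrative assumptions of ours; the actual AM works on the generic node/relation model with a constraint solver rather than by explicit enumeration.

    from itertools import product

    # Hypothetical toy model: assign each VM to one PM (sizes and prices are made up).
    VMS = {"vm1": 2, "vm2": 1, "vm3": 3}      # CPU demand per VM
    PMS = {"pm1": 4, "pm2": 4}                # CPU capacity per PM
    REVENUE_PER_VM = 10                       # revenue for hosting one VM
    COST_PER_ACTIVE_PM = 6                    # cost of one powered-on PM

    def balance(assignment):
        """Revenues minus costs for a VM -> PM assignment."""
        return REVENUE_PER_VM * len(assignment) - COST_PER_ACTIVE_PM * len(set(assignment.values()))

    def satisfiable(assignment):
        """Capacity constraint: CPU demand on each PM must not exceed its capacity."""
        load = {pm: 0 for pm in PMS}
        for vm, pm in assignment.items():
            load[pm] += VMS[vm]
        return all(load[pm] <= PMS[pm] for pm in PMS)

    def distance(a, b):
        """Number of VMs whose placement changes (proxy for reconfiguration overhead)."""
        return sum(1 for vm in a if a[vm] != b.get(vm))

    def analyse(last_satisfiable, min_balance):
        """Best satisfiable assignment with balance >= min_balance, close to the previous one."""
        best_key, best = None, None
        for combo in product(PMS, repeat=len(VMS)):
            cand = dict(zip(VMS, combo))
            if not satisfiable(cand) or balance(cand) < min_balance:
                continue
            key = (balance(cand), -distance(cand, last_satisfiable))
            if best_key is None or key > best_key:
                best_key, best = key, cand
        return best   # None means no satisfiable configuration reaches this balance level

    previous = {"vm1": "pm1", "vm2": "pm1", "vm3": "pm2"}
    print(analyse(previous, min_balance=10))

With these toy numbers the call returns the previous placement unchanged, since it is already satisfiable and no rearrangement improves the balance; shrinking the PM capacities or raising min_balance forces the sketch either to migrate VMs or to report that no configuration satisfies the request.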
77,815
[ "1225401", "17629", "15836", "8201" ]
[ "525233", "489559", "527942", "489559", "525283", "541722", "481373", "489559", "489559", "525233" ]
01762840
en
[ "phys" ]
2024/03/05 22:32:13
2018
https://hal.science/hal-01762840/file/TardaniLangmuir2018_postprint.pdf
Franco Tardani Wilfrid Neri Cecile Zakri Hamid Kellay Annie Colin Philippe Poulin Shear Rheology Control of Wrinkles and Patterns in Graphene Oxide Films Drying graphene oxide (GO) films are subject to extensive wrinkling, which largely affects their final properties. Wrinkles were shown to be suitable in biotechnological applications; however, they negatively affect the electronic properties of the films. Here, we report on wrinkle tuning and patterning of GO films under stress-controlled conditions during drying. GO flakes assemble at an air-solvent interface; the assembly forms a skin at the surface and may bend due to volume shrinkage while drying. We applied a modification of evaporative lithography to spatially define the evaporative stress field. Wrinkle alignment is achieved over cm² areas. The wavelength (i.e., wrinkle spacing) is controlled in the μm range by the film thickness and GO concentration. Furthermore, we propose the use of nanoparticles to control capillary forces to suppress wrinkling. An example of a controlled pattern is given to elucidate the potential of the technique. The results are discussed in terms of classical elasticity theory. Wrinkling is the result of bending of the wet solid skin layer assembled on a highly elastic GO dispersion. Wavelength selection is the result of energy minimization between the bending of the skin and the elastic deformation of the GO supporting dispersion. The results strongly suggest the possibility to tune wrinkles and patterns by simple physicochemical routes. INTRODUCTION Graphene oxide (GO) is broadly used for the manufacture of graphene-based membranes and films. These are currently investigated for a vast number of technological (electronics, [START_REF] Eda | Chemically Derived Graphene Oxide: Towards Large Area Thin Film Electronics and Optoelectronics[END_REF][START_REF] Yan | Graphene based flexible and stretchable thin film transistors[END_REF][START_REF] Chee | Flexible Graphene Based Supercapacitors: A Review[END_REF] optics, optoelectronics, [START_REF] Eda | Chemically Derived Graphene Oxide: Towards Large Area Thin Film Electronics and Optoelectronics[END_REF][START_REF] Chang | Graphene Based Nanomaterials: Synthesis, Properties, and Optical and Optoelectronic Applications[END_REF] filtration systems [START_REF] Liu | Graphene based membranes[END_REF] ) and biotechnological (biomedical devices and tissue engineering [START_REF] Kumar | Comprehensive Review on the Use of Graphene Based Substrates for Regenerative Medicine and Biomedical Devices[END_REF] ) applications. The oxygen-containing groups make GO soluble in water and thus easy to handle under environmentally friendly conditions. Graphene-like properties are quickly restored after chemical [START_REF] Chua | Chemical reduction of graphene oxide: a synthetic chemistry viewpoint[END_REF] or thermal [START_REF] Pei | The reduction of graphene oxide[END_REF] reduction. Moreover, the large shape anisotropy of the sheets assures liquid crystalline (LC) behavior even under dilute conditions. [START_REF] Aboutalebi | Spontaneous Formation of Liquid Crystals in Ultralarge Graphene Oxide Dispersions[END_REF][START_REF] Xu | Aqueous Liquid Crystals of Graphene Oxide[END_REF][START_REF] Kim | Graphene Oxide Liquid Crystals[END_REF] Under shear, GO sheets tend to align and flatten due to the suppression of thermal undulations.
[START_REF] Poulin | Superflexibility of graphene oxide[END_REF] Exotic assemblies can be expected if instabilities set up during deposition, by analogy to lamellar phases. [START_REF] Diat | Effect of shear on a lyotropic lamellar phase[END_REF] Some effort was recently devoted to exploiting the LC behavior to tune the film structure. [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF][START_REF] Hong | Controlling wrinkles and assembly patterns in dried graphene oxide films using lyotropic graphene oxide liquid crystals[END_REF] Shear or liquid crystalline behavior directly affects the structure of wet films and membranes. However, solid films are used in actual devices and applications, requiring a drying step after wet deposition. Drying mechanisms are expected to play a significant role in the film structure beyond the initial possible ordering in the wet state. This is a critical issue especially for micrometer thick coatings, where drying induced inhomogeneity is likely to occur. The drying of a colloidal dispersion is a quite complex process. Changes in concentration and viscosity and the setting up of instabilities are only some of the processes the film undergoes during drying. [START_REF] Routh | Drying of thin colloidal films[END_REF] The coffee ring [START_REF] Deegan | Capillary flow as the cause of ring stains from dried liquid drops[END_REF] effect and cracking [START_REF] Singh | Cracking in Drying Colloidal Films[END_REF] may affect the macroscopic homogeneity of the manufactured film. Dry graphene oxide deposits are commonly characterized by extensive wrinkling. [START_REF] Hong | Water front recession and the formation of various types of wrinkles in dried graphene oxide droplets[END_REF][START_REF] Ahmad | Water assisted stable dispersal of graphene oxide in non dispersible solvents and skin formation on the GO dispersion[END_REF] The phenomenon is associated with the application of tensile/compressive stresses in 2D systems. In the case of drying GO dispersions, the stress can arise from water evaporation. During evaporation, GO flakes progressively assemble in a stack fashion. The presence of hydrogen bonds increases interflake friction, and sliding is impeded. This no slide condition results in buckling and folding of the microscopic flake assembly [START_REF] Hong | Water front recession and the formation of various types of wrinkles in dried graphene oxide droplets[END_REF][START_REF] Ahmad | Water assisted stable dispersal of graphene oxide in non dispersible solvents and skin formation on the GO dispersion[END_REF][START_REF] Guo | Hydration Responsive Folding and Unfolding in Graphene Oxide Liquid Crystal Phases[END_REF] as a consequence of volume shrinkage. The presence of ripples and wrinkles may affect the final properties of the film. [START_REF] Cote | Tunable assembly of graphene oxide surfactant sheets: wrinkles, overlaps and impacts on thin film properties[END_REF] In principle, this effect can be desirable or not depending on the final application. Wrinkling can be sought after in biological applications. Indeed, wrinkled GO films are suitable for anisotropic cell growth [START_REF] Wang | wavelength tunable graphene based surface topographies for directing cell alignment and morphology[END_REF] and antibacterial activity. [START_REF] Zou | Wrinkled Surface Mediated Antibacterial Activity of Graphene Oxide Nanosheets[END_REF] Wrinkling can also improve film flexibility. 
A wrinkled layer can be stretched or bent with a reduced propensity for cracking. In addition, the excess surface area of wrinkles can be useful to enhance the energy storage capabilities of supercapacitors. However, wrinkling should be suppressed in the preparation of other electronic devices. For instance, wrinkled films showed a very large scattering of measured sheet resistance values. [START_REF] Cote | Tunable assembly of graphene oxide surfactant sheets: wrinkles, overlaps and impacts on thin film properties[END_REF] Wrinkles are also undesirable for optical applications in which the film roughness and inhomogeneity yield light scattering. It is therefore critical to tune the formation of wrinkles and to develop a better understanding to control their structure or even to suppress them. Nowadays, it is possible to induce controlled wrinkle formation through the application of known mechanical stresses to dried films. For instance, mono- or few-layer graphene films were transferred onto prestrained stretchable substrates. [START_REF] Zang | Multifunctionality and control of the crumpling and unfolding of large area graphene[END_REF][START_REF] Thomas | Controlled Crumpling of Graphene Oxide Films for Tunable Optical Transmittance[END_REF] As the strain was released, films started wrinkling. In another example, the strain appeared spontaneously when ultrathin graphite films were suspended on predefined trenches. [START_REF] Bao | Controlled ripple texturing of suspended graphene and ultrathin graphite membranes[END_REF] In this latter case, a mechanical stress was also thermally induced by the mismatch of the thermal expansion coefficients of the graphene layer and that of the substrate. Concerning GO liquid crystalline systems, only a few research efforts can be found in the literature. [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF][START_REF] Hong | Controlling wrinkles and assembly patterns in dried graphene oxide films using lyotropic graphene oxide liquid crystals[END_REF] In the first case, [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF][START_REF] Hong | Water front recession and the formation of various types of wrinkles in dried graphene oxide droplets[END_REF] the authors related the shear banding of the LC flow alignment to the wrinkles that appear after drying. In a more recent study on wrinkling, thick films were produced at high temperature (>50 °C) and low blade coating velocities. [START_REF] Hong | Controlling wrinkles and assembly patterns in dried graphene oxide films using lyotropic graphene oxide liquid crystals[END_REF] Different types of wrinkles were observed as a function of the drying conditions. Here, we show by using suspensions with different compositions that the rheological properties of the suspensions are key features in controlling the wrinkles and patterns formed upon drying. We show in particular how the wavelength of the wrinkles varies with the composition of the GO solution, without any obvious link with pre-existing ordering in the GO liquid crystal films and without using prestretched substrates. A correlation with mechanical and shear rheological properties of the GO dispersion is discussed. We demonstrate the possibility to pattern wrinkles through the development of an evaporative lithography method. [START_REF] Harris | Patterning Colloidal Films via Evaporative Lithography[END_REF] The latter is shown to be negative, by contrast to other common colloid films, with the accumulation of particles in regions with lower evaporation rates.
The distinctive negative evaporative lithography is also driven by the rheological properties of the GO dispersions. Finally, we show how the inclusion of spherical nanoparticles can be used to reduce, and even completely suppress, the formation of wrinkles, therefore providing a method to make perfectly flat GO films from drying solutions. When used alone, the spherical nanoparticles typically form films which crack upon drying. This mechanism which is opposite to buckling and wrinkling results from the tension of capillary bridges between the particles. Here this mechanism is used to balance the spontaneous tendency of GO films to wrinkle. The present results therefore provide a comprehensive basis to better control and tune the formation and structure of wrinkles in GO based films. EXPERIMENTAL SECTION Materials. Commercial graphene oxide solutions in water from Graphenea were used. The graphene oxide concentration is 4.0 mg/ mL. The flake lateral size is a few micrometers. Ludox HS 40 silica nanoparticles were used as additives. Dispersions are provided by Aldrich with a batch concentration of 40 wt % in water. The average nanoparticle diameter is 12 nm. Sample Preparation. GO concentrated dispersions were obtained with a two step centrifugation process. [START_REF] Poulin | Superflexibility of graphene oxide[END_REF] The first step was a mild centrifugation (1400g, 20 min) used to remove possible aggregates. The collected dispersion was centrifuged for 45 min at 50 000g. This second step allowed the separation of a concentrated LC pellet of graphene oxide in water. The concentration, determined through the dry extract, was (4.3 ± 0.3) wt %. All the other concentrations were obtained by dilution in water. Care was taken to ensure homogeneous mixing after water addition. Usually, diluted samples were vortex mixed for at least 30 s. Nanoparticles and GO dispersions were obtained by mixing the two dispersions in a volume ratio of 1:10. After being mixed, the hybrid systems were bath sonicated for 15 min. Rheology. Rheological measurements were performed with an AR2000 stress controlled rheometer from TA Instruments. All of the samples were analyzed with 40 mm cone and plate geometry at 25.0 °C. Evaporation was avoided by the use of a trap to keep the humidity rate constant during the measurements. Film Preparation. GO films were prepared with a doctor blade combined with rod coating technology. A drop of dispersion was put on a glass substrate, and coating proceeded at a velocity, v, of 1 in./s (2.5 cm/s). Different velocities did not show any particular effect, so the slowest available was chosen. The average film size was 2.5 × 1.5 cm 2 . To avoid geometrical issues, a constant amount of material was used to obtain films with the same surface and shape. [START_REF] Kassuga | The effect of shear and confinement on the buckling of particle laden interfaces[END_REF] Care was taken to avoid substrate inclination by fixing it with a tape and checking for planarity with a bubble level. The films were produced under controlled conditions of temperature and relative humidity of 26.0 °C and 35%, respectively. The films were not removed until they were completely dry. Dynamic Evaporative Lithography. Film drying was performed in a confined environment. Soon after coating, a mask was placed above the film, at a distance of ∼300 μm. The mask size was 2.0 × 7.5 cm 2 . The mask ensured evaporation in a well controlled manner. 
Then the mask was moved at a velocity of 5-10 μm/s, according to the contact line recession speed. Faster or slower velocities caused only a small perturbation in wrinkle order, maintaining the wavelength and alignment the same. Film Characterization. The films were characterized with optical and electronic microscopy. The thickness was determined through the use of an OM Contour GTK optical profilometer. Data were analyzed with the Vision 64 software under white light illumination vertical scanning interferometry mode (VSI). Films were cut at different points to assess the different regions and to get an average film thickness. Wavelength Determination. The wrinkle spacing (i.e., wave length, λ) was determined through a Gaussian analysis of distributions of 100 measures of distances extracted from 5 pictures per sample. One dimensional fast Fourier transformation (FFT) of different linear profiles extracted from optical micrographs was also used to confirm the obtained results. RESULTS Film Casting. GO films were prepared from water based dispersions in the GO concentration range of 0.6-4.3 wt %. Coating was performed in a controlled shear rate range, γ(= v/ H w ) ≈ 100-500 s -1 , where v (2.5 cm/s) is the velocity of the moving blade and H w is the thickness of the wet film. The investigated systems showed shear thinning behavior over the whole range of tested shear rates in rheology characterizations. In confined geometries, the systems form shear bands, as already observed elsewhere. [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF] However, these flow induced structures rapidly disappeared during drying. The progressive rearrangement of flakes and the setting up of concentration gradients produced wrinkled films with no controlled structures. Control over temperature and humidity conditions was not sufficient to avoid this problem. For this reason, we developed a technique to induce progressive directional evaporation. The choice was motivated by the need to control solvent flow to promote flake assembly and solid film growth along a particular direction. The technique is schematized in Figure 1. Soon after casting, the wet films were covered with masks of different heights H m . The mask-wet film distance (H m -H w ) was fixed at ∼300 μm to stop evaporation under the covered area. The end of the film was left uncovered to start evaporation. The films progressively dried as the mask was retracted at a constant velocity, v m . During the whole process, wrinkles (Figure 1c) on the dry film developed from the wavy surface of the wet part. The controlled drying allowed control over wrinkle patterns. An average alignment was obtained along the drying direction. The volume shrinkage of the film induced a compressive stress that caused elastic instabilities. As observed elsewhere, [START_REF] Guo | Hydration Responsive Folding and Unfolding in Graphene Oxide Liquid Crystal Phases[END_REF] rehydration relaxed and suppressed the formed wrinkles. The wrinkles appeared birefringent under crossed polarizers. Surface Wrinkling Characterization. As soon as the wet film started drying, wrinkles appeared on the surface at the free air-water interface (i.e., outside the mask). These surface wrinkles were compressed and folded after complete drying. This behavior is reminiscent of the behavior of polymeric systems [START_REF] Pauchard | Mechanical instability induced by complex liquid desiccation[END_REF] for which a skin layer forms and buckles. 
The pinning of the contact line and surface tension on the (covered) wet side clamped the skin at these two ends (i.e., the wet and the dry film boundaries) of the receding front. Then, evaporation induced a reduction in volume associated with a compressive strain producing wrinkling. This process finally allowed the alignment of the wrinkles perpendicular to the receding liquid front. The spacing of parallel wrinkles was analyzed as indicated in the Experimental Section. In Figure 2, 1D FFTs taken from grayscale profiles of the microscopic pictures are shown. For all of the tested concentrations and film thicknesses, nearly the same spacing was found for wet and dry films from a given system, even though the spacing distribution appears broader for dry films. Concentration and Thickness Effects. The wrinkle spacing, λ, was determined as a function of GO concentration and dry film thickness, H, as shown in Figure 3. First, the solid content of the dry film was kept constant. The wavelength was measured for films obtained from dispersions of different GO concentrations. To achieve this, the film area was kept constant, and H w was changed proportionally to the GO concentration. As shown in Figure 3a, λ decreases with a power law like decay at increasing dispersion concentration. Second, films of different H (i.e., final solid content) were produced. A certain GO dispersion was deposited at different H w values (i.e., different final deposit, H). The process was repeated for three different GO concentrations. In this second case, λ was found to increase linearly with H (Figure 3b). The slope of this linear increase changes with concentration. Thin Films. Very thin films were prepared by a modification of the blade coating approach. A flexible scraper was attached to the blade to reproduce the technique used in ref 31 (inset of Figure 3c). At very low thickness (inset of Figure 3c), films showed colors from thin film interference, and no more wrinkles were detected. The thickness boundary h between the wrinkling of thick films and no wrinkling of thin films was determined for different GO concentrations (Figure 3c). The absence of wrinkling in thin films reflects the absence of skin formation during drying. [START_REF] De Gennes | Solvent evaporation of spin cast films: ″crust″ effects[END_REF][START_REF] Bornside | Spin coating: One dimensional model[END_REF] Actually because of its very thin structure, the film dries as a whole gel. Similar behavior was reported for polymer films. Following de Gennes, [START_REF] De Gennes | Solvent evaporation of spin cast films: ″crust″ effects[END_REF] a skin forms when the polymer concentration in the top layer of an evaporating solution increases above a given value. A steady state solvent evaporative current is then established in the skin. By comparing solvent diffusion in the vapor phase with that through the skin, he determined a limiting skin thickness (τ 0 ) of around 70 nm. Therefore, for films of comparable thickness a steady state solvent evaporation is set throughout the whole film height, and no skin formation is expected. The measured h was actually in the same range for polymer systems, representing a good approximation of the skin layer (τ) of a (hypothetical) thick film (i.e., h ≈ τ). Nanoparticle Effect. The effect of added spherical nanoparticles on film structure was investigated. First, it was checked that the nanoparticles had no effect on the GO dispersion phase behavior in the concentration range of the present study. 
No phase separation or destabilization was observed for weeks. Then, films were prepared at a fixed H_w of 40 μm. A map of the different film structures is reported in Figure 4a. Controlled Patterns. By using evaporative lithography, it is possible to control not only wrinkle alignment but also the formation of particular patterns. Two examples are reported in Figure 5. Films were dried under a fixed holed mask. The 1 mm diameter holes were hexagonally arranged with an average pitch of d_h ≤ 1 mm. The mask pattern was accurately reproduced on the films. Circular menisci were first formed under the open holes, where evaporation was higher. The meniscus diameter grew with time. Finally, different menisci joined under the covered part of the mask. This differential evaporation was responsible for the formation of hexagonal features with higher edges. Surprisingly, this behavior differs from conventional colloidal evaporative lithography. The present case is peculiar to a negative evaporative lithography, as the higher features were cast under the covered part, in contrast to the case of conventional nanoparticle aqueous dispersions. [START_REF] Harris | Patterning Colloidal Films via Evaporative Lithography[END_REF] The reverse situation was reported only in solvent mixtures (i.e., water-ethanol), when Marangoni effects play a role. [START_REF] Harris | Marangoni Effects on Evaporative Lithographic Patterning of Colloidal Films[END_REF] The same peculiar patterns were reproduced in pure GO as well as in GO-NP hybrid systems. However, the presence of NPs completely removed wrinkles. This was particularly evident through cross-polarizing microscopy. Hybrid films did not show any birefringence because of the absence of wrinkles. The wrinkle arrangement was the consequence of the stress field arising from uneven evaporation rates. Inside the open holes, small wrinkles were aligned perpendicularly with respect to the meniscus contact line, while higher crumpled regions appeared under the covered parts. In the hybrid systems, no small wrinkles were detected, while higher crumpled deposits were still present. The overall pattern appeared more blurred. DISCUSSION Wrinkling is a universal phenomenon when a compression/tension is applied to elastic films. The general theory of Cerda and Mahadevan [START_REF] Cerda | Geometry and Physics of Wrinkling[END_REF] defines the wrinkle wavelength, λ, as follows: λ ≈ (B/K)^(1/4) (1), where B is the film bending stiffness (B = E_f·τ^3, where E_f is Young's modulus and τ is the film thickness) and K is the stiffness of an effective elastic foundation. It is possible to address a particular case by knowing the physics of the system. The present system is composed of a rigid film lying on an elastic support made of the GO suspension in a gel-like liquid crystal state. In that case, K takes the form of an elastic bulk modulus, giving λ ≈ τ·(E_f/E_s)^(1/3) (2). The wavelength, λ, is proportional to the top layer thickness, τ, with a slope defined by the mismatch of the film, E_f, and substrate, E_s, Young's moduli. GO exhibits surfactant-like behavior, exposing the hydrophobic moiety at the air-water interface. [START_REF] Kim | Graphene Oxide Sheets at Interfaces[END_REF] In particular, the GO used possesses the proper C/O ratio to be easily entrapped at the interface. [START_REF] Imperiali | Interfacial Rheology and Structure of Tiled Graphene Oxide Sheets[END_REF] In the concentration range used, due to their large lateral size, flake diffusion is limited.
If the evaporation is quite fast, GO flakes tend to accumulate at the interface. Routh and Zimmerman 38 defined a specific Peclet number, Pe ≈ H_w·Ė/D, to quantify the ratio between diffusion (D) and the evaporation rate (Ė). At Pe ≫ 1, evaporation overtakes diffusion and skinning is expected. Approximating Ė as the water front velocity (∼μm/s) and the diffusion coefficient as that of a sphere of 1 μm size, one obtains Pe ≈ 8, fulfilling the above conditions. Thus, a skin layer is expected to form on the top of the film. An analogous process was already observed in polymeric systems. [START_REF] Ciampi | Skin Formation and Water Distribution in Semicrystalline Polymer Layers Cast from Solution: A Magnetic Resonance Imaging Study[END_REF] The high frictional forces among the flakes and the pinning of the contact line suppress slippage, and the skin is finally folded under compression. [START_REF] Guo | Hydration Responsive Folding and Unfolding in Graphene Oxide Liquid Crystal Phases[END_REF] The resultant in-plane compressive stress is directed by the receding water front. In this hypothesis, eq 2 could explain the different dependencies shown in Figure 3. A skin layer of thickness τ is suspended on foundations with different elasticities (i.e., GO bulk dispersions). The shear elastic modulus (G′) of bulk GO dispersions increases with concentration, as observed by rheology experiments. In principle, knowledge of the Poisson ratio for the GO foundation layer allows the determination of the bulk Young modulus, E_s. However, the qualitative trend in E_s with concentration can still be defined by a rough approximation, E_s ≈ G′. Using eq 2, an apparent bending stiffness B and its dependence on concentration can be inferred from the data in Figure 3, as shown in Figure 6. This bending stiffness increases as a power law (B ≈ conc^b) with an exponent b close to 3 (3.3). Considering the relation B = E_f·τ^3, the behavior can be explained by assuming a direct proportionality of the skin thickness, τ, with GO concentration. If one considers a Young modulus on the order of ∼10² GPa, [START_REF] Jimenez Rioboo | Elastic constants of graphene oxide few layer films: correlations with interlayer stacking and bonding[END_REF] then a skin layer in the range of 10-100 nm is expected. These considerations are actually in agreement with data obtained for very thin films (Figure 3c). The approximations of the Young and bending moduli are quite different from those obtained for monolayer graphene oxide. [START_REF] Imperiali | Interfacial Rheology and Structure of Tiled Graphene Oxide Sheets[END_REF] However, the present situation concerns a multilayer assembly, and the overall mechanical properties derive in a complex way from those of the monolayer. Compressive rheology characterization would be more appropriate in this particular situation. [START_REF] De Kretster | Compressive rheology: an overview[END_REF] However, shear and compression properties have been shown to behave in qualitatively the same manner. [START_REF] Zhou | Chemical and physical control of the rheology of concentrated metal oxide suspensions[END_REF] Therefore, the considerations above can give an idea of our wrinkling mechanics. Unfortunately, characterization of the skin layer is challenging, and a complete quantitative characterization of the film is not possible. Wrinkle alignment is a consequence of a resultant tensile stress applied perpendicularly to the drying front as the volume decreases while the contact lines of the skin are pinned.
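As a quick plausibility check of the skinning argument and of eq 2, the short script below puts numbers on both estimates. Every input (temperature, water viscosity, the flake size used in the Stokes-Einstein diffusivity, wet thickness, evaporation velocity, skin thickness and the two moduli) is an assumed order-of-magnitude value chosen for illustration rather than the authors' exact parameters, so only the orders of magnitude of Pe and λ are meaningful.

    import math

    # Assumed order-of-magnitude inputs (illustrative, not the measured values)
    k_B   = 1.38e-23    # J/K, Boltzmann constant
    T     = 298.0       # K
    eta   = 1.0e-3      # Pa*s, water viscosity
    a     = 0.5e-6      # m, radius of a ~1 um flake treated as a sphere
    H_w   = 40e-6       # m, wet film thickness
    E_dot = 0.1e-6      # m/s, evaporation (water front) velocity
    tau   = 50e-9       # m, assumed skin thickness
    E_f   = 1e11        # Pa, skin Young's modulus (~10^2 GPa)
    E_s   = 1e3         # Pa, foundation modulus, taken of the order of G'

    D = k_B * T / (6 * math.pi * eta * a)      # Stokes-Einstein diffusivity
    Pe = H_w * E_dot / D                       # Peclet number, evaporation vs. diffusion
    lam = tau * (E_f / E_s) ** (1 / 3)         # wrinkle wavelength from eq 2

    print(f"D ~ {D:.1e} m^2/s, Pe ~ {Pe:.0f}, lambda ~ {lam * 1e6:.0f} um")

With these placeholder values the script returns Pe of order 10, i.e. well above unity, and a wavelength of a few tens of micrometers, consistent with the skin-formation scenario and with the micrometer-range spacings reported above.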
The mask drives the formation of a horizontal drying front. This front separates the dry film from the liquid film under the mask. A skin on the drying front is pinned at the contact line and stretched backward by the liquid surface tension. These constraints drive a unidirectional tensile stress in response to the normal compression due to volume shrinkage. The shape of the mask assures negligible skin formation at the lateral sides controlling the alignment. The addition of nanoparticles modifies the physics of the system. In particular, we are interested here in particles that form cracks when used alone, a behavior exactly opposite to the case of GO systems which have extra surface area. The relatively low nanoparticles concentration presently used does not affect the phase behavior and the bulk mechanical properties of the dispersions. But other effects are coming into play. As known, above a critical thickness, films made solely of nanoparticles tend to crack. During evaporation, a close packed configuration of NPs forms at the contact line. As the solvent recedes (vertically) from this NP front, a large negative capillary pressure is produced in the menisci between particles. [START_REF] Lee | Why Do Drying Films Crack?[END_REF] This generates a compression stress normal to the surface and a large in plane tensile stress, as the rigid substrate prevents lateral deformation. Crack formation will release the tensile stress, at the expense of surface energy. The present situation is more complicated due to the presence of two different particles with two limiting distinct behaviors, one with the formation of cracks and the other with the formation of wrinkles. Actually, it was observed that the in plane stress induced by NPs at the surface of the films could even balance the stress that normally generates wrinkles. This can result in the total disappearance of wrinkles. The large size difference between flakes and nanoparticles is expected to produce stratification. With the same Pe argument for skin formation, one can expect an uneven distribution of flakes and nanoparticles during drying. [START_REF] Routh | Drying of thin colloidal films[END_REF] In a recent simulation, [START_REF] Cheng | Dispersing Nanoparticles in a Polymer Film via Solvent Evaporation[END_REF] the distribution of NPs in a skinning polymer solution was attributed mostly to NP-polymer and NPsolvent interactions. The nanoparticles used here are hydro philic and have a better affinity for the solvent. In this case, NPs are expected to accumulate under the superficial skin. Actually, no NPs are visible at the top of the dry films in experiments. During evaporation, volume shrinkage will set NP layers under compression. As the NPs are not deformable and their movement is hindered under confinement (i.e., GO stack assembly), in plane tensile stress is produced. This stress will first flatten the GO skin layer until there is enough disposable surface from the wrinkles, then again producing cracks. We can infer that the shape of the boundary lines comes from a complex interplay of wrinkle amplitude, wavelength, and dry thickness effects. The possibility of patterning a peculiar structure with negative lithography can be explained by considering the yield stress nature of GO liquid crystal dispersions. [START_REF] Poulin | Superflexibility of graphene oxide[END_REF] Menisci are created under the open areas of the mask due to solvent loss (Figure 5a). 
The stress generated by a Laplace pressure gradient (σ = 2γ/R ≈ 14 Pa, for water surface tension and hole size R = 1 mm) is not high enough to overtake the yield stress (i.e., σ_y > 14 Pa for 2.0 wt % GO, SI) and to induce viscous flow. Therefore, the films will retain the resulting deformation, generating the pattern. As already mentioned, a compressive yield stress would be more appropriate, but the consideration still holds. CONCLUSIONS By taking advantage of dynamic evaporative lithography and tuning the rheological properties of GO dispersions, we were able to control the wrinkling of GO films. The obtained wrinkles were aligned over a macroscopic area of different GO deposits. The wavelength was tuned by changes in the concentration and thickness of the films. The phenomenon was attributed to the formation of a skin layer, subjected to compressive strain during drying. It was shown that this compressive stress was balanced with a tensile stress to get rid of the wrinkles. The latter was simply obtained with the addition of nanoparticles, making the concept easily implementable in applications. Controlling the phenomenon of wrinkling is critical in the fabrication of particular patterns, as illustrated in the present work. Notably, wrinkling can be altered without affecting the final macroscopic texture. This is actually the case of evaporative lithography in hybrid systems. This is important since wrinkles can affect the properties of the films. As already mentioned, [START_REF] Zou | Wrinkled Surface Mediated Antibacterial Activity of Graphene Oxide Nanosheets[END_REF] the relation between GO film roughness and its antimicrobial activity has been demonstrated. We expect that tuning the wrinkle spacing may add selectivity. Patterned films can also be used as templates for controlled nanoparticle deposition. In principle, anisotropic wettability may also be obtained. There are still some open questions related to the physical mechanisms involved in the investigated phenomena. We used a modification of the elasticity theory to qualitatively describe the mechanics of the system. The foundation considered in the theory is a purely elastic material, whereas in our case the material is viscoelastic. Moreover, the whole process is dynamic, as the top layer is forming and growing continuously. Therefore, more accurate theories are needed to consider all of these phenomena. From an experimental point of view, the characterization of the skin layer would also be interesting. The determination of the skin thickness will allow the confirmation of our hypothesis. It would also be interesting to look at the NPs' spatial distribution during film drying. The characterization of NPs and their packing would be helpful in quantifying the stress built up during drying. Figure 1. Film casting and drying is shown in (a) along with the overall wrinkle alignment (2D FFT inset) (b) and the microscopic appearance of a wrinkle (c). v and v_m indicate the directions of blade and mask movement, and H_m, H_w, and H are the mask, wet film, and dry film heights, respectively. Scale bars are (b) 500 μm and (c) 500 nm. Figure 2. Wavelength determination examples, optical micrograph, grayscale profile, and 1D FFT for the wet (a) and dry (b) films. The scale bar is 100 μm. Figure 4a: three different situations were observed: wrinkling (filled squares), flattening (open squares), and cracking (crosses).
Under certain conditions of GO and NP concentrations, wrinkling was completely suppressed. The addition of NPs favored the casting of smoother films. The threshold concentration of NPs to remove wrinkling increased with GO concentration. Above a certain threshold of NP content, films underwent cracking. At the highest GO concentration used, it was not possible to reduce wrinkling without cracking the dry film. Figure 3. λ (μm) is reported as a function of GO wt % at a fixed (1.3 ± 0.5 μm) dry thickness (a) and (b) at different dry thicknesses (H, μm) for 4.3 wt % (square, red line), 1.7 wt % (circle, green line), and 0.68 wt % (triangle, blue line) GO dispersions. (c) Thin film thickness, h (nm), as a function of GO wt %. (Insets) The blade with a flexible scraper and the appearance of a film (scale bar 1.0 cm). Figure 4. Effect of NP addition. (a) Wrinkling phase diagram for hybrid films at 40 μm fixed H_w. Wrinkled (black squares), smooth (empty square), and cracked (cross) films. (b, c) SEM and (d) optical images of smooth, wrinkled, and cracked films, respectively. Scale bars are 20 (b), 30 (c), and 250 μm (d). Figure 5. Patterning of GO (2.0 wt %) and GO + NPs (2.0 and 0.1 wt %, respectively) films: (a) schematic representation of the holed mask with the drying and patterning scheme; cross-polarizing micrographs showing the obtained patterns for (b) pure GO and (c) GO-NPs films. (Inset) Optical micrograph for the hybrid films, obtained in reflection. The scale bar is 1000 μm. Figure 6. Bending stiffness (B = E_s·λ^3, N m) reported as a function of GO wt %. ACKNOWLEDGMENTS We acknowledge C. Blanc, collaborator at the GAELIC project, for fruitful discussions and E. Laurichesse for some technical support of the experimental part. Funding The research in the manuscript was conducted in the framework of the A.N.R.-funded GAELIC project. AUTHOR INFORMATION Corresponding Author *E-mail: [email protected]. Notes The authors declare no competing financial interest.
37,349
[ "1056918", "849110", "14460", "1010878", "756599" ]
[ "525419", "525419", "136813", "301585", "525419" ]
00176202
en
[ "phys", "math" ]
2024/03/05 22:32:13
2008
https://hal.science/hal-00176202v2/file/LorentzCRAS.pdf
Emanuele Caglioti François Golse The Boltzmann-Grad limit of the periodic Lorentz gas in two space dimensions The periodic Lorentz gas is the dynamical system corresponding to the free motion of a point particle in a periodic system of fixed spherical obstacles of radius r centered at the integer points, assuming all collisions of the particle with the obstacles to be elastic. In this Note, we study this motion on time intervals of order 1/r as r → 0 + . Résumé La limite de Boltzmann-Grad du gaz de Lorentz périodique en dimension deux d'espace. Le gaz de Lorentz périodique est le système dynamique correspondant au mouvement libre dans le plan d'une particule ponctuelle rebondissant de manière élastique sur un système de disques de rayon r centrés aux points de coordonnées entières. On étudie ce mouvement pour r → 0 + sur des temps de l'ordre de 1/r. Version française abrégée On appelle gaz de Lorentz le système dynamique correspondant au mouvement libre d'une particule ponctuelle dans un système d'obstacles circulaires de rayon r centrés aux sommets d'un réseau de R 2 , supposant que les collisions entre la particule et les obstacles sont parfaitement élastiques. Les trajectoires de la particule sont alors données par les formules [START_REF] Boca | The distribution of the free path lengths in the periodic two-dimensional Lorentz gas in the small-scatterer limit[END_REF]. La limite de Boltzmann-Grad pour le gaz de Lorentz consiste à supposer que le rayon des obstacles r → 0 + , et à observer la dynamique de la particule sur des plages de temps longues, de l'ordre de 1/r -voir (3) pour la loi d'échelle de Boltzmann-Grad en dimension 2. Or les trajectoires de la particule s'expriment en fonction de l'application de transfert d'obstacle à obstacle T r définie par [START_REF] Tartar | Compensated compactness and applications to partial differential equations[END_REF] -où la notation Y désigne la transformation inverse de [START_REF] Golse | On the distribution of free path lengths for the periodic Lorentz gas II. M2AN Modél[END_REF] -application qui associe, à tout paramètre d'impact h ′ ∈ [-1, 1] correspondant à une particule quittant la surface d'un obstacle dans la direction ω ∈ S 1 , le paramètre d'impact h à la collision suivante, ainsi que le temps s s'écoulant jusqu'à cette collision. (Pour une définition de la notion de paramètre d'impact, voir [START_REF] Golse | On the Periodic Lorentz Gas and the Lorentz Kinetic Equation[END_REF].) On se ramène donc à étudier le comportement limite de l'application de transfert T r pour r → 0 + . Proposition 0.1 Lorsque 0 < ω 2 < ω 1 et α = ω2 ω1 / ∈ Q, l'application de transfert T r est approchée à O(r 2 ) près par l'application T A,B,Q,N définie à la formule (14). Pour ω ∈ S 1 quelconque, on se ramène au cas ci-dessus par la symétrie (15). Les paramètres A, B, Q, N mod. 2 intervenant dans l'application de transfert asymptotique sont définis à partir du développement en fraction continue (9) de α par les formules (11) et (12). On voit sur ces formules que les paramètres A, B, Q, N mod. 2 sont des fonctions très fortement oscillantes des variables ω et r. Il est donc naturel de chercher le comportement limite de l'application de transfert T r dans une topologie faible vis à vis de la dépendance en la direction ω. 
On montre ainsi que, pour tout h ′ ∈ [-1, 1], la famille d'applications ω → T r (h ′ , ω) converge au sens des mesures de Young (voir par exemple [START_REF] Tartar | Compensated compactness and applications to partial differential equations[END_REF] p. 146-154 pour une définition de cette notion) lorsque r → 0 + vers une mesure de probabilité P (s, h|h ′ )dsdh indépendante de ω : Théorème 0.2 Pour tout Φ ∈ C c (R * + ×] -1, 1[) et tout h ′ ∈] -1, 1[, la limite (16) a lieu dans L ∞ (S 1 ) faible-* lorsque r → 0 + , où la mesure de probabilité P (s, h|h ′ )dsdh est l'image de la probabilité µ définie dans (17) par l'application (A, B, Q, N ) → T A,B,Q,N (h ′ ) de la formule (14). De plus, cette densité de probabilité de transition P (s, h|h ′ ) vérifie les propriétés (18). Le théorème ci-dessus est le résultat principal de cette Note : il montre que, dans la limite de Boltzmann-Grad, le transfert d'obstacle à obstacle est décrit de manière naturelle par une densité de probabilité de transition P (s, h|h ′ ), où s est le laps de temps entre deux collisions successives avec les obstacles (dans l'échelle de temps de la limite de Boltzmann-Grad), h le paramètre d'impact lors de la collision future et h ′ celui correspondant à la collision passée. Le fait que la probabilité de transition P (s, h|h ′ ) soit indépendante de la direction suggère l'hypothèse d'indépendance (H) des quantités A, B, Q, N mod. 2 correspondant à des collisions successives. Théorème 0.3 Sous l'hypothèse (H), pour toute densité de probabilité f in ∈ C c (R 2 × S 1 ), la fonction de distribution f r ≡ f r (t, x, ω) de la théorie cinétique, définie par (3) converge dans L ∞ (R + × R 2 × S 1 ) vers la limite (22) lorsque r → 0 + , où F est la solution du problème de Cauchy (21) posé dans l'espace des phases étendu (x, ω, s, h) ∈ R 2 × S 1 × R * + ×] -1, 1[. Dans le cas d'obstacles aléatoires indépendants et poissonniens, Gallavotti a montré que la limite de Boltzmann-Grad du gaz de Lorentz obéit à l'équation cinétique de Lorentz (4). Le cas périodique est absolument différent : en se basant sur des estimations (cf. [START_REF] Bourgain | On the distribution of free path lengths for the periodic Lorentz gas[END_REF] et [START_REF] Golse | On the distribution of free path lengths for the periodic Lorentz gas II. M2AN Modél[END_REF]) du temps de sortie du domaine Z r défini dans (1), on démontre que la limite de Boltzmann-Grad du gaz de Lorentz périodique ne peut pas être décrite par l'équation de Lorentz (4) sur l'espace des phases R 2 × S 1 classique de la théorie cinétique : voir [START_REF] Golse | On the Periodic Lorentz Gas and the Lorentz Kinetic Equation[END_REF]. Si l'hypothèse (H) ci-dessous était vérifiée, le modèle cinétique (22) dans l'espace des phases étendu fournirait donc l'équation devant remplacer l'équation cinétique classique de Lorentz (4) dans le cas périodique. The Lorentz gas The Lorentz gas is the dynamical system corresponding to the free motion of a single point particle in a periodic system of fixed spherical obstacles, assuming that collisions between the particle and any of the obstacles are elastic. Henceforth, we assume that the space dimension is 2 and that the obstacles are disks of radius r centered at each point of Z 2 . Hence the domain left free for particle motion is Z r = {x ∈ R 2 | dist(x, Z 2 ) > r} , where it is assumed that 0 < r < 1 2 . 
(1) Assuming that the particle moves at speed 1, its trajectory starting from x ∈ Z r with velocity ω ∈ S 1 at time t = 0 is t → (X r , Ω r )(t; x, ω) ∈ R 2 × S 1 given by Ẋr (t) = Ω r (t) and Ωr (t) = 0 whenever X r (t) ∈ Z r , X r (t + 0) = X r (t -0) and Ω r (t + 0) = R[X r (t)]Ω r (t -0) whenever X r (t -0) ∈ ∂Z r , (2) denoting ˙= d dt and R[X r (t)] the specular reflection on ∂Z r at the point X r (t) = X r (t ± 0). Assume that the initial position x and direction ω of the particle are distributed in Z r × S 1 with some probability density f in ≡ f in (x, ω), and define f r (t, x, ω) := f in (rX r (-t/r; x, ω), Ω r (-t/r; x, ω)) whenever x ∈ Z r . ( 3 ) We are concerned with the limit of f r as r → 0 + in some appropriate sense to be explained below. In the 2-dimensional setting considered here, this is precisely the Boltzmann-Grad limit. In the case of a random (Poisson), instead of periodic, configuration of obstacles, Gallavotti [START_REF] Gallavotti | Rigorous theory of the Boltzmann equation in the Lorentz gas[END_REF] proved that the expectation of f r converges to the solution of the Lorentz kinetic equation for (x, ω) ∈ R 2 × S 1 : (∂ t + ω • ∇ x )f (t, x, ω) = S 1 (f (t, x, ω -2(ω • n)n) -f (t, x, ω))(ω • n) + dn , f t=0 = f in . (4) In the case of a periodic distribution of obstacles, the Boltzmann-Grad limit of the Lorentz gas cannot be described by a transport equation as above: see [START_REF] Golse | On the Periodic Lorentz Gas and the Lorentz Kinetic Equation[END_REF] for a complete proof, based on estimates on the free path length to be found in [START_REF] Bourgain | On the distribution of free path lengths for the periodic Lorentz gas[END_REF] and [START_REF] Golse | On the distribution of free path lengths for the periodic Lorentz gas II. M2AN Modél[END_REF]. This limit involves instead a linear Boltzmann equation on an extended phase space with two new variables taking into account correlations between consecutive collisions with the obstacles that are an effect of periodicity: see Theorem 4.1. The transfer map Denote by n x the inward unit normal to Z r at the point x ∈ ∂Z r , consider Γ ± r = {(x, ω) ∈ ∂Z r × S 1 | ± ω • n x > 0} , (5) and let Γ ± r /Z 2 be the quotient of Γ ± r under the action of Z 2 by translation on the x variable. For (x, ω) ∈ Γ + r , let τ r (x, ω) be the exit time from x in the direction ω and h r (x, ω) be the impact parameter: τ r (x, ω) = inf{t > 0 | x + tω ∈ ∂Z r } , and h r (x, ω) = sin(ω, n x ) . (6) Obviously, the map Γ + r /Z 2 ∋ (x, ω) → (h r (x, ω), ω) ∈] -1, 1[×S 1 (7) coordinatizes Γ + r /Z 2 , and we henceforth denote Y r its inverse. For each r ∈]0, 1 2 [, consider now the transfer map T r : ] -1, 1[×S 1 → R * + ×] -1, 1[ defined by T r (h ′ , ω) = (rτ r (Y r (h ′ , ω)), h r (X r (τ r (Y r (h ′ , ω)); Y r (h ′ , ω)), Ω r (τ r (Y r (h ′ , ω)); Y r (h ′ , ω)))) . (8) For a particle leaving the surface of an obstacle in the direction ω with impact parameter h ′ , the transition map T r (h ′ , ω) = (s, h) gives the (rescaled) distance s to the next collision, and the corresponding impact parameter h. Obviously, each trajectory (2) of the particle can be expressed in terms of the transfer map T r and iterates thereof. The Boltzmann-Grad limit of the periodic Lorentz gas is therefore reduced to computing the limiting behavior of T r as r → 0 + , and this is our main purpose in this Note. We first need some pieces of notation. Assume ω = (ω 1 , ω 2 ) with 0 < ω 2 < ω 1 , and α = ω 2 /ω 1 ∈]0, 1[\Q. 
Consider the continued fraction expansion of α: α = [0; a_0, a_1, a_2, ...] = 1/(a_0 + 1/(a_1 + ...)) . (9) Define the sequences of convergents (p_n, q_n)_{n≥0} and errors (d_n)_{n≥0} by the recursion formulas p_{n+1} = a_n·p_n + p_{n-1} , p_0 = 1 , p_1 = 0 , d_n = (-1)^{n-1}(q_n·α - p_n) , q_{n+1} = a_n·q_n + q_{n-1} , q_0 = 0 , q_1 = 1 , (10) and let N(α, r) = inf{n ≥ 0 | d_n ≤ 2r√(1 + α²)} , and k(α, r) = -[(2r√(1 + α²) - d_{N(α,r)-1}) / d_{N(α,r)}] , (11) where [·] denotes the integer part. Proposition 2.1 For each ω = (cos θ, sin θ) with 0 < θ < π/4, set α = tan θ and ǫ = 2r√(1 + α²), and A(α, r) = 1 - d_{N(α,r)}/ǫ , B(α, r) = 1 - (d_{N(α,r)-1} - k(α, r)·d_{N(α,r)})/ǫ , Q(α, r) = ǫ·q_{N(α,r)} . (12) In the limit r → 0+, the transition map T_r defined in (8) is explicit in terms of A, B, Q, N up to O(r²): T_r(h′, ω) = T_{A(α,r),B(α,r),Q(α,r),N(α,r)}(h′) + (O(r²), 0) for each h′ ∈ ]-1, 1[. (13) In the formula above, T_{A,B,Q,N}(h′) = (Q, h′ - 2(-1)^N(1 - A)) if (-1)^N·h′ ∈ ]1 - 2A, 1] , T_{A,B,Q,N}(h′) = (Q′, h′ + 2(-1)^N(1 - B)) if (-1)^N·h′ ∈ [-1, -1 + 2B[ , T_{A,B,Q,N}(h′) = (Q′ + Q, h′ + 2(-1)^N(A - B)) if (-1)^N·h′ ∈ [-1 + 2B, 1 - 2A] , (14) for each (A, B, Q, N) ∈ K := ]0, 1[³ × Z/2Z, with the notation Q′ = (1 - Q(1 - B))/(1 - A). The proof uses the 3-term partition of the 2-torus defined in section 2 of [START_REF] Caglioti | On the distribution of free path lengths for the periodic Lorentz gas III[END_REF], following the work of [START_REF] Blank | Thom's problem on irrational flows[END_REF]. For ω = (cos θ, sin θ) with arbitrary θ ∈ R, the map h′ → T_r(h′, ω) is computed using Proposition 2.1 in the following manner. Set θ̃ = θ - m·π/2 with m = [(2/π)(θ + π/4)] and let ω̃ = (cos θ̃, sin θ̃). Then T_r(h′, ω) = (s, h), where (s, sign(tan θ̃)·h) = T_r(sign(tan θ̃)·h′, ω̃) . (15) 3. The Boltzmann-Grad limit of the transfer map T_r The formulas (11) and (12) defining A, B, Q, N mod. 2 show that these quantities are strongly oscillating functions of the variables ω and r. In view of Proposition 2.1, one therefore expects the transfer map T_r to have a limit as r → 0+ only in the weakest imaginable sense, i.e. in the sense of Young measures (see [START_REF] Tartar | Compensated compactness and applications to partial differential equations[END_REF], pp. 146-154 for a definition of this notion of convergence). The main result in the present Note is the theorem below. It says that, for each h′ ∈ [-1, 1], the family of maps ω → T_r(h′, ω) converges as r → 0+ and in the sense of Young measures to some probability measure P(s, h|h′) ds dh that is moreover independent of ω. Theorem 3.1 For each Φ ∈ C_c(R*_+ × [-1, 1]) and each h′ ∈ [-1, 1], Φ(T_r(h′, ·)) → ∫_0^∞ ∫_{-1}^{1} Φ(s, h)·P(s, h|h′) ds dh in L^∞(S¹_ω) weak-* as r → 0+ , (16) where the transition probability P(s, h|h′) ds dh is the image of the probability measure on K given by dµ(A, B, Q, N) = (6/π²)·1_{0<A<1}·1_{0<B<1-A}·1_{0<Q<1/(2-A-B)}·(dA dB dQ)/(1 - A)·(δ_{N=0} + δ_{N=1}) (17) under the map K ∋ (A, B, Q, N) → T_{A,B,Q,N}(h′) ∈ R_+ × [-1, 1]. Moreover, P satisfies: (s, h, h′) → (1 + s)·P(s, h|h′) is piecewise continuous and bounded on R_+ × [-1, 1] × [-1, 1], and P(s, h|h′) = P(s, -h|-h′) for each h, h′ ∈ [-1, 1] and s ≥ 0. (18) The proof of (16)-(17) is based on the explicit representation of the transition map in Proposition 2.1 together with Kloosterman sums techniques as in [START_REF] Boca | The distribution of the free path lengths in the periodic two-dimensional Lorentz gas in the small-scatterer limit[END_REF].
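To make Proposition 2.1 easy to experiment with numerically, the sketch below transcribes formulas (9)-(14) into Python. It generates the errors d_n and denominators q_n by the standard Euclidean recursion (equivalent to (10)), stops at N(α, r), forms A, B, Q with the integer-part reading of k(α, r) adopted above, and evaluates the three-branch map (14). The function names are ours and the code assumes r is small enough that N(α, r) ≥ 1; it is an illustrative transcription, not the authors' implementation.

    import math

    def transfer_parameters(alpha, r):
        """(A, B, Q, N mod 2) of Proposition 2.1 for an irrational slope alpha in (0,1)."""
        eps = 2 * r * math.sqrt(1 + alpha * alpha)
        d_prev, d_cur = 1.0, alpha        # d_0 = 1, d_1 = alpha
        q_prev, q_cur = 0, 1              # q_0 = 0, q_1 = 1
        n = 1
        while d_cur > eps:                # N(alpha, r): first n with d_n <= eps
            a = math.floor(d_prev / d_cur)
            d_prev, d_cur = d_cur, d_prev - a * d_cur
            q_prev, q_cur = q_cur, a * q_cur + q_prev
            n += 1
        k = -math.floor((eps - d_prev) / d_cur)      # integer part, formula (11)
        A = 1 - d_cur / eps                          # formula (12)
        B = 1 - (d_prev - k * d_cur) / eps
        Q = eps * q_cur
        return A, B, Q, n % 2

    def transfer_map(h_prime, A, B, Q, N):
        """Asymptotic obstacle-to-obstacle map T_{A,B,Q,N} of formula (14)."""
        sgn = 1 if N % 2 == 0 else -1
        Q2 = (1 - Q * (1 - B)) / (1 - A)             # Q' of Proposition 2.1
        x = sgn * h_prime
        if x > 1 - 2 * A:
            return Q, h_prime - 2 * sgn * (1 - A)
        if x < -1 + 2 * B:
            return Q2, h_prime + 2 * sgn * (1 - B)
        return Q2 + Q, h_prime + 2 * sgn * (A - B)

    A, B, Q, N = transfer_parameters(math.sqrt(2) - 1, r=1e-3)
    print(A, B, Q, N, transfer_map(0.2, A, B, Q, N))

For a fixed h′, scanning over many directions ω (equivalently many slopes α) at small r and histogramming the output pairs (s, h) gives a direct numerical illustration of the Young-measure convergence stated in Theorem 3.1.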
The explicit formula for the transition probability P is very complicated and we do not report it here; however, it clearly entails the properties (18). 4. The Boltzmann-Grad limit of the Lorentz gas dynamics For each r ∈ ]0, 1/2[, denote by dγ^+_r(x, ω) the probability measure on Γ^+_r that is proportional to ω · n_x dx dω. This probability measure is invariant under the billiard map B_r : Γ^+_r ∋ (x, ω) → B_r(x, ω) = (x + τ_r(x, ω)ω, R[x + τ_r(x, ω)ω]ω) ∈ Γ^+_r . (19) For (x_0, ω_0) ∈ Γ^+_r, set (x_n, ω_n) = B^n_r(x_0, ω_0) and α_n = min(|ω_{n,1}/ω_{n,2}|, |ω_{n,2}/ω_{n,1}|) for each n ≥ 0, and define b^n_r = (A(α_n, r), B(α_n, r), Q(α_n, r), N(α_n, r) mod. 2) ∈ K for each n ≥ 0. (20) We make the following asymptotic independence hypothesis: for each n ≥ 1 and each Ψ ∈ C([-1, 1] × K^n), (H) lim_{r→0+} ∫_{Γ^+_r} Ψ(h_r, ω_0, b^1_r, ..., b^n_r) dγ^+_r(x_0, ω_0) = ∫_{-1}^{1} (dh′/2) ∫_{S¹} (dω_0/2π) ∫_{K^n} Ψ(h′, ω_0, β_1, ..., β_n) dµ(β_1)...dµ(β_n) . Under this assumption, the Boltzmann-Grad limit of the Lorentz gas is described by a kinetic model on the extended phase space R² × S¹ × R_+ × [-1, 1], unlike the Lorentz kinetic equation (4), which is set on the usual phase space R² × S¹. Theorem 4.1 Assume (H), and let f^in be any continuous, compactly supported probability density on R² × S¹. Denoting by R[θ] the rotation of an angle θ, let F ≡ F(t, x, ω, s, h) be the solution of (∂_t + ω·∇_x - ∂_s)F(t, x, ω, s, h) = ∫_{-1}^{1} P(s, h|h′)·F(t, x, R[π - 2 arcsin(h′)]ω, 0, h′) dh′ , F(0, x, ω, s, h) = f^in(x, ω)·∫_s^∞ ∫_{-1}^{1} P(τ, h|h′) dh′ dτ , (21) where (x, ω, s, h) runs through R² × S¹ × R*_+ × ]-1, 1[. Then the family (f_r)_{0<r<1/2} defined in (3) satisfies f_r → ∫_0^∞ ∫_{-1}^{1} F(·, ·, ·, s, h) ds dh in L^∞(R_+ × R² × S¹) weak-* as r → 0+ . (22) For each (s_0, h_0) ∈ R_+ × [-1, 1], let (s_n, h_n)_{n≥1} be the Markov chain defined by the induction formula (s_n, h_n) = T_{β_n}(h_{n-1}) for each n ≥ 1 , together with ω_n = R[2 arcsin(h_{n-1}) - π]·ω_{n-1} , (23) where the β_n ∈ K are independent random variables distributed under µ.
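Assuming the reconstruction of (17) above, µ can be sampled exactly with elementary draws, which makes it straightforward to simulate the Markov chain (23) and, for instance, to tabulate the transition density P(s, h|h′) empirically. In the sketch below the hierarchical sampling scheme (rejection for A, inverse transform for B given A, a uniform draw for Q, and a fair coin for N) is our own derivation from the density in (17), and transfer_map refers to the sketch given after Proposition 2.1; the whole snippet is illustrative rather than the authors' code.

    import math, random

    def sample_mu():
        """Draw (A, B, Q, N) from the measure mu of formula (17)."""
        while True:                          # marginal density of A ~ ln(2-A)/(1-A), bounded by 1
            A = random.random()
            if random.random() <= math.log1p(1 - A) / (1 - A):
                break
        B = (2 - A) - (2 - A) ** (1 - random.random())   # conditional density of B ~ 1/(2-A-B) on (0, 1-A)
        Q = random.uniform(0.0, 1.0 / (2 - A - B))       # Q uniform given (A, B)
        N = random.randint(0, 1)                          # equal weights on N = 0 and N = 1
        return A, B, Q, N

    def markov_chain(h0, n_steps):
        """Iterate (s_n, h_n) = T_{beta_n}(h_{n-1}) of formula (23)."""
        h, out = h0, []
        for _ in range(n_steps):
            s, h = transfer_map(h, *sample_mu())
            out.append((s, h))
        return out

    random.seed(0)
    for s, h in markov_chain(h0=0.0, n_steps=5):
        print(f"s = {s:.3f}   h = {h:+.3f}")

Histogramming many draws of transfer_map(h′, *sample_mu()) for a fixed h′ approximates P(·, ·|h′) without ever writing its explicit formula, and iterating the chain provides the impact parameters and rescaled free path lengths used to build the jump process appearing in the proof of Theorem 4.1.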
17,038
[ "843086", "838357" ]
[ "92732", "18", "25" ]
01762898
en
[ "info" ]
2024/03/05 22:32:13
2017
https://inria.hal.science/hal-01762898/file/463502_1_En_39_Chapter.pdf
Miroslava Černochová email: [email protected] Tomáš Jeřábek email: [email protected] DIYLab as a way for Student Teachers to Understand a Learning Process Keywords: Digital Literacy, DIY, DIYLab Activity, Visualisation of Learning Process, Student Teacher The authors introduce their experiences gained in the EU project Do It Yourself in Education: Expanding digital literacy to foster student agency and collaborative learning (DIYLAB). The project was aimed to design an educational procedure based on DIY philosophy with a student-centred and heuristic approach to learning focused on digital literacy development and later to verify it in teaching practice in primary and secondary schools and HEIs in Finland, Spain and the Czech Republic. In the Czech Republic the project DIYLAB was realized as a teaching approach in initial teacher education with Bachelor and MA degrees for ICT, Biology, Primary Education and Art Education student teachers. DIYLab activities represented occasions for student teachers to bring interesting problems related to their study programmes and also their after-school interests. An integral part of DIYLab activities was problem visualisation using digital technology; visual, film, animation, etc. served as a basis for assessing both pupils' digital competence and their problem-solving capability. The DIYLab have influenced student teachers' pedagogical thinking of how to develop pupils' digital literacy and to assess digital literacy development as a process and not as a digital artefact. Following the project, the DIYLab approach is being included in future Bachelor and MA level initial teacher education with the aim to teach student teachers (1) to design DIY activities for digital literacy development supported inter-disciplinary relations in school education, and (2) to use digital technology to oversee and assess learning as a process. Introduction Since 2006, ICT has been included in the curriculum as compulsory for primary, lower and upper secondary education in the Czech Republic with the aim of developing digital literacy and the use of ICT skills so that pupils are enabled to use standard ICT technology. In schools there is, however, a tendency for ICT lessons to focus on outcomes which can be produced by basic formats using a typical office suite (for example, using a word processor, spreadsheet and presentation program) and on the ability to search and to process information (primarily via the Internet). Some pupils and HEI students manifest their discontentment against how digital technology is used in schools. They show teachers they are more experienced in using digital technology than their teachers. The 2006 ICT curriculum is a thing of the past. It does not sufficiently reflect new and advanced technology and the need to implement innovative teaching approaches in digital literacy development which emphasises the educational potential for new ways and strategies of learning, not just user skills. Thus the Czech government approved in 2014 the Strategy of Digital Education ( [START_REF] Moeys | Strategie digitálního vzdělávání[END_REF]) that would contribute, among other things [START_REF] Bergold | Participatory Research Methods: A Methodological Approach in Motion [110 paragraphs[END_REF] to create conditions for the development of the digital literacy and computational thinking of pupils, and (2) to create conditions for the development of the digital literacy and teachers' computational thinking. 
The DIYLab, being grounded in the DIY (do-it-yourself) philosophy, is an example of an innovative approach to education which worked in schools and enabled the development of digital literacy. Pupils at all levels were at first cautious but were attracted to the idea and motivated to learn. Teachers were empowered by coming to understand another strategy for enabling students' learning.

The DIY Concept

The concept of DIY is not totally new; it can be traced, for example, to the development of amateur radio as a hobby. The DIY movement has developed and spread step by step into different branches (technical education, art, science, etc.). It has common features: it brings together enthusiastic people with the same aims and interests to solve interesting problems in their field in a creative way and to share "manuals" on how to proceed or how "you can do it yourself". Globally, there is a generation of DIY enthusiasts and supporters who join various communities or networks. There is little that limits the activities of this generation of creative and thoughtful people; if they need to know something in order to realise their DIY ideas, they learn it from one another. The DIY generation very often uses ICT for its creative initiatives, and it visualises stories that document the process of solving problems so that they can be shared as tutorials by others. Freedom to make and create using ICT is perceived as freedom of access and of choice of tools and technology, and as a release from reliance on specific software and hardware; it means using a variety of resources, making copies, and sharing outcomes and methods.

Implementation of DIY into Education

Applying DIY in schools means enabling pupils to bring interesting ideas from the extra-curricular environment into school and creating conditions for their exploration; it means placing them within the school curriculum and (re)arranging circumstances for collaboration and the sharing of experiences, in a way similar to how scientists and experts might work. Through such processes pupils can use their knowledge and skills from different subjects and interests, so they keep discovering new things and interdisciplinary contexts and connections (Sancho-Gil et al., Envisioning DIY learning in primary and secondary schools). In such activities pupils organise themselves, their procedures and their processes; principles of autonomous learning are thus put into practice. According to Y. Kafai and K. Peppler (Youth, Technology, and DIY: Developing Participatory Competencies in Creative Media Production), DIY activities can be incorporated into programming, designing models, constructing robots, and creating manuals (tutorials) on how to do or how to learn something (for example, how to count using an abacus). Thus, DIY can potentially contribute to further mastery in the use of digital technology and consequently improve digital literacy. DIY in school contexts corresponds with the concept of learning as "the natural, unstoppable process of acquiring knowledge and mastery" and with the awareness that "the vast majority of the learning in your life doesn't happen when you're a kid in school" ([6, p. 22]).

This article focuses on DIYLab activities implemented through the DIYLab project in teacher education at the Faculty of Education, but it also includes an example from a school, since one of the participating teachers was also a student teacher at the Faculty.
First, it was necessary to map the ideas of student teachers and teacher educators about whether, why and how to bring topics from students' outside activities into their study programme. The student teachers were expected to contribute their own interesting problems related to their lives or hobbies, but unfortunately the majority of them did not come with any initiative of their own and waited for a task formulated by their teachers. It emerged that the student teachers who participated in DIYLab had not been used to bringing their extra-curricular interests, hobbies or expertise into university study. After that, it was necessary to specify a framework and the key features of the DIY activities which were subsequently carried out by the student teachers.

A Model of DIYLab Activity

Imperative in the practice of DIYLab is that all who apply the DIY idea in their activities endeavour to share with the outside world how they proceeded and how they solved a problem. They develop tutorials which visually (using movies, animations, etc.) document the process, explaining how problems were solved and what was learned. This means of transmitting to others how to proceed may be perceived as the author's self-reflection on his/her learning. Story-telling, a narrative assemblage, is a very important attribute of the DIY creation process ([4, p. 300]). Publishing a procedure or a manual on how to create or produce something, or on how something was made, can help others to produce something similar; it can help others to learn new methods or to create something completely new and original. The concept of DIY aligns with the experiences of young people who point out that in schools "we miss so much of the richness of real learning, which relies on failure, trial and error, getting to know people, and reaching for things you didn't think were possible" ([6, p. 75]).

Key features of DIYLab activities

A model of DIYLab activities was based on the DIY philosophy, which is a student-centred, heuristic approach to learning and problem-solving and which implies six pedagogical principles for approaches to learning (Table 1).

Table 1. Six pedagogical principles for the design of DIYLab activities (feature of the DIYLab activity; idea; authors and resources):
(1) To support collaborative learning. Members of DIY communities collaborate mutually; young people "voluntarily spend a lot of time in intense learning, they tackle highly technical practices, including film editing, robotics, and writing novels among a host of other activities across various DIY networks" (Kafai and Peppler).
(2) To have the characteristics of inquiry-based teaching and learning methods. DIY communities dedicate their time to original problems which have not yet been solved and which differ from traditional school tasks.
(3) To support transdisciplinary knowledge. Pupils are enabled to bring interesting ideas from the extra-curricular environment into school, and conditions are created for their exploration; if pupils have an interesting problem to solve, they do not worry about which school subject it relates to (J. Sancho et al. [10]).
(4) To contribute to autonomous / self-regulated learning. Documenting how to proceed for others may be perceived as the author's self-reflection on his/her learning; DIY communities enjoy finding solutions, "building new tools and paths to help all of us learn".
(5) To develop photo-visual digital thinking skill as a component of digital literacy (Eshet-Alkalai [3]).
(6) To be connected with the curriculum: the school curriculum or the study programme for HEI students.

A key aim of DIY activities, beyond solving a problem, is to provide a manual on how to solve the problem. This "handbook" is then published in a form which can be shared with others; the easiest form to understand is a visual one (e.g., video or animation). The requirement to visualise the learning process, i.e. "how the problem can be solved", as a message for others follows from several reasons. Firstly, visual tools are normally comprehensible regardless of which language we speak. Secondly, we are all, student teachers included, increasingly surrounded by visual stories (e.g.
YouTube videos, or animated instructions for passengers on how to behave during a flight). The skills required to use digital technology for visualisation correspond fully to the concept of digital literacy defined by Y. Eshet-Alkalai ([3, p. 93]): "digital literacy involves more than the mere ability to use software or operate a digital device; it includes a large variety of complex cognitive, motor, sociological, and emotional skills, which users need in order to function effectively in digital environments." Eshet-Alkalai's conceptual model of digital literacy consists of five digital literacy thinking skills, including the "photo-visual digital thinking skill: Modern graphic based digital environments require scholars to employ cognitive skills of 'using vision to think' in order to create photo-visual communication with the environment. This unique form of digital thinking skill helps users to intuitively 'read' and understand instructions and messages that are presented in a visual-graphical form, as in user interfaces and in children's computer games." ([3, p. 93])

Specification of a Research Field

The implementation of the DIYLab project in teacher education was an opportunity to focus on the development of student teachers' pedagogical thinking, primarily to enrich it with an innovative didactic approach to digital literacy development and with a better understanding of the learning process. The DIYLab project was expected to answer questions such as how far student teachers are capable of visualising their learning, which types of visual description (narration) they would produce, and how difficult it is for them to visualise their learning process. The student teachers had not been used to considering why and how to visualise a learning process, although they had studied learning theory in their pedagogy and psychology courses. It was therefore expected that they would find visualising their learning processes challenging, because they had never undertaken such a pedagogical task. During their HEI studies they reflect on didactic situations, teaching processes and learning mainly in oral or written form, not in a visual manner.

Research Methodology

Participatory Action Research (PAR) was the research methodology adopted (Bergold, Participatory Research Methods: A Methodological Approach in Motion; Kemmis, Exploring the relevance of critical theory for action research: Emancipatory action research in the footsteps of Jürgen Habermas; Kemmis, Participatory action research: Communicative action and the public sphere), since it allowed active engagement, intervention and the opportunity for participant observation. The approach was also consonant with the democratic values implicit in the DIY philosophy stated above. The impact of the DIY approach on teacher education was studied using qualitative research methods (focus groups, questionnaire surveys, interviews, observations and analyses of student teachers' DIY outcomes). Teacher educators evaluated not only the originality of student teachers' DIYLab activity procedures, but also how far these DIY activities corresponded to the six pedagogical principles (see Fig. 1) and to what extent the student teachers managed to visualise a process and their ways of thinking and learning.
Characteristics of the student teachers who participated in DIYLab activities

From January 2015 to January 2016, 192 part-time and full-time student teachers (aged at least 20 years) and eight teacher educators from four departments of the Faculty of Education (IT and Technical Education, Art Education, Biology and Environmental Studies, and Primary Education) were introduced to the DIY philosophy within compulsory courses focused on pedagogy, ICT education, computing education, biology, educational technology, multimedia, etc.

Analysis of Some DIYLab Activities Performed by Student Teachers

The student teachers worked on 16 themes for DIYLab activities and produced, within one semester, 81 digital outcomes of varying quality and content: Multimedia project (6 DIY digital objects / 11 students), Design of Android applications (4/11), Little dances with Scratch (4/12), Collection of examples of problems which a human cannot solve without using a computer (6/28), Contemporary trends in WWW pages development (5/9), Teaching learning object development (4/12), Wiki of teaching activities (1/8), Educational robotics project (6/16), Anatomy and morphology of plants (5/20), Biological and geological technology - field trips (2/3), How I'm becoming a teacher (17/23), Animated stories (11/13), and Teaching with tablets (10/26). Some of them were published on the HUB (hub.diylab.eu).

Ways in which the DIYLab activities met the defined requirements

(1) Collaborative learning. The collaborative approach to DIYLab activities was the most irregular one, depending on the particular process and the students involved. For the part-time student teachers, who live and work in different parts of the Czech Republic and only meet during classes at the Faculty of Education, collaboration and co-operation with fellow students are more fitting and appropriate than for full-time students. Some DIYLab activities were extremely specialised, and their tasks had to be solved individually.

(2) Inquiry-based teaching and learning. For the student teachers, DIYLab activities were not the routine tasks usually assigned in seminars. In some cases the student teachers faced technological problems (see Building Android apps, or the specific solution for Installing a camera in a birdhouse for the subject Multimedia Systems); in other cases they faced more theoretical didactic problems (see Collection of examples of problems which a human cannot solve without using a computer).

(3) Trans-disciplinary knowledge. Almost all DIYLab activities had a trans-disciplinary overlap. In some cases the trans-disciplinary co-operation became obvious only thanks to the DIYLab project and had an impact on forming student teachers' professional competence of self-reflection (e.g. How I'm becoming a teacher). Nearly thirty per cent (28.5%) of ICT student teachers stated that in their DIYLab activity they did not use knowledge from other subjects; where knowledge from other subjects was required, it was mostly from physics, mathematics, English, geography, medicine, cinematography or computer science, and rarely from biology, chemistry or art. Nevertheless, they very much appreciated the opportunity to collaborate with students of other study specialisations and, from their point of view, they learned a great deal from such collaboration.

(4) Autonomous / self-regulated learning. The dimensions of independent learning and self-regulation underpinned the whole process and were actively promoted, taking due account of the diversity of the students and their willingness to learn by new means. The student teachers appreciated the DIYLab approach to learning from two perspectives: they learned (a) another approach to solving a problem, and (b) how to properly lay out their work and how to visualise and organise tasks in order to find solutions.

(5) Digital literacy improvement / digital competence. In carrying out the DIYLab activities the student teachers worked with quite a narrow range of hardware and software, largely determined by the technical equipment available at the Faculty of Education or by the resources available through their respective Bachelor's and Master's degree studies. Most of the student teachers involved in DIYLab activities were ICT students.
In general, in the case of the ICT student teachers it was virtually impossible to determine any improvement or progress in their digital literacy. Judging from the outputs, the students mostly used video, presentation and text editors. For ICT student teachers, the majority of DIY activities were only an opportunity (sometimes a routine one) to apply their digital literacy skills to solving problems, whereas for Art and Biology Education student teachers the DIY activities contributed distinctly to the improvement of their skills in using digital technology; as a result of their involvement in DIYLab they learned to create animations, etc. The DIY activities deepened ICT student teachers' didactic thinking about the role and possibilities of digital technology in education; in addition, the ICT students helped their Art Education peers to create animations and helped the Biology student teachers to design a technological solution and to install a camera in a birdhouse.

(6) Connection to study programmes / curriculum. The student teachers carried out their DIYLab activities during one semester as part of their final work, with the aim of gaining credits and grades. Each DIYLab outcome consisted of two main parts: (i) a product as the solution of the problem (e.g. a software application, a set of 3D tools, models, a database, a mechanical drawing, an electric circuit, a robot), and (ii) a digital object (e.g. a video, movie or animation) which visualises the process, demonstrating how the student teachers progressed, how they tackled the problem and how they managed to complete the DIYLab activity.

Fig. 1 shows the results of a questionnaire survey on how teacher educators evaluated their DIYLab activities against the six pedagogical principles. From this evaluation the following average values for each item were derived: contribution to autonomous / self-regulated learning (4.8), digital competence improvement (4.4), connection with the curriculum (4.3), support for trans-disciplinary knowledge (3.9), inquiry-based teaching and learning (3.6), and support for collaborative learning (3.1). The majority of problems solved in DIYLab were not characteristic inquiry-based problems. Teacher educators invested a lot of time in helping student teachers develop a DIYLab idea; it was not always easy for them to motivate their students to bring their own projects, and students seemed afraid to step into new territory. For some student teachers, the main motivation to carry out their DIY activities lay in getting credits, not in solving problems. In part-time study there was not much time for defining and understanding a problem suitable for inquiry-based teaching and inter-disciplinary links. Potentially, however, this had an advantage, since it may have contributed to increased online collaboration between students and to an increase in collaborative learning. Several factors (the teacher educator, the student, the problem solved, the study specialisation, motivation, experience, etc.) influenced the way in which particular pedagogical principles were accomplished in each individual DIYLab activity (Fig. 1).

Examples of DIYLab activities carried out by Bc degree student teachers

The student teachers on ICT Bachelor Studies' courses counted on their teachers to assign them a topic. Although some of them work in computer companies or specialise in some aspect of computing, they rarely came up with their own proposals. When they did have ideas for DIYLab activity topics, these were related to their hobbies (e.g. diving, gardening, theatre).
Some of them were surprised that they had to do something linking knowledge and experience from different branches or disciplines. For example, one student who was interested in scuba diving proposed the project Diver's LogBook (see http://hub.diylab.eu/2016/01/27/diverslogbook/). Another student, who is part of the theatre group Kašpárek and Jitřenka, decided to initiate a project entitled Database Development - database of theatre ensembles (http://hub.diylab.eu/2016/01/27/database-development-database-oftheatre-ensembles/). Bc. student teachers of Biology who studied the life of birds in a nesting box directed their activities towards the project Bird House (http://hub.diylab.eu/2016/01/27/bird-house/). In courses focused on digital technology, one ICT student looked for a solution to How to create an animated popup message in Adobe After Effects.

Bachelor student teachers were not used to thinking about what and how they had learned, much less about how to visualise their own learning process. They did not consider thinking about learning and reflecting on a DIYLab activity to be "professional". Unlike Art Education students, the ICT student teachers are advanced in digital technology, but they lack the knowledge and skills to observe and to visually display and present processes. Bachelor ICT student teachers very often reduced the visualisation of their DIYLab procedure to a set of screenshots. They viewed DIYLab only from the technological point of view and in terms of the extent to which software and hardware were applied. Generally, it was very difficult for Bc. student teachers to visualise their learning in DIY activities. They were not particularly interested in the pedagogical concept of learning and how to visualise its process, because Bc. study programmes focus mainly on acquiring knowledge in particular branches (Biology, ICT, Art, etc.) rather than on understanding the learning process involved.

Examples of DIY activities carried out by MA degree student teachers

The MA degree student teachers carried out their DIYLab activities predominantly within didactic subjects or courses making limited use of technology. The majority of them were part-time students who work in schools as unqualified teachers of ICT or Informatics subjects, and so most of them tried to apply the DIYLab idea in their teaching with their own pupils. MA student teachers thought about and mediated the topics and the purpose of DIYLab activities more deeply than Bachelor-level students, mainly because they carried out their DIYLab activities primarily in courses focused on didactic aspects and contexts. MA student teachers elaborated some general themes proposed by their teacher educators. The requirement to record and visualise a learning process did not surprise the MA student teachers; they understood how important it is, from a pedagogical point of view, to visualise a learning process, and that data taken from such visualisations can help teachers to better understand the learning outcomes of their pupils. However, they had no experience of the process of visualisation. Similarly to the Bachelor-level students, they very often reduced the visualisation of a DIYLab procedure to a set of screenshots. A few of them made an animation of their way of thinking about the DIYLab activity (e.g., Problems which a human cannot solve without using a computer: tomography). Some of them made a tutorial (An animated story about a small wizard, https://www.youtube.com/watch?v=QA1skX4GiBI).
Some of them developed a methodical guide on how to work with pupils (DIY_Little Dances in Scratch - Start to move_CZ, http://hub.diylab.eu/2016/01/11/little-dances-in-scratch-start-tomove/diy_little-dances-in-scratch-start-to-move_cz/), and some created a comic strip.

Examples of DIY activities carried out by pupils and completed in lessons managed by ICT student teachers on school practice

Some part-time ICT student teachers decided to apply DIYLab to their class teaching in schools, where their pupils carried out similar DIY activities. All these experiences from schools demonstrate the pupils' great enthusiasm and motivation to learn and to solve problems related to their after-school activities, through which they develop their digital literacy. For example, a girl (aged 15) enjoys recording and editing digital sounds in her free time. She designed a DIYLab activity as a sound story-telling about a boy who would like to meet his girlfriend (https://www.youtube.com/watch?v=a8TzZCAzxKo). She describes how she produced the story-telling in a movie in which she explains what she did, how she collected the sounds and which software applications she used in her work (https://www.youtube.com/watch?v=jbSID9_B72k).

Conclusions

Although the EU project has now ended, the Faculty of Education will continue DIYLab activities as a compulsory assignment and an integrated component of courses in the Bachelor degree ICT study programme and for full-time and part-time student teachers in the MA degree ICT study programme. Great attention will be given to (i) ways of motivating student teachers to choose appropriate topics from their after-school interests and hobbies and to design DIY activities for inquiry-based learning; (ii) methodological approaches to visualising the learning process in Informatics, Computing or ICT subjects, where diaries, scenarios, process-folios and log books used in Art Education or in technically and technologically oriented branches will serve as an inspiration for ICT teacher education; and (iii) ways of supporting close interdisciplinary collaboration among student teachers and teacher educators.

The challenge in the Czech context is to change the culture from teacher-dependency to students as independent, autonomous learners in the classroom. Creativity in content, methods and pedagogy is an absolute requirement for achieving this goal. The DIYLab project showed differences and limits in the culture of approaches to creativity from a pedagogical point of view: if we compare DIY learning in educational practice in Prague with approaches to creative learning in Barcelona, a popular place for international creative artists and DIY communities, then in the Czech context DIYLab will need much longer to break free from a bureaucratic concept of teaching and of the assessment of learning outcomes. The evaluative criteria used to frame the DIY process, and the parameters applied, enabled the analysis and could support the design work and thinking of teachers considering the use of the DIY method.

Fig. 1. Teacher educators' evaluation of their DIYLab activities, using a scale of 0-5 with 0 being no accomplishment and 5 being maximal accomplishment. (Source: [2])